Fake news, conspiracies, manipulated media, selective reporting, facts derided as lies: citizens of democracies encounter wildly conflicting information about the supposedly factual world. Think of Pizzagate, a well-known and extensively discredited conspiracy theory about an abuse circle run by high-ranking Democratic Party officials. Or of the recent 2020 United States presidential election, where most Republican voters said they believed that Trump won and the official results were fraudulent. COVID-19 has produced its own myriad of false beliefs, for example that the virus was engineered in China. Even outright denials of the pandemic or other complex phenomena such as climate change are not uncommon. The borders between fact and opinion are blurring, and the prospects of deliberative democracy and collective action become increasingly bleak.
Of course, truth has always been precarious: neither misinformation nor widespread false beliefs are unique to our time. But the web and social media technologies have distinctly rearranged the struggles over truth and falsehood. Through their unfiltered publication processes, breakneck pace, and obsession with engagement (rather than accuracy), modern web platforms have created not only decentralized modes of expression but also novel vectors for misinformation and disinformation. We therefore need new tools, concepts, and technologies to ensure the integrity of information ecosystems. This workshop aims to bring together leaders across industry and academia to discuss what tools for measurement and mitigation could look like.
We invite researchers and practitioners to submit extended abstracts or short papers. Submissions are 2 to 4 pages in length, plus unlimited pages of references. Authors can choose to make their submission archival or non-archival.
Submissions are 2-4 page PDFs submitted via EasyChair and should follow the AAAI format. Authors may include an appendix, but submissions should be self-contained, as reviewers are not asked to review appendices. Review will be blind, and submissions should be reasonably anonymized. Accepted submissions will have the option of publishing their work in the proceedings of ICWSM 2021. For any questions, email zive@mit.edu.
Topics that submissions might address include, but are not limited to:
We encourage submissions from across the political spectrum and welcome work treating information credibility from multiple perspectives.
Workshop paper submission: April 10, 2021 (extended from March 27, 2021)
Workshop paper acceptance notification: April 21, 2021
Workshop final camera-ready paper due: May 1, 2021
Workshop will take place June 7, 2021
Time (EST) | Section | Activity | Title | Presenters / Authors
---|---|---|---|---
2:00 PM | | Introductions | | Maurice Jakesch, Manon Revel, Ziv Epstein
2:30 PM | 1. Understand | Keynote 1 | News from The Election Integrity Partnership | Renee DiResta
3:00 PM | | Papers 1 | Disambiguating Disinformation: Extending Beyond the Veracity of Online Content | Keeley Erhardt and Alex Pentland
3:10 PM | | | The Firestarting Troll, and Designing for Abusability | Andrew Beers, Sarah Nguyen, Maya Sioson, Mariam Mayanja, Monica Ionescu, Emma Spiro and Kate Starbird
3:20 PM | | | A Contextual Inquiry of The International Fact-Checking Network and Factuality on Social Media | Tarunima Prabhakar and Anushree Gupta
3:30 PM | | Break | |
3:40 PM | 2. Measure | Keynote 2 | Understanding and Reducing Misinformation Online | David Rand
4:10 PM | | Papers 2 | Fooled Twice – People Cannot Detect Deepfakes But Think They Can | Nils Köbis, Soraperra Ivan and Barbora Dolezalova
4:20 PM | | | Responsible algorithmic filtering on social media and its cost | Sarah Cen and Devavrat Shah
4:30 PM | | | A Content-based Approach for the Analysis and Classification of Vaccine-related Stances on Twitter: the Italian Scenario | Marco Di Giovanni, Lorenzo Corti, Silvio Pavanetto, Francesco Pierri, Andrea Tocchetti and Marco Brambilla
4:40 PM | | Break | |
4:50 PM | 3. Mitigate | Papers 3 | Introducing Credibility Signals and Citations to YouTube | Emelia Hughes, Renee Wang, Prerna Juneja, Tanushree Mitra and Amy Zhang
5:00 PM | | | Towards a Unified Framework for the Design & Development of Digital News Credibility Tools | James Stomber, Dilrukshi Gamage, Bill Skeet, and Amy X. Zhang
5:10 PM | | | The News Evaluator's Self-Assessment Tool: Training Adolescents in Civic Online Reasoning | Thomas Nygren and Carl-Anton Werner Axelsson
5:20 PM | | Group activity | | Maurice Jakesch, Manon Revel, Ziv Epstein
5:50 PM | | Wrapping up | | Maurice Jakesch, Manon Revel, Ziv Epstein
Maurice Jakesch is an Information Science Ph.D. candidate at Cornell University and a fellow at the Cornell Tech Digital Life Initiative and the German National Academic Foundation. He studies how people reason about credibility and authenticity in a networked society. He explores how and why judgments of AI-generated content and political news are often misguided.
Manon Revel is a Social and Engineering Systems Ph.D. candidate at MIT and a Hammer Fellow. Manon studies news credibility and the trust crisis of journalism. She also works on political behaviors and electoral systems. She created and led web radio programs and is passionate about supporting and enhancing journalism's quality.
Ziv Epstein is a Ph.D. student at the MIT Media Lab. His work integrates aspects of design and computational social science to understand and build cooperative systems. He focuses on new challenges and opportunities that emerge from a digital society, particularly in the domains of social media and artificial intelligence.