Harnessing Input From Citizens and Generative Models to Combat Disinformation

The following article was written partly by ChatGPT-4 and edited by me, Ahmed Medien. It is drawn from, and inspired by, my keynote speech at the 2024 TNGO Symposium, “The State of Global Affairs and Misinformation,” delivered on March 2, 2024.

Note: The word “citizen” throughout this article does not refer to the legal definition of citizenship, but simply to a person or people.

As we approach the midpoint of 2024, nearly a decade has passed since a diverse group of experts, including behavioural scientists, social anthropologists, social scientists, and computational scientists, united in the effort to tackle the scourge of misinformation. Despite their dedicated work and policy recommendations, it is difficult to claim that their collective outcry has produced a meaningful shift in the public’s trust in institutions. If anything, it may have inadvertently fueled further distrust.

The landscape of information has become increasingly fragmented, leading to a proliferation of information ecosystems and platforms. This fragmentation has made cultural consensus more elusive than ever before. No longer can we expect individuals to be seamlessly integrated into a universally accepted understanding of facts or truth.

The challenges posed by the online information environment demand creative approaches and a collaborative spirit among informed citizens. It’s essential for all people to engage in shaping policies that they perceive as fair, just, and equitable.

One promising avenue for addressing the proliferation of harmful misinformation lies in the concept of citizen science. This approach, combined with the rising capabilities of generative foundation models, holds potential for developing sophisticated strategies to combat disinformation.

Citizen Science: A Collaborative Response to Misinformation

Citizen science is not a new concept, but its application to tackling disinformation could be a groundbreaking approach to resolving policy gridlock: bridging the gap between the sanctity of freedom of speech and assembly on one side and, on the other, protecting users as they are exposed to and participate in ‘safe’ digital spaces. This participatory approach to research involves the general public in scientific inquiry and problem-solving. It empowers individuals to contribute to large-scale projects, offering diverse perspectives that can lead to more democratic and inclusive solutions.

A notable example of citizen science in action was a citizens’ dialogue held in Tunisia in 2020 addressing complex topics such as internet governance, artificial intelligence, and the governance of online speech, including misinformation and dissent. The event was part of a global initiative undertaken by over 60 countries, highlighting the universal need for a coordinated response to the challenges posed by digital misinformation.

The Tunisian dialogue was one application of citizen science in creating a platform for diverse and, indeed, “all” voices. “All” here means that representation was not weighted. As the lead of the dialogue, however, I employed a methodology that ensured broad and representative participation. This was achieved through targeted digital advertising and partnerships with community organizations, ensuring the inclusion of participants with disabilities and from various backgrounds. Facilitators were trained to manage small group discussions, and deliberations were held simultaneously to capture the truest form of participants’ opinions. Pre- and post-debate surveys were conducted to gauge any significant shifts in perspective, providing valuable data on the effectiveness of the discussions.
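
To make the pre- and post-debate comparison concrete, here is a minimal sketch of how such an opinion shift could be quantified, assuming responses on a 1-to-5 Likert scale and a standard paired t-test. The data and variable names are hypothetical illustrations, not the actual dialogue results.

```python
# Hypothetical illustration: quantifying opinion shift between pre- and
# post-deliberation surveys. The numbers below are invented for the
# example; they are not the actual Tunisian dialogue data.
from scipy.stats import ttest_rel

# Each position is one participant's agreement with a policy statement,
# on a 1-5 Likert scale, before and after the small-group deliberation.
pre =  [2, 3, 2, 4, 1, 3, 2, 5, 3, 2]
post = [3, 4, 2, 4, 2, 4, 3, 5, 4, 3]

mean_shift = sum(b - a for a, b in zip(pre, post)) / len(pre)
t_stat, p_value = ttest_rel(post, pre)  # paired t-test: same participants

print(f"Mean shift: {mean_shift:+.2f} points on a 5-point scale")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

A statistically significant positive shift would suggest that the deliberation itself, rather than sampling noise, moved participants’ opinions.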

The success of such citizen science initiatives hinges on their ability to foster an environment where all participants feel their voices are heard and considered. By engaging a cross-section of society in dialogue, these projects not only educate the public but also generate a wealth of data that can inform policy and decision-making.

The Role of Generative Foundation Models

We did not employ a generative model when tabulating the results of the dialogue. Instead, we relied on percentages, correlations, comparisons across sample groups, and the like. However, simple majorities often cannot inform decision-making on a complex subject, one that may even elicit contradictory answers when broken down into its parts. This is perhaps where generative models can play their strongest role: out of existing opinions, we may be able to extract new ones that bring us slightly closer to a solution.

The rise of generative foundation models could offer a new technological medium, and new computational algorithms, for establishing policies and remedies to disinformation that sit closer to people’s already established opinions, without compromising one group’s principles or sacred facts, or another group’s insular perception of what the problem is. These advanced machine-learning technologies have the potential to transform community consultations and decision-making processes. By analyzing vast datasets, generative models can detect coordinated disinformation campaigns, identify patterns in narrative spread, and predict the impact of proposed interventions.
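
As an illustration of that synthesis step, the sketch below asks a generative model to draft a bridging proposal from divergent positions. This is a hypothetical sketch: the generate function is a stand-in for whatever text-generation API one has access to, and the opinions and prompt are invented for the example.

```python
# Hypothetical sketch: asking a generative model to synthesize a
# "bridging" proposal out of divergent citizen opinions.
def generate(prompt: str) -> str:
    # Placeholder for any text-generation API; not a specific product's
    # interface. Returning a canned string keeps the sketch runnable
    # without an external service.
    return "[model output would appear here]"

# Invented example positions, standing in for free-text survey answers.
opinions = [
    "Platforms should remove demonstrably false claims immediately.",
    "Removal is censorship; label disputed claims and let readers decide.",
    "Only an independent body, not platforms, should order takedowns.",
]

prompt = (
    "The following positions were collected in a citizens' dialogue on "
    "misinformation policy:\n"
    + "\n".join(f"- {o}" for o in opinions)
    + "\nDraft one policy proposal that preserves each group's core "
      "principle while resolving the contradictions between them."
)

print(generate(prompt))
```

The point is not the specific wording but the workflow: the model consumes positions participants have already expressed and proposes a synthesis, which the group can then accept, amend, or reject.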

For example, generative models can process diverse inputs on public opinion and behavioural patterns to forecast the consequences of new regulations on speech and information dissemination. This predictive capability is invaluable, as it allows policymakers to anticipate and mitigate any potential negative outcomes before they occur.

Generative foundation models can also enhance the democratic legitimacy of policy responses to misinformation. They do so by providing transparent and comprehensive data analysis, ensuring that decisions are made based on a thorough understanding of the issue at hand. This transparency is crucial in establishing trust between the public and the institutions tasked with governing the digital information space.

The potential of generative models extends beyond data analysis. They can assist in creating educational content, simulate the effects of disinformation on public opinion, and help develop more effective communication strategies. As these technologies continue to evolve, their role in democratic governance and the fight against disinformation will likely become more pronounced.

Conclusion: A Pluralistic Approach to a Complex Challenge

The battle against disinformation is multifaceted and requires the combined efforts of technology, policy, and an informed citizenry. Citizen science initiatives and generative foundation models represent two powerful tools in this ongoing struggle. By engaging diverse populations in meaningful dialogue and utilizing advanced algorithms to analyze and predict trends, we can work towards a digital environment characterized by truth and trust.

As these efforts gain momentum, they present a beacon of hope in the quest for a more harmonious digital future. The effective use of citizen science and generative models can help us navigate through the misinformation maze, ensuring that the digital space remains a platform for credible information and constructive discourse.
