By Catharine Li*
Artificial intelligence is transforming how newsrooms build trust with their audiences.
During the panel “Synthetic media and journalism: Avatars, image creation and manipulation” at the 26th International Symposium on Online Journalism, journalists and media experts shared perspectives on the challenges of detecting and disclosing AI-generated content while highlighting AI innovation to combat censorship and repression.
Robert Quigley, professor of practice in the School of Journalism and Media at the University of Texas at Austin, chaired the final panel of the conference on March 28, 2025.
Craig Silverman, a national reporter at ProPublica, reflected on the rise of AI-generated images on Facebook. Compared with the content he encountered in his reporting a decade ago, he said, it is now easier to generate images that are visually convincing to a large audience.
Silverman said AI “slop,” or low-quality AI-generated imagery, provokes strong emotional reactions and elicits significant engagement. Because Facebook offers creators a cash payout based on monthly engagement with their accounts, Silverman said the financial incentive to post AI-generated content could grow.
“In the area where fact checking is going away in the U.S. on Facebook, suddenly, they’re positioned to become part of this new and expanding content monetization program on Meta,” Silverman said.
While many people are aware of the AI tools available to quickly produce images, video and audio, Silverman said they are increasingly getting “baked into the information ecosystem.”
“A lot of us know about deep fakes, about AI-generated images, but that doesn't mean all of us know how to recognize them, and it doesn't mean that we think about that every time we're consuming media, and that's something that people really hit on and exploit,” Silverman said.
For Carlos Eduardo Huertas, director of the media platform CONNECTAS, the use of AI was “essential” during the 2024 presidential election in Venezuela.
In an environment of political polarization, two initiatives were created to push back against government disinformation and censorship campaigns, Huertas said. The collaborative work of journalists across the country resulted in the initiatives Venezuela Vota and #LaHoraDeVenezuela.
Since the contested re-election of President Nicolás Maduro, artificial intelligence has helped safeguard journalists’ identities without compromising the quality of reporting. The project, called Operación Retuit, utilizes a pair of AI-generated avatars known as “La Chama” and “El Pana” to present verified newscasts to audiences across social media.
“We’ve put artificial intelligence at the service of collective intelligence in an unprecedented effort of collaborative journalism in the region,” Huertas said.
The use of AI, Huertas said, is an innovative and safe way to respond to an environment in which “uncertainty and danger increase by the minute” for journalists.
Since the launch of Operación Retuit, Huertas said the content has generated positive attention among audiences and gained significant engagement despite restrictions in Venezuela on freedom of speech.
“Each new follower is a citizen we pull out of the quicksands of disinformation, and is exposed to a more diverse (variety) of media outlets to stay informed,” Huertas said.
Responding to the role of generative AI in an evolving media landscape calls for cross-industry collaboration, said Santiago Lyon, head of Advocacy and Education at the Content Authenticity Initiative.
At the heart of this effort to restore trust and transparency online is provenance, which seeks to establish the basic facts about the origins of digital content, Lyon said.
“Provenance really is about proving what things are as opposed to detecting what is false,” Lyon said.
To address this challenge, Lyon presented the concept of Content Credentials, a tamper-evident “nutrition label” that communicates the origin and edit history of online content in an interactive format.
Lyon said the initiative works at every stage of the digital supply chain, from creation to dissemination. This involves collaboration with hardware manufacturers to establish the origins of a file upon creation, with image editing tools to create a digital edit history, and with content management systems.
The open-source technology is developed by the Coalition for Content Provenance and Authenticity. Lyon said that while several organizations including Adobe, Google and Meta are beginning to implement Content Credentials, the initiative continues to emphasize audience and policy-maker education.
“We're not trying to be the arbiters of truth here,” Lyon said. “What we're trying to do is provide additional information so that consumers of news and really anything else online can make better informed decisions about what to trust.”
Claire Leibowicz, head of AI and Media Integrity at the Partnership on AI, posed the question of how AI policy could support media as recorders of reality.
Acknowledging the growing number of policies requiring the labeling of AI-generated content, Leibowicz said these policies often lack guidance for media organizations on how to implement the labels.
In 2023, the Partnership on AI launched a framework for creating and distributing synthetic media responsibly. It convened 10 partners across news media and civil society, each tasked with producing a case study on how the guidelines were implemented in their organization.
Leibowicz highlighted contrasting newsroom decisions: CBC News decided against using AI to conceal a source’s identity, while BBC News utilized face-swapping technology to anonymize sources.
While both news organizations explained their respective decisions, Leibowicz said this example complicates the notion that AI disclosure is a completely neutral act.
“Simply disclosing the presence of AI is an imprecise form of transparency for audiences,” Leibowicz said. “You need that rich context about where something came from, how it was made, beyond just whether or not a certain technology was used.”
Quigley concluded the panel by asking whether the panelists were optimistic about winning the “battle” for trust in an era when untrustworthy information pervades the media landscape.
Lyon said he saw a tremendous opportunity for journalists to better explain how AI technologies work so readers and consumers could understand how to interact with them more effectively.
“We’re at the very beginning of what’s going to be a very long journey,” Lyon said. “AI is not going away.”
*Catharine Li is a sophomore studying international relations and global studies at the University of Texas at Austin. She is a senior news reporter for The Daily Texan, covering public safety, the environment and immigration.