Combating the Rise of Deepfakes and Synthetic Media

Written by the Tech Tired Team in Technology. Published August 27, 2024.

How can societies be shielded from the spread of synthetic media and the evolving threat of “deepfakes”? Deepfakes, a sophisticated form of synthetic media created with artificial intelligence (AI), produce hyper-realistic portrayals of individuals and events, sowing confusion across the digital landscape. Their growing presence in recent years has raised pressing questions about their impact on the integrity of journalism, the reliability of fact-based reporting, and the spread of both misinformation and disinformation.

In grappling with these multifaceted dilemmas, it becomes paramount to weigh the delicate balance between the inviolable right to free expression and the overarching necessity for public safety. It is equally critical that any policies devised to combat deepfakes delineate with precision the specific nature of content that falls within this categorization. Although detection technologies and provenance methodologies are advancing at a rapid pace, the sobering reality remains that they may never fully extinguish the pernicious potential of AI-manipulated content.

This landscape demands further scholarly inquiry into several pivotal areas: the ramifications deepfakes impose upon journalism; the efficacy of content labeling in alleviating concerns surrounding deepfakes, including which labeling strategies work best; the establishment of international protocols to verify the authenticity of content; and the development of educational strategies that equip the public to discern synthetic media.

The Challenge

For over a century and a half, manipulated imagery has been a tool of deception, but it has grown far more insidious with the advent of “deepfakes.” Coined in 2017, the term “deepfake” describes audio and visual content altered by artificial intelligence (AI) to convincingly mimic a real individual, even when the actions or words depicted are fabricated. Deepfakes are a more sinister offshoot of “synthetic media,” a broader category that encompasses AI-generated audio, images, text, and video. Ongoing discourse seeks to delineate synthetic audiovisual content from deepfake creations. As with many issues scrutinized by CNTI, precise definitions remain a work in progress, and they are a necessity when formulating policy.

Between 2022 and 2023, the prevalence of deepfakes on the internet surged tenfold. While some studies debate the direct harm attributable to such manipulated media, there is undeniable evidence of its impact in nations like Slovakia, the United Kingdom, and the United States. This trend is particularly disquieting given the unprecedented number of national elections slated for 2024 amid growing global apprehension over threats to national stability. A notable instance from March 2022 was a deepfake that appeared to show the Ukrainian president ordering his military to surrender.

At present, crafting the most convincing deepfakes demands vast amounts of training data and considerable time. Consequently, malicious actors often resort to less resource-intensive methods, such as the disinformation strategies discussed in another CNTI primer. However, as technology evolves, deepfakes are becoming increasingly accessible and convincing.

Beyond the immediate risks posed by deepfakes, there is growing concern over the “liar’s dividend”: because audiences know convincing fabrication is possible, bad actors can dismiss authentic material as fake, further eroding trust in legitimate news, the media, and government authorities.

Responses to the burgeoning threat of deepfakes have emerged across multiple domains. Numerous digital and online platforms — including Facebook, Instagram, TikTok, X, and YouTube — have instituted policies mandating the disclosure of AI-generated content in advertisements. Some have gone further, outright banning certain synthetic materials, developing training tools to combat these forgeries, and exploring methods to embed authenticity markers within content.

News organizations, too, have initiated online training programs aimed at deepfake detection. Nevertheless, discussions with expert fact-checkers indicate that while deepfakes are indeed a concern, more significant threats arise from content taken out of context — so-called “decontextualized” media — as well as other manipulated media forms, such as “cheap fakes,” which twist information into misleading narratives.

Over recent years, methods to detect synthetic media have advanced significantly. These detection technologies scrutinize shadows, geometry, pixels, and audio anomalies within suspected synthetic content and seek out hidden watermarks to verify authenticity. Despite these advancements, the challenges in detecting synthetic media have led researchers and the public to innovate new strategies for identifying deepfakes. While these techniques may never eliminate the dangers posed by synthetic media, they represent crucial steps toward safeguarding the public’s access to authentic information. To optimize these detection methods, further collaboration is essential across various sectors, including technology, communications, policy, and government.
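
To make the pixel-level checks concrete, the sketch below measures how much of an image’s spectral energy sits in its highest frequencies, one signal researchers have used to flag GAN-generated frames. This is a minimal illustrative heuristic, not a production detector: the file name and the 0.05 cutoff are assumptions, and real systems learn such thresholds from training data.

```python
# Illustrative heuristic only: GAN-generated images often carry unusual
# high-frequency spectral artifacts. The cutoff below is an assumption,
# not a value from any deployed detector.
import numpy as np
from PIL import Image

def high_frequency_ratio(path: str) -> float:
    """Return the share of spectral energy in the outermost frequency band."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)       # distance from the DC component
    outer = spectrum[radius > 0.75 * min(cy, cx)]
    return float(outer.sum() / spectrum.sum())

if __name__ == "__main__":
    ratio = high_frequency_ratio("suspect_frame.png")  # hypothetical file
    # 0.05 is a made-up cutoff for illustration; real detectors are trained.
    print("flag for review" if ratio > 0.05 else "no spectral anomaly found")
```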

Globally, most nations have yet to enact specific legislation addressing synthetic media. Among those that have, policies generally fall into two categories: outright bans on deepfake content created without the consent of the individuals depicted, or requirements for the disclosure and labeling of deepfake content. The regulation of deepfakes presents a particularly intricate challenge, complicated by many nations’ legal protections for freedom of speech and expression.

What Makes It Complex

The Complexity of Classification

Precisely defining what constitutes a deepfake is both challenging and essential. Governments and researchers are grappling with the task of distinguishing deepfakes from other forms of synthetic media. One key question is whether the content must be inherently deceptive to qualify as a deepfake. Other crucial factors in this definition include intent, harm, and consent.

Consider, for example, a 2019 synthetic video featuring soccer star David Beckham speaking in nine languages. The video was made to spread accurate information about malaria, using deepfake technology to make the multilingual dialogue appear genuine. Although the intent was not to deceive or cause harm, the video is still widely classified as a deepfake because of the methods used to create it. Conversely, manipulated images have existed for over 150 years and can be intentionally deceptive, yet unlike deepfakes they do not require AI to achieve their effect.

The complexity of synthetic media deepens with the introduction of “shallow fakes” and “cheap fakes,” forms of manipulated content that do not necessitate advanced technology. Distinguishing what falls under the broad category of synthetic media versus what precisely qualifies as a “deepfake” (a subset of synthetic media) is crucial. This clarity is needed to differentiate between benign and beneficial uses of synthetic media—such as those for education and entertainment—and those that are harmful.

Expanding Access to Deepfake Detection

While the development of software to detect and counter deepfakes demands significant digital infrastructure and financial resources—resources that only a select few countries can afford—emerging labeling and disclosure tools are democratizing the fight against deepfakes on a global scale.

Creating standalone software capable of identifying and mitigating deepfakes is a costly endeavor. However, more accessible tools are becoming available to help determine whether media content has been tampered with. One promising approach pairs trained human graders with pre-trained AI detection models; researchers have found that combining these methods can offer distinct advantages over relying on a single detection technique.
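
As a rough illustration of that combination, the following sketch blends a pre-trained model’s fake-probability with the votes of trained human graders. The 60/40 weighting and the inputs are assumptions chosen for illustration, not values from any published study.

```python
# Minimal sketch of combining a pre-trained detector's score with human
# grader votes. The weighting is an assumption for illustration only.
def combined_fake_score(model_prob: float, grader_votes: list[bool],
                        model_weight: float = 0.6) -> float:
    """Blend a model's fake-probability with the fraction of trained
    graders who judged the clip to be manipulated."""
    human_prob = sum(grader_votes) / len(grader_votes)
    return model_weight * model_prob + (1 - model_weight) * human_prob

# Example: the model is fairly confident, but only one of three graders
# agrees, so the blended score stays closer to the decision boundary.
score = combined_fake_score(0.82, [True, False, False])
print(f"blended fake score: {score:.2f}")  # prints 0.63 with these inputs
```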

In response to the surge in deepfakes, content creators and the tech industry are also pioneering methods to tag and label manipulated media. These efforts include both direct and indirect disclosure techniques aimed at preserving transparency and verifying provenance—that is, confirming the authenticity of content. Among the various labeling strategies, watermarking stands out as a key technique. Watermarks can either be visible to users or subtly embedded within media to affirm their authenticity. Establishing a global standard for such labeling practices is imperative.
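
The sketch below shows the basic idea of an invisible watermark using a toy least-significant-bit scheme on an 8-bit grayscale image. The embedded tag is hypothetical, and real provenance watermarks are engineered to survive compression and re-encoding, which this toy version would not.

```python
# Toy invisible (least-significant-bit) watermark on an 8-bit grayscale
# image. Real provenance watermarks are far more robust; this only
# illustrates the idea of embedding a hidden authenticity mark.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

def embed_mark(pixels: np.ndarray) -> np.ndarray:
    out = pixels.copy().ravel()
    out[: MARK.size] = (out[: MARK.size] & 0xFE) | MARK  # overwrite LSBs
    return out.reshape(pixels.shape)

def read_mark(pixels: np.ndarray) -> np.ndarray:
    return pixels.ravel()[: MARK.size] & 1  # recover the embedded bits

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed_mark(image)
assert np.array_equal(read_mark(marked), MARK)  # mark survives the round trip
```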

The Coalition for Content Provenance and Authenticity (C2PA) has emerged as a potential standard in this area, garnering support from major organizations such as Adobe, Google, Intel, and Microsoft. While technological tools, along with disclosure and labeling requirements, are crucial in the battle against deepfakes, they alone cannot eliminate all forms of misinformation and disinformation from the news landscape. Therefore, it is essential to develop strategies to mitigate threats from all sources to uphold the integrity of fact-based reporting.
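
As a simplified illustration of provenance, the sketch below binds a hash of the published bytes to a provenance record and shows that any alteration breaks the match. C2PA itself goes much further, attaching cryptographically signed manifests to content; the record format here is a hypothetical stand-in, not the C2PA data model.

```python
# Simplified provenance check: bind a SHA-256 digest of the published bytes
# to a record, so any byte-level change to the asset breaks verification.
# The record format is a hypothetical stand-in, not the C2PA manifest format.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"...image bytes as published..."  # stand-in for real file bytes
record = {"asset": "press_photo.jpg", "sha256": sha256_hex(original)}

# An unmodified copy matches the record; any edit breaks the match.
assert sha256_hex(original) == record["sha256"]
tampered = original + b"\x00"
assert sha256_hex(tampered) != record["sha256"]
print("provenance check behaves as expected")
```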

Balancing Deepfake Regulation with Free Speech

Efforts to regulate deepfake content must be carefully aligned with laws that safeguard freedom of speech and expression. Governments face the challenging task of determining where to draw the line between permissible and prohibited deepfake content. In nations where free speech is legally protected, deepfakes present a unique dilemma in distinguishing between what constitutes legal or illegal material.

In many cases, the right to share false information falls under the umbrella of protected speech. This reality complicates any attempt to impose an outright ban on deepfake content, as such a ban could infringe on legal protections for free expression. Consequently, regulating deepfakes becomes a particularly complex endeavor, requiring a nuanced approach that respects fundamental rights while addressing the potential harms posed by these manipulative technologies.

Ethical Dilemmas in Using Synthetic Media for Journalism

Synthetic media, including deepfakes, offer journalists new ways to protect their identities in dangerous situations by altering their appearance and voice. These technological advances can be especially valuable when working on sensitive or high-risk projects. However, this practice raises significant ethical concerns, as it conflicts with the core principles of honesty and transparency that are central to many news organizations’ codes of ethics.

While some news outlets may allow for the use of deception or anonymity in rare circumstances—typically when issues of public interest or personal safety are at stake—such practices are exceptions rather than the norm. The ethical challenge lies in determining when, if ever, it is appropriate for journalists to employ deepfakes in their reporting. Balancing the need for protection with the commitment to ethical integrity is a critical consideration that requires careful deliberation within the journalism community.

Current Research on Deepfakes

Although deepfakes are a relatively new technological development, considerable research has already been conducted on how individuals interpret false information across different media formats, including text, audio, and video. A recent study revealed that even when participants were forewarned about the presence of deepfake content, nearly 80% failed to correctly identify the sole deepfake among a series of five videos. Other research shows that up to half of respondents in nationally representative samples are unable to distinguish between manipulated and authentic videos. These findings underscore the challenge of countering deepfakes and highlight the necessity of correcting false information to ensure a well-informed public.

There are, however, evidence-based reasons for optimism. Research suggests that people can be trained to detect deepfakes more reliably. Interventions that emphasize the importance of information accuracy and the ease with which consumer-grade deepfakes can be produced (now achievable in mere minutes using various apps and websites) have shown promising results in combating the negative impacts of such content.

In addition to training individuals to detect deepfakes, another strategy involves studying how people respond to fact checks and labels on manipulated media:

  • Fact-checking false statements made by politicians has been shown to reduce the belief that these statements are true. However, more research is needed to understand how partisanship influences the interpretation of synthetic media.
  • Developing digital media literacy programs, particularly those focused on identifying false information, will likely play a critical role in helping individuals recognize high-quality, fact-based news.
  • While labeling false information is beneficial, it also has potential drawbacks. Broad, generic disclosures about misinformation can lead viewers to distrust true, accurate news. As the public becomes more accustomed to seeking out labels and other disclosures to verify the authenticity of information, this could either reinforce or undermine their trust in the media.
  • Unlabeled false information tends to be perceived as more accurate than false information that has been tagged as such, suggesting that while labels can be influential, they must be comprehensive and applied to all relevant synthetic media.

Research has also explored the value of asserting provenance—tracking the authenticity and origin of content. While the general public does not widely understand the concept of provenance, studies have shown that it can decrease trust in deceptive media when presented effectively. Further education on the significance of provenance in supporting a fact-based news ecosystem is necessary.

Future research should continue to explore how individuals interact with synthetic content and how persuasive they find it, especially as these media become increasingly realistic and lifelike. Given the growing prevalence of synthetic media, researchers should also investigate which techniques—including labeling and disclosure—are most effective in mitigating the adverse effects of deepfakes. Understanding how to respond to and “treat” individuals who have encountered deepfake content is crucial for supporting fact-based news efforts. Finally, it is essential to study how newsrooms will address the proliferation of deepfakes and the potential harms they pose.

Conclusion

While the challenges posed by deepfakes and synthetic media are significant, ongoing advancements in detection technologies, such as tools like the ChatGPT detector, offer hope in mitigating their impact. By continuing to refine these technologies, alongside fostering global standards for content labeling and provenance, societies can better protect themselves from the spread of misinformation and disinformation. However, it is equally important to educate the public and to ensure that efforts to regulate deepfakes are balanced with the preservation of free speech and expression. The evolving landscape of synthetic media demands a proactive and multifaceted approach to safeguard the integrity of information in the digital age.
