Deepfakes: Believability, Vulnerability, Detection with Todd C. Helmus


  • Listen on Apple Podcasts
  • Listen on Google Podcasts
  • Listen on Spotify
  • Listen on Amazon Music

The rise of deepfake technology has significant implications for society and human behavior. Deepfakes, manipulated or synthesized media created using artificial intelligence, can convincingly depict individuals saying or doing things they never actually did. While deepfakes hold promise for entertainment and creative applications, they also pose challenges and risks, particularly in areas such as politics, fraud, and pornography.

I chat with Todd C. Helmus, a senior behavioral scientist at the RAND Corporation and a nationally recognized expert on disinformation and violent extremism. He specializes in the use of data and evidence-based strategies to understand and counter disinformation and extremism. He has studied the disinformation threat posed by deepfakes, Russian-led propaganda campaigns targeting the United States and Europe, and the use of social media by violent extremist groups.

Deepfakes, highly realistic manipulated media, have become a growing concern in today's digital age. In a recent episode of "Open Minds with Christopher Balkaran," the host engaged in a thought-provoking conversation with Todd, an expert in the field. They discussed the believability of deepfakes, the risks of voice manipulation, the challenges of detection, and the intricate process behind creating these deceptive videos. This article delves deeper into the topics explored in the conversation and sheds light on the potential impact of deepfakes on society.

Are Deepfakes Believable?

Todd touched upon the believability of deepfake content. While some content may appear obviously fake to the discerning eye, others can be incredibly convincing. The ability to manipulate images, text, and even voices can exploit people’s vulnerabilities and exacerbate societal divisions. Factors influencing the believability of deepfakes include media literacy, familiarity with the technology, and the nature of the content itself.

Todd noted that some individuals are more prone to believe in deepfakes than others. Additionally, the level of familiarity with deepfake technology plays a significant role in determining the credibility attributed to such content. However, as deepfake technology continues to advance, the line between real and fake becomes increasingly blurred, making it challenging for individuals to distinguish between the two.

Voice manipulation emerged as another area of concern in the conversation. Todd highlighted instances where deepfake voices were used for malicious purposes, such as convincing individuals to wire money or to commit other fraud. The potential risks of voice manipulation extend to various spheres, including politics and the military. A manipulated voice recording of a political figure could easily be used to spread false information or sway public opinion during elections. In the military domain, voice manipulation could have grave consequences, with fake orders potentially leading to unintended military actions.

Deepfake creation involves the use of generative adversarial networks (GANs) comprising a generator and a discriminator. The generator creates the manipulated images, while the discriminator evaluates their authenticity. Through an iterative process, the generator continuously refines the images to make them more realistic. However, developing high-quality deepfakes requires substantial training data and expertise. Actors are often needed to mimic the mannerisms of targeted individuals, and post-production editing is necessary to address imperfections in the generated images.
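The adversarial loop described above can be sketched in miniature. The toy below is not a deepfake system; it is a minimal GAN on one-dimensional data, using a linear generator and a logistic-regression discriminator (all numbers and architecture choices are illustrative assumptions), but it shows the same dynamic: the generator refines its output until the discriminator can no longer tell fake samples from real ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian the generator must learn to imitate.
def sample_real(n):
    return rng.normal(4.0, 1.25, n)

# Generator: a linear map from noise z to a sample, x = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr = 0.01
for step in range(5000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    real = sample_real(32)
    z = rng.normal(size=32)
    fake = a * z + b
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - p_real) * real + p_fake * fake)
    grad_c = np.mean(-(1 - p_real) + p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=32)
    fake = a * z + b
    p_fake = sigmoid(w * fake + c)
    dl_dx = -(1 - p_fake) * w  # gradient of -log D(x) w.r.t. the sample x
    a -= lr * np.mean(dl_dx * z)
    b -= lr * np.mean(dl_dx)

# After training, generated samples should cluster near the real mean of 4.0.
samples = a * rng.normal(size=10_000) + b
print(samples.mean())
```

Real deepfake pipelines replace the linear maps with deep convolutional networks and the scalar samples with video frames, which is why the training data, hardware, and post-production effort described here are so substantial.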

Creating convincing deepfake videos can be a time-consuming process, taking months to complete. After the initial creation, each frame is meticulously reviewed to correct any remaining distortions. The cost is also noteworthy: the specialized computer systems involved can run around $12,000.

How Hard is it to Detect Deepfakes?

Detecting deepfakes presents significant challenges for large platforms like Facebook, Google, and Twitter. Although detection capabilities exist, they are often inadequate at identifying new types of deepfakes, especially those created with the latest technology. Todd noted the limitations of current detection systems, with even the best detectors achieving only a 65% success rate on brand-new deepfakes. As deepfake technology improves, the effectiveness of detectors diminishes, leading to a worrisome scenario in which near-perfect deepfakes become indistinguishable from reality.

To address this issue, platforms rely on detection systems integrated into their algorithms. However, the sheer volume of content shared daily makes it difficult for these systems to scrutinize every video. Consequently, platforms need to invest in advanced detection technologies to keep pace with the evolving landscape of deepfakes.
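A quick base-rate calculation shows why platform scale makes even a decent detector struggle. The numbers below are hypothetical (upload volume, deepfake prevalence, and specificity are assumptions, not platform statistics); only the 65% sensitivity comes from the conversation.

```python
# Hypothetical illustration of the base-rate problem in platform-scale
# deepfake detection. All figures except sensitivity are assumed.
daily_uploads = 500_000_000   # items screened per day (assumed)
deepfake_rate = 1e-5          # fraction that are actually deepfakes (assumed)
sensitivity = 0.65            # true-positive rate cited for new deepfakes
specificity = 0.99            # assumed true-negative rate on genuine content

deepfakes = daily_uploads * deepfake_rate
genuine = daily_uploads - deepfakes

true_positives = deepfakes * sensitivity          # deepfakes caught
false_positives = genuine * (1 - specificity)     # genuine posts wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(int(true_positives))   # -> 3250
print(int(false_positives))  # -> 4999950
print(round(precision, 4))   # -> 0.0006
```

Under these assumptions, fewer than one in a thousand flagged items is actually a deepfake, which is why automated detection alone cannot scrutinize every video and platforms must keep investing in better detectors and human review.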

What’s the Role of Deepfakes in Political Polarization?

The issue of polarization in society is already prevalent, and deepfakes have the potential to further entrench individuals in their existing echo chambers. People’s pre-existing beliefs and schemas play a crucial role in determining how they interpret and respond to deepfake content. The article explores the ways in which deepfakes can reinforce existing biases and contribute to the polarization of public discourse. Additionally, it raises the question of whether deepfakes have the potential to change people’s behavior and opinions or if they merely strengthen existing viewpoints.

The ethical and legal implications of deepfake technology cannot be ignored. While some countries have implemented legislation to address deepfake-related concerns, such as prohibiting deepfake use during elections or unauthorized creation of deepfake pornography, these laws raise questions about the balance between freedom of speech and the potential harm caused by manipulated content. The article discusses the challenges in regulating deepfakes, especially in countries where freedom of expression is protected by constitutional rights.

The Way Forward

Deepfake technology has emerged as a powerful tool with both positive and negative implications. While the entertainment industry and creative applications may benefit from this technology, it also carries significant risks for political manipulation, misinformation, and privacy violations. As deepfakes become more sophisticated, it is crucial to address the ethical, legal, and societal challenges they pose. Balancing freedom of expression and the need to protect individuals from the harmful effects of manipulated media is a complex task that requires ongoing research, dialogue, and regulation. Ultimately, society must find ways to mitigate the negative impact of deepfakes while harnessing their positive potential responsibly.