The rise of deepfake technology carries significant implications for society and human behavior. Deepfakes, manipulated or synthesized media created with artificial intelligence, can convincingly depict individuals saying or doing things they never actually did. While deepfakes hold promise for entertainment and creative applications, they also pose serious risks, particularly in politics, entertainment, and pornography.
I chat with Todd C. Helmus, a senior behavioral scientist at the RAND Corporation and a nationally recognized expert on disinformation and violent extremism. He specializes in the use of data and evidence-based strategies to understand and counter disinformation and extremism, and he has studied the disinformation threat posed by deepfakes, Russian-led propaganda campaigns targeting the United States and Europe, and the use of social media by violent extremist groups.
Deepfakes, highly realistic manipulated media, have become a growing concern in today’s digital age. In a recent episode of “Open Minds with Christopher Balkaran,” the host engaged Todd in a thought-provoking conversation about the believability of deepfakes, the risks of voice manipulation, the challenges of detection, and the intricate process behind creating these deceptive videos. This article delves deeper into the topics explored in that conversation and sheds light on the potential impact of deepfakes on society.
Are Deepfakes Believable?
Todd touched on the believability of deepfake content. While some deepfakes look obviously fake to a discerning eye, others can be incredibly convincing. The ability to manipulate images, text, and even voices can exploit people’s vulnerabilities and exacerbate societal divisions. Factors that influence believability include media literacy, familiarity with the technology, and the nature of the content itself.
Todd noted that some individuals are more prone to believing deepfakes than others, and that familiarity with deepfake technology plays a significant role in how much credibility people attribute to such content. As the technology advances, however, the line between real and fake becomes increasingly blurred, making it ever harder to distinguish between the two.
The Risks of Voice Manipulation
Voice manipulation emerged as another area of concern in the conversation. Todd highlighted cases in which deepfaked voices were used for malicious ends, such as convincing victims to wire money or carrying out other fraud. The risks extend to politics and the military: a manipulated voice recording of a political figure could be used to spread false information or sway public opinion during an election, while in the military domain a fake order could trigger unintended military action.
How Are Deepfakes Made?
Deepfake creation relies on generative adversarial networks (GANs), which pair a generator with a discriminator. The generator produces manipulated images, while the discriminator evaluates how authentic they look. Through an iterative process, the generator refines its output until the fakes become increasingly realistic; a minimal sketch of this loop appears below. Even so, producing a high-quality deepfake requires substantial training data and expertise: actors are often needed to mimic the mannerisms of the targeted individual, and post-production editing is necessary to fix imperfections in the generated images.
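To make the loop concrete, here is a minimal sketch of generator-versus-discriminator training in PyTorch. Everything in it is illustrative rather than drawn from Todd’s description: the tiny fully connected networks, the 64x64 frame size, and the random tensors standing in for real footage of a target. An actual deepfake pipeline trains far larger convolutional models on extensive footage.

```python
import torch
import torch.nn as nn

IMG_DIM = 64 * 64   # flattened 64x64 grayscale frame (illustrative)
NOISE_DIM = 100     # latent noise vector fed to the generator

# Generator: turns random noise into a synthetic image.
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    # In a real pipeline this batch would be frames of the target person;
    # random tensors stand in here so the sketch runs on its own.
    real = torch.rand(32, IMG_DIM) * 2 - 1
    fake = G(torch.randn(32, NOISE_DIM))

    # 1) Train the discriminator to tell real frames from generated ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator. This is the
    # iterative refinement described above: each round, the fakes
    # become harder to spot.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The design point is the adversarial arms race: every improvement in the discriminator’s ability to spot fakes pushes the generator to produce more convincing ones, which is part of what makes the resulting media so hard to detect.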
Creating a convincing deepfake video can take months. After the initial generation, each frame is meticulously reviewed to correct any remaining distortions (a sketch of that frame-by-frame pass follows below). Cost is also noteworthy: the specialized computer systems involved can run around $12,000.
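To give a sense of the scale of that review, here is a hypothetical sketch that uses OpenCV to dump a video into individual stills for inspection. The filename and output directory are made up for illustration; the takeaway is the volume of work, since at 30 frames per second even a five-minute clip yields roughly 9,000 frames.

```python
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("deepfake_draft.mp4")  # illustrative input file

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:  # end of the video
        break
    # Save each frame as a still so an editor can inspect and retouch it.
    cv2.imwrite(f"frames/frame_{frame_idx:06d}.png", frame)
    frame_idx += 1

cap.release()
print(f"{frame_idx} frames extracted for manual review")
```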