Are we sure that deepfake videos are not that dangerous?
Have we underestimated our ability to distinguish fake content?
November 26th, 2024
Artificial intelligence software capable of creating fake videos designed to deceive viewers has existed for years. The term "deepfake" was coined in 2017 to describe them, combining "fake" with "deep learning," one of the technologies underpinning the systems used to produce such content. From the outset, the topic has been approached with considerable concern, especially regarding deepfakes' role in spreading disinformation and political propaganda. These programs typically aim to replicate a real person's voice and facial movements, often those of a well-known figure, so convincingly that the person appears to say whatever the creator wants.

However, many deepfakes circulating online are still made with free, rudimentary software and contain noticeable flaws, often a distorted effect that is apparent at a glance. Some of the most pessimistic predictions about the rise of deepfakes underestimated people's ability to identify manipulated clips: while the technical quality of AI-based systems has improved in recent years, so has the public's ability to distinguish authentic content from fake. There is a caveat, however. The growing prevalence of manipulated videos, whether crude or sophisticated and whatever their purpose, has fostered skepticism toward video in general, leading some users to treat real clips as if they were fake.
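The telltale defects mentioned above are concrete enough that even simple heuristics can flag the crudest fakes. As a rough, hypothetical illustration rather than any detector actually used in practice, the Python sketch below measures how much the detected face region "flickers" between consecutive frames, on the assumption that a crude face swap changes the face more than the rest of the scene; the input file name and the threshold are placeholders.

```python
# Hypothetical sketch: flag clips whose face region flickers frame to frame,
# a telltale artifact of crude face swaps. Requires opencv-python and numpy.
import cv2
import numpy as np

def face_flicker_score(video_path: str) -> float:
    """Mean frame-to-frame pixel change inside the first detected face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    prev_face, diffs = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            prev_face = None  # lost the face; restart the comparison
            continue
        x, y, w, h = faces[0]
        face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))
        if prev_face is not None:
            diffs.append(float(np.mean(cv2.absdiff(face, prev_face))))
        prev_face = face
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0

# "clip.mp4" and the threshold 12.0 are illustrative assumptions, not
# calibrated values; real detectors are trained models, not one heuristic.
print("suspicious" if face_flicker_score("clip.mp4") > 12.0 else "plausible")
```

Production detection systems are far more elaborate, but the underlying intuition is similar: manipulated regions tend to behave differently from the rest of the frame.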
The widespread manipulation of content has, in effect, bred a distrust of video as a medium, one that often operates unconsciously. The Atlantic addressed this aspect years ago, noting that growing doubts about the authenticity of videos could also be exploited for propaganda purposes. This already happens with the news: former President Trump, for instance, is particularly skilled at labeling any inconvenient truth as "fake news," thereby preemptively discrediting the media. Similarly, the technological ability to create increasingly realistic deepfakes could, in the long run, erode people's trust in video overall. Today, a video is still generally considered strong evidence that something happened, but that association may weaken in the future.
if i had to see joe biden in mugler drinking bud light so do u https://t.co/YayMmrSzef
— noah (@pradachurch) May 22, 2023
The possibility of manipulating video has been a risk since the medium's inception, which is why some scholars have questioned whether AI tools truly allow for greater manipulation of information than was previously possible. So-called "cheap fakes" illustrate the point well. The term refers to relatively simple and accessible manipulation techniques, such as altering a video's speed, length, or audio. During recent phases of the U.S. presidential election campaign, for example, political opponents of Joe Biden promoted clips from his official appearances that had been edited to make him appear lost or confused. The tactic reinforced a growing perception that Biden, because of his age, was not fully in control and lacked the clarity needed to lead the country. Rudimentary as they were, these videos fueled debates about Biden's physical and mental condition and paradoxically proved more deceptive and effective than traditional deepfakes; the sketch below gives a sense of how little such an edit requires.

AI-generated videos, by contrast, are often far from convincing. This is evident in the comedy series Deep Fake Neighbour Wars, which uses AI to superimpose celebrity faces onto actors in surreal and grotesque scenarios. The result is not flawless: the performers' facial expressions are limited, and the technology cannot yet handle movements such as turning the head or moving too vigorously. Actors also cannot stand behind glass or act in the rain, since the AI needs a fully visible face to work.
@itvx Don't mess with Idris's garden Kim. You've been warned. Stream #DeepFake Neighbour Wars on #ITVX
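To give a sense of how accessible the "cheap fake" techniques described above are, here is a minimal sketch that slows a clip to 75% speed with the widely available ffmpeg tool, the kind of edit used to make a speaker seem sluggish. The file names are placeholders, and ffmpeg must be installed.

```python
# Hypothetical sketch of a "cheap fake" edit: slow a clip to 75% speed.
# File names are placeholders; requires the ffmpeg command-line tool.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "speech.mp4",    # placeholder source clip
    "-filter_complex",
    "[0:v]setpts=PTS/0.75[v];"       # stretch video timestamps (slower)
    "[0:a]atempo=0.75[a]",           # slow audio without shifting pitch
    "-map", "[v]", "-map", "[a]",
    "slowed.mp4",                    # placeholder output file
], check=True)
```

No machine learning is involved at all, which is precisely what makes such edits so cheap to produce and so hard to police.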
The creators of Deep Fake Neighbour Wars were meticulous in ensuring that viewers would not mistakenly believe the celebrities were genuinely participating in the show. Each episode opens with a prominent disclaimer to that effect, and the label "deep fake" remains visible in a corner of the screen throughout. Even when such warnings are omitted for deceptive purposes, it is often possible to tell whether a clip is real or fake by considering the context in which it was published. If a video is shared by a credible news outlet, for instance, that is usually strong evidence that the event depicted occurred. This criterion applied to videos long before deepfakes became widespread and continues to underpin the evaluation of other types of content, such as photographs. And while AI systems may take years to become truly sophisticated, as deepfake expert Hany Farid wrote in the Guardian, it is already essential to "develop good habits in our relationship with information, combining common sense with a healthy dose of skepticism."