Deepfakes Attack! How Brands Can Fight Back

The Gist
- Growing threat. Deepfakes threaten brands’ reputations through disinformation and fraud, requiring detection tools and proactive planning.
- Democratic danger. Deepfakes erode trust, manipulate elections — and threaten democracy’s foundations.
- Detection measures. Techniques including AI-driven platforms and manual inspection can be used to spot and combat deepfakes.
The proliferation of AI-generated deepfakes poses serious risks for brands in the form of viral disinformation, fraud and reputational damage. As these manipulated images, videos and audio become more sophisticated, brands must deepfake-proof their businesses. This involves the use of deepfake detection platforms, social listening, incident response plans and simply learning how to spot fakes. Proactive deepfake contingency planning must become part of every brand’s social crisis strategy. Let’s explore some insights and best practices to future-proof brands against the prospect of deepfakes.
What Are Deepfakes and Why Are They a Problem?
Deepfakes aren’t exactly new at this point; after all, people have been using Photoshop to manipulate images for years. The difference is that now, through the use of artificial intelligence (AI), deepfake images, videos and audio can be created so effectively that it is practically impossible to tell them from the real thing. Consider this deepfake video of President Nixon giving a speech about the death of the Apollo 11 crew on the moon.
Creating a deepfake video involves teaching a computer to mimic the voice and appearance of a chosen individual, referred to as the “target.” This is done by feeding it an extensive amount of audio, video or pictures of the person that is to be imitated. Additionally, details are provided about a “source,” such as an actor who performs the specific actions or utterances that the target will seemingly enact.
Here is a deepfake of actor John Travolta, rather than Tom Hanks, in the role of Forrest Gump.
The process is accomplished by employing artificial neural networks. These networks operate in a way that resembles the human brain’s method of problem-solving. They examine evidence, detect underlying patterns, and use these patterns to process and apply new data. The result is a convincing imitation of the target, synthesized from the source’s performance. Although deepfake technology in and of itself is not much of a problem, the speed at which social media users spread deepfakes can lead to widespread beliefs that the deepfake is real, which can cause real-life problems.
Ricky Spears, CMO and founder of RickySpears.com, an internet and gaming tutorials and tips blog, told CMSWire that, because of how deepfakes are created, obfuscation can be an effective technique to thwart them. “A large amount of data, usually in the form of photographs or videos, about the desired person whose resemblance is to be placed on another subject is necessary for a successful deepfake.”
Spears explained that this data is used to train the AI model and record different facial expressions and perspectives. “If you use a program that adds digital artifacts to videos in order to hide the pixel patterns that face recognition software uses, the deepfake algorithms will run more slowly and produce less accurate results, making it more difficult to deepfake successfully.”
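The obfuscation idea Spears describes can be illustrated with a toy sketch. The snippet below is a minimal illustration only, not a production defense: the tiny pixel grid and the noise amplitude are invented for the example. It adds small random offsets to an image’s pixel values, the kind of barely visible perturbation that can degrade the clean training data a deepfake model depends on.

```python
import random

def perturb_pixels(pixels, amplitude=3, seed=0):
    """Add small random offsets to each 0-255 pixel value.

    The noise is barely visible to humans but alters the exact
    pixel patterns that face-recognition models key on.
    (Toy illustration; real tools use far more targeted perturbations.)
    """
    rng = random.Random(seed)
    noisy = []
    for row in pixels:
        noisy.append([
            max(0, min(255, p + rng.randint(-amplitude, amplitude)))
            for p in row
        ])
    return noisy

# A tiny 2x3 grayscale "image" used purely for illustration.
image = [[100, 150, 200], [0, 128, 255]]
protected = perturb_pixels(image)
```

In practice, dedicated tools compute perturbations specifically crafted to confuse face-recognition models rather than uniform random noise, but the principle of poisoning the training data is the same.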
There are many deepfakes that have been created for their entertainment value, such as this clip which put comedian Jerry Seinfeld in the role of the scared shooter in the classic movie Pulp Fiction.
Although it is an example of a deepfake that was well done, it is unknown if Seinfeld saw the humor in it. For celebrities who are themselves a brand, the use of their images in unauthorized movies and video clips is essentially theft and is becoming more of a problem every day.
The use of synthesized digital images and videos is part of the reason the American actors’ union SAG-AFTRA went on strike in its labor dispute with the Alliance of Motion Picture and Television Producers (AMPTP), driven largely by studios’ use of artificial intelligence to scan actors’ faces and generate performances digitally.
Related Article: Can We Fix Artificial Intelligence’s Serious PR Problem?
Deepfakes Are a Threat to Freedom and Democracy
Aside from the damage that deepfakes can do to businesses, deepfakes present a dire threat to the institutions and principles of democracy. By spreading false yet seemingly credible information, deepfakes can erode public trust, manipulate elections, destroy reputations and ultimately distort the truth. When citizens cannot distinguish the truth from fiction, the foundation of informed debate and consensus crumbles.
Dan Brahmy, co-founder and CEO at Cyabra, a social threat intelligence platform provider, told CMSWire that while fake content online has been present since the early days of the internet, deepfake technology uses AI and machine learning algorithms to take it to a new level, generating video and audio content that seems completely authentic and has the potential to deceive millions of people. “In the hands of malicious actors or adversarial foreign threats, deepfake technology has the potential to cause major harm to companies, brands, governments, and to society in general,” said Brahmy.
Without a means of accountability through the ability to verify information, deepfakes introduce instability and divisiveness into the democratic process. Leaders lose legitimacy and democracy becomes susceptible to mass manipulation. Only through a combination of media literacy, legal protections, authentication systems and public awareness can the problems caused by deepfakes be reduced. Addressing the challenge of deepfakes is necessary to preserve functional democracies that are built on transparency and understanding.
“Currently, deepfakes exhibit a level of credibility never before seen, both in visual and auditory capabilities,” said Brahmy. “In recent years, deepfakes have been used to sway public opinion on major societal issues — war, elections, and financial crises. It has also been used to damage the reputation and tarnish the names of companies, brands, and celebrities.”
Deepfakes have already appeared in the 2024 presidential election campaigns: Florida Gov. Ron DeSantis’ presidential campaign released a video on social media that used deepfake images depicting Donald Trump hugging Dr. Anthony Fauci.
In August 2023, the Federal Election Commission began taking steps toward regulating the use of deepfake material in political ads. Initially, the commission will seek public comment on whether existing federal rules against fraudulent campaign advertising can be applied to ads that use AI.
The use of deepfakes has real-world consequences in branding, politics and, in the following case, war. In March 2022, a deepfake video of Ukrainian President Volodymyr Zelensky was posted on social media, appearing to ask his soldiers to lay down their arms.
“Deepfake technology can be used to impersonate anyone, whether it’s a person close to us or the President. Other famous deepfakes included Mark Zuckerberg, Donald Trump, Barack Obama, Nancy Pelosi, Kim Jong-Un, and many others,” said Brahmy.
Related Article: FTC Won’t Tolerate Generative AI Deception in Marketing, Customer Service
Examples of Deepfakes in the News
One doesn’t have to look far to see the effects of deepfakes in the news. In May 2023, an AI-generated image of an explosion near the Pentagon was posted on Facebook and Twitter. As the image spread virally, the resulting panic briefly sent the stock market down 0.26% before it bounced back.
In March 2023, a group of high schoolers made a racist deepfake in which a principal appeared to shout a string of racist slurs at Black students and threaten a mass shooting. More recently, deepfake stories have become a normal part of the news cycle, with multiple such headlines appearing over the course of a single week.
AI-Driven Deepfake Detection Platforms
Recognizing the threat that deepfakes pose is the first step to protecting a brand and retaining the trust of customers. “However, many companies and brands fail to see the threat of fake content, malicious actors, and deepfakes,” said Brahmy. “Understanding the risks, monitoring relevant topics on social media and proactively tracking your online presence have all become inevitable in today’s reality, and require companies to acknowledge the threats emerging from the online sphere and regard them as a major part of their security and safety strategy.”
There are a number of AI-based software platforms becoming available to help brands detect deepfakes. These platforms often feature AI-driven functionality including facial recognition, voice recognition, image analysis, metadata analysis, and machine learning.
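Metadata analysis, one of the techniques these platforms employ, can be sketched in a few lines. The toy function below is an illustration only: real platforms use far richer forensic signals, and the marker list here is an assumption made for the example. It scans a file’s raw bytes for editing-software tags that sometimes survive in image metadata, a crude hint that a file has been processed.

```python
# Byte strings that editing tools sometimes leave behind in image
# metadata. This marker list is illustrative, not exhaustive.
EDITING_MARKERS = (b"Adobe", b"Photoshop", b"GIMP")

def has_editing_marker(data: bytes) -> bool:
    """Return True if any known editing-software tag appears in the file bytes."""
    return any(marker in data for marker in EDITING_MARKERS)

# Simulated file contents for illustration (hypothetical byte streams).
edited = b"\xff\xd8\xff\xee\x00\x0eAdobe..."
plain = b"\xff\xd8\xff\xe0\x00\x10JFIF..."
```

A missing marker proves nothing, since metadata is easily stripped, which is why commercial platforms combine such checks with facial analysis, voice analysis and machine learning rather than relying on any single signal.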
“The only way our detection abilities can keep up with technological progress is by leveraging the power of AI to fight AI: using advanced AI-powered detection tools to detect and take down harmful fake content,” explained Brahmy. “That is why it’s crucial that governments and private sector companies understand the potential harm and make use of the right detection tools to protect themselves and society against it.”
How to Tell if a Video Is a Deepfake
A 2022 Statista survey revealed that 57% of global consumers said that they could detect a deepfake video, while 43% said they would not be able to tell the difference between a deepfake video and a real video. Although deepfake videos are difficult to tell from real videos, there are a number of clues that observant viewers can use to detect them:
- Blur and lighting: Look for areas that seem blurrier or softer than the rest of the frame, including hands and fingers, skin and hair, and faces. Something often looks off, as if a filter has been applied. Does the lighting around the subject match the lighting elsewhere in the video?
- Audio sync: Does the audio match up with the video? Deepfakes often rely on lip-syncing, which can appear unnatural; the audio may not sync with the subject’s lips, much like the dubbing in old martial arts movies.
- Origin: Where did the video originate? Screenshots of the video can be run through a reverse image search using tools such as ImageReverse and Google Images, enabling the searcher to determine whether the video was deepfaked from a legitimate original.
“As with most fake content online, deepfakes can be identified by paying close attention to details. At the moment, deepfakes can still be manually identified by examining the telltales: strange colors and shadows, poor audio quality, unnatural eye movements or postures, etc. However, it is important to note that deepfake technology is in constant evolution,” said Brahmy, who added that these lightning-fast advancements will soon make manual detection abilities obsolete.
Users can practice their skills on the Detect Fakes website, an MIT experiment that teaches people how to detect deepfakes. Because most high-end deepfakes involve facial transformations, the Detect Fakes website suggests paying attention to areas of the face, such as the cheeks and forehead, when judging whether a video is a deepfake. Be on the lookout for skin that looks too smooth or too wrinkly; the apparent age of the skin should match the aged appearance of the hair and eyes. Also, keep an eye on the dimensions of facial features, especially the eyes and eyebrows. On deepfakes, shadows often appear in unusual places. Detect Fakes suggests other areas to focus on, including:
- The glasses: Is there any glare, or even too much glare? Does the angle of the glare change with the movement of the subject’s face?
- Facial hair or lack thereof: Does it look natural? There may have been the addition or removal of mustaches, sideburns, or beards.
- Facial moles: Do they look natural?
- Blinking: Does it appear normal, not often enough, or too often?
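The manual clues above can be thought of as a checklist that a reviewer scores. The sketch below is a hypothetical heuristic, not MIT’s method or any platform’s algorithm: the clue names and the threshold are invented for the example. It simply counts how many warning signs were observed and flags the video when several independent clues co-occur.

```python
def suspicion_score(observed_clues):
    """Count observed warning signs from a manual review checklist.

    The checklist entries mirror the manual clues discussed in the
    article; the exact names are invented for this illustration.
    """
    checklist = {
        "mismatched_lighting",
        "blurry_face_regions",
        "audio_out_of_sync",
        "unnatural_glasses_glare",
        "unnatural_facial_hair",
        "odd_blink_rate",
        "misplaced_shadows",
    }
    return sum(1 for clue in observed_clues if clue in checklist)

def looks_like_deepfake(observed_clues, threshold=3):
    """Flag a video when several independent clues co-occur."""
    return suspicion_score(observed_clues) >= threshold

# A hypothetical reviewer's notes on one video.
review = ["audio_out_of_sync", "odd_blink_rate", "misplaced_shadows"]
flagged = looks_like_deepfake(review)
```

The design point is that no single clue is conclusive; requiring several independent signals before flagging a video reduces false alarms, which is the same reasoning commercial detectors apply at scale.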
With practice, one can develop the skills that are needed to be able to discern actual videos from deepfakes. A combination of these skills and deepfake detection platforms can enable brands to remain on top of the deepfake problem, protecting their brand image and public brand perception.
Final Thoughts on Deepfakes
With their near-seamless imitation of voices, faces and content, deepfakes are a troubling source of disinformation and deception. This misuse of technology threatens brands, celebrities, politics and the fundamentals of democracy. Brands must adopt a proactive approach, employing deepfake detection tools, educating the public, monitoring their online presence and integrating contingency planning into their overall risk management strategy.