Deepfake Dangers on Social Media
Deepfakes are a kind of synthetic content created with deep learning architectures, such as autoencoders and generative adversarial networks (GANs), that underpin many generative AI tools. As the use of generative AI has grown, deepfakes have become easier to create and harder to detect.
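To make the mechanism concrete, here is a minimal, illustrative sketch of the adversarial training loop at the heart of a GAN. It uses toy one-dimensional data and tiny networks in place of the deep convolutional models real deepfake tools employ; the network sizes, learning rates, and data distribution are assumptions chosen purely for illustration.

```python
# Toy GAN training loop: a generator learns to imitate "real" data while a
# discriminator learns to tell real from fake. Illustrative only.
import torch
import torch.nn as nn

latent_dim = 16
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))       # generator: noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator: sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # stand-in "real" data the generator must imitate
    fake = G(torch.randn(64, latent_dim))

    # Discriminator update: learn to separate real samples from generated ones
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: learn to fool the discriminator
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Real deepfake pipelines apply this same adversarial idea, plus autoencoder-based face swapping, to images, video, and audio at far larger scale.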
As an exercise, in early 2023, Wharton professor Ethan Mollick created a deepfake of himself; it took him eight minutes and $11 to make. Small wonder that North America alone saw a 1,740% increase in the number of deepfakes detected between 2022 and 2023.
While we need to take the threat of deepfakes seriously, it's important to note that not all deepfakes are malicious. Take, for example, the Elvis Presley deepfake featured in the 2022 season finale of America's Got Talent, or the deepfake the Dalí Museum used to bring Salvador Dalí back to life for a more engaging and interactive visitor experience.
You may have heard the "Heart on My Sleeve" deepfake that featured the voices of Drake and The Weeknd, seen the deepfake video of Taylor Swift endorsing Le Creuset cookware, or watched a deepfake version of Jennifer Aniston offering $10 MacBooks. But, undoubtedly, many (if not most) deepfakes are created with nefarious purposes in mind.
Deepfakes are of high concern to national governments. In the "Annual Threat Assessment of the U.S. Intelligence Community," issued by the Office of the Director of National Intelligence on February 5, 2024, "rampant deepfakes" are listed among the challenges posed by generative AI. As a specific example, the report notes, "Russia is using AI to create deepfakes and is developing the capability to fool experts. Individuals in warzones and unstable political environments may serve as some of the highest-value targets for such deepfake malign influence."
In its paper, “The Increasing Threat of Deepfake Identities,” the U.S. Department of Homeland Security wrote, “In the past, casual viewers (or listeners) could easily detect fraudulent content. This may no longer always be the case and may allow any adversary interested in sowing misinformation or disinformation to leverage far more realistic image, video, audio, and text content in their campaigns than ever before.”
Most often, these deepfakes are disseminated over social media. It's estimated that there are 5.17 billion social media users worldwide and that each uses an average of 6.7 social networks per month. Social media lets deepfake creators reach the widest possible audience or target a specific audience segment, whichever better serves their disinformation goals. And since most users cannot distinguish modern deepfakes from genuine people (our human senses alone, however highly attuned, are easily tricked by sophisticated AI), the billions of people on social media who view, like, share, or re-post deepfakes become unwitting accomplices in spreading false and potentially damaging content.
Media insight and research firm Fenimore Harper Communications found that between December 8, 2023 and January 8, 2024, more than 100 deepfake video ads impersonating British Prime Minister Rishi Sunak were promoted as paid placements on Meta's platform. These ads, funded from 23 different countries, may have reached over 400,000 people.
It's not just celebrities, politicians, and governments that are affected by the rise of malicious deepfakes spread across social media and through social engineering. Businesses are increasingly victims as well.
What are the dangers of deepfakes on social media for business?
Deepfakes have become one of those things that keep CISOs up at night. Through deepfakes deployed on social media and in social engineering attacks over email, video conferencing, instant messaging, and other everyday communications tools, employees can be convinced to do things they normally wouldn't.
CNN tells the story of a finance worker at a multinational firm who was tricked into paying $25 million to fraudsters after being duped into joining a video conference with people he believed were colleagues. Everyone else in the meeting was a deepfake, including the chief financial officer who ordered the transfer.
Another nightmare scenario is deepfake videos of key executives, sales representatives, or product experts that can move stock prices and erode competitive position in the market. When fraudsters need only one to three minutes of real video to create a deepfake, any shareholder presentation, company video, event appearance, TED Talk, or personal video posted on social media can be put to bad use.
Here are three common dangers that deepfakes pose for business: identity theft, damage to reputation, and misinformation.
Identity theft
As the $25 million transfer described above shows, fraudsters were able to steal the identities of several employees, including the CFO's. A bad actor can easily research a target company, including its business processes, executives, employees, and recent announcements, all of which can be used to inform a scam, such as a fake merger. Fraudsters not only look for videos online that they can use to create video and audio for their schemes; they also mine social media profiles for details that make deepfake scripts more realistic. Public information or social media posts about an employee's child graduating high school, a parent's death, or a new pet, though seemingly innocuous, become harmful details in the hands of a bad actor.
This power to impersonate key staff members leaves businesses vulnerable to whatever a fraudster's goal may be. Whether the end game is an illicit money transfer or access to the business's systems, deepfakes are dangerous and often insidious.
Damage to reputation
The power of social media to spread news quickly and widely gives fraudsters the ability to harm a business's reputation in the time it takes the company to discover the deepfake and begin mitigating the crisis. Such videos could attack the quality of a product or service, the truthfulness of the company, the character of its employees, or customer satisfaction. A video could also be designed to both damage the company's reputation and defraud it of money.
Businesses have historically been reputationally damaged by the mere implication of impropriety by an employee or leader, and a deepfake video or voice recording of an associated party doing or saying something offensive is far worse. Video takes a suggestion to the next level by presenting apparent "evidence" of inappropriate behavior.
A particularly pernicious example would be a video claiming that the company is liable for a product that malfunctioned and caused an injury. First, the fraudster identifies a product that has caused injuries in the past and reviews state tort laws to see if a quick settlement is possible. Then they find online video of the product malfunctioning, use it to create a deepfake of themselves being injured, release the video on social media, and ride a wave of public support.
The company's social media team sees the post, reaches out, and receives a request for compensation for the injuries. If the company can't quickly determine whether the video is a deepfake, it may end up paying the fraudster in an attempt to end the incident, stop the spread of the video, and limit the damage to its brand.
Misinformation
Deepfakes can also be designed to hurt a company's stock price, market penetration, or competitive position, or to derail prospective partnerships, mergers, and acquisitions.
As with an identity theft deepfake, this can be a video of the CEO or CFO announcing false deals, results, or future plans. A recent example was a video purporting to show Sundararaman Ramamurthy, CEO of India's BSE stock exchange, giving investment and stock advice.
A fraudster looking to make a quick profit could deliberately manipulate the stock price. In addition to creating a video of the CEO, they would create several deepfake profiles on stock market forums, each purporting to belong to an employee of the company. These "employees" discuss a pending announcement, and then the fraudster posts the fake announcement video on social media. The bad actor hopes the stock price will spike so they can cash out. The disinformation may drive other shareholders to sell and, once the fraud is discovered, can further damage the company's reputation.
Defending against these AI-powered attacks means incorporating AI into online security efforts, starting with strengthening the methods used to authenticate employees and customers.
How to enhance security
When it comes to deepfakes, strengthening your ability to ensure that customers and employees are who they say they are, and that content is legitimate, should be a foundation of your organization’s cybersecurity efforts.
These methods apply to both general system access and individual account access, and they include limiting access to the company's social media accounts to only those who truly need it.
Create a stronger password
Passwords are the least secure form of authentication, but employees and customers are accustomed to them. To be effective, passwords must be unique, complex, and free of any personal information that a fraudster could find on social media or within personally identifiable information (PII) available on the dark web.
Passwords are often reused because they can be hard to remember, and that very reuse is what leaves a business vulnerable to credential stuffing attacks, in which hackers try passwords stolen in other data breaches to gain access to systems or accounts.
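One practical guardrail is to screen new passwords against known breach corpora. Below is a minimal sketch using the public Have I Been Pwned "Pwned Passwords" range API; thanks to its k-anonymity design, only the first five characters of the password's SHA-1 hash ever leave your machine. Thresholds, retries, and error handling are omitted for brevity.

```python
# Check whether a candidate password appears in known breach data, using the
# Have I Been Pwned range API (only a 5-character hash prefix is sent).
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    # Each response line is "HASH_SUFFIX:COUNT"; match ours locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if breach_count("Summer2023!") > 0:
    print("This password has appeared in breaches; reject it.")
```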
Password complexity depends on both the number of characters and the types of characters used. As an example, Hive Systems estimates that, depending on the hardware a hacker uses, an eight-character password containing only numbers can be cracked in 37 seconds, but one containing upper- and lowercase letters, numbers, and symbols can take seven years to crack.
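The arithmetic behind such estimates is simple keyspace math, sketched below. The guesses-per-second figure here is an assumption (real rates depend on the password hashing algorithm and the attacker's hardware), so the absolute numbers will differ from Hive Systems' published table; the point is how sharply the keyspace grows with alphabet size.

```python
# Keyspace math behind brute-force time estimates.
GUESSES_PER_SECOND = 1e11  # assumed rate for a well-equipped GPU cracking rig

def crack_time_seconds(length: int, alphabet_size: int) -> float:
    """Worst-case time to exhaust every password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

for label, alphabet in [("digits only", 10),
                        ("upper + lower + digits + symbols", 94)]:
    seconds = crack_time_seconds(8, alphabet)
    print(f"8 characters, {label}: {seconds:.3g} seconds "
          f"({seconds / (365 * 86400):.3g} years)")
```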
Just as hackers can find personal details online and on social media to increase the plausibility of their deepfakes, they can also find pet names, children's names, key dates, addresses, and other elements that frequently appear in passwords.
Set up Two-Factor Authentication (2FA)
Incorporating an additional authentication factor increases online security. Two-factor authentication for social media goes beyond the password to incorporate something a user has (a device, like a smartphone) or something the user is (a biometric, like a facial scan or fingerprint).
Often, multi-factor authentication means sending a four- or six-digit code to a user's phone or email. While better than a stand-alone password, a one-time password (OTP) can be hacked, stolen, or spoofed, just as a password can.
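For illustration, here is a minimal sketch of how time-based one-time passwords (TOTP, RFC 6238) are derived; these are the same six-digit codes generated by authenticator apps. The base32 secret shown is an arbitrary example, and real deployments provision a per-user secret during enrollment rather than hard-coding one.

```python
# Minimal RFC 6238 TOTP derivation: HMAC a time-step counter with a shared
# secret, then dynamically truncate to a short numeric code.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # changes every `period` seconds
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, for demonstration only
```

Note that codes generated on a device are safer than codes delivered over SMS or email, which can be intercepted or redirected.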
A better approach is to use a biometric factor (fingerprint, voiceprint, or facial scan). As fraudsters increasingly use AI to try to impersonate legitimate users, today's most effective biometric authentication solutions also use next-gen, anti-fraud AI to identify and reject deepfake voice samples, videos, and facial images.
Biometric factors add a layer of security to MFA: they can't be stolen, forgotten, or lost. They are convenient to use, add minimal friction to the authentication process, and are far harder to compromise than knowledge-based factors.
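Conceptually, such a pipeline layers a liveness/deepfake check in front of the biometric match, as in the hypothetical sketch below. The function names, thresholds, and stub scores are all invented for illustration; commercial solutions expose their own APIs and models.

```python
# Hypothetical sketch of deepfake-aware biometric authentication.
# liveness_score() and match_score() are stubs standing in for real
# presentation-attack-detection and biometric-matching models.
from dataclasses import dataclass

@dataclass
class AuthDecision:
    accepted: bool
    reason: str

def liveness_score(sample: bytes) -> float:
    return 0.99  # stub: a real detector scores how likely the media is live and genuine

def match_score(sample: bytes, template: bytes) -> float:
    return 0.97  # stub: a real matcher compares embeddings against the enrolled template

def authenticate(sample: bytes, enrolled_template: bytes) -> AuthDecision:
    # Reject synthetic or replayed media before even attempting a match.
    if liveness_score(sample) < 0.90:
        return AuthDecision(False, "possible deepfake or replay")
    if match_score(sample, enrolled_template) < 0.95:
        return AuthDecision(False, "biometric mismatch")
    return AuthDecision(True, "verified live user")

print(authenticate(b"captured selfie bytes", b"enrolled template"))
```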
Learn more about how Daon’s biometric authentication solutions provide a better defense against deepfakes.