
Examining Deepfakes and the Growing Threat of Synthetic Media

In March 2024, a call hosted by the National Association of State Chief Information Officers (NASCIO) brought together government and corporate IT leaders to address a pressing challenge: a new generation of AI-powered phishing attacks targeting government entities in ways never seen before. These messages reach private and public organizations with impeccable delivery, devoid of the telltale signs that once exposed phishing attempts, such as typos or formatting errors. Instead, they harness AI-generated deepfakes capable of mimicking voices, faces, and gestures with alarming realism.

While deepfakes are not new, the ease and scale with which cyber actors are using them is unprecedented. The AI technologies underlying deepfakes are now widely available at minimal cost and require little more than a personal laptop and basic technical proficiency to deploy. With technological advancements lowering the barriers to entry and deep learning-based algorithms readily available on platforms like GitHub, deepfakes are becoming increasingly common.

This phenomenon, flagged by the U.S. Government Accountability Office (GAO) as early as 2020, poses a considerable threat, as disinformation and fraudulent messages are disseminated with remarkable speed and sophistication. The potential for exploitation is alarming: these tactics are capable of influencing elections and eroding trust in public institutions.

Join us as we explore the creation and dissemination of deepfakes and examine how organizations can protect themselves from this evolving threat.

What are deepfakes?

Deepfakes are videos, photos, or audio recordings that seem real but have been created or digitally modified by Artificial Intelligence (AI). These manipulations, achieved with the use of machine or deep learning technology, can range from subtle alterations to full-fledged synthetic creations designed to depict individuals saying or doing things they never actually said or did. Through advanced AI algorithms, these tools can replace faces, manipulate facial expressions, and even fabricate speech with astonishing accuracy.

This ability to seamlessly graft one person’s actions onto another’s likeness opens the door to many potential misuses, from spreading disinformation to sowing confusion and discord. While deepfakes aren’t inherently malicious, their potential for exploitation is significant. Whether created by swapping faces in videos or altering speech in audio recordings, the result is often meant to deceive unsuspecting viewers and listeners.

How are deepfakes created?

Deepfakes are created with artificial neural networks, systems modeled loosely on the human brain that can discern patterns within data. Through a process of training, in which hundreds of thousands of images are fed into the network, these networks learn to identify and reconstruct specific patterns, particularly those found in faces.

The technologies used to create deepfakes are varied. For example, autoencoders are artificial neural networks trained to reconstruct input from a simpler representation. On the other hand, generative adversarial networks (GANs) comprise two competing artificial neural networks, one generating fake content and the other striving to discern its authenticity. Over many cycles, the result is a gradual refinement of the deepfake, leading to the creation of remarkably plausible renderings of faces and voices.
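To make the adversarial setup concrete, the sketch below shows a deliberately minimal GAN training loop in PyTorch. The tiny fully connected networks, random stand-in data, and hyperparameters are illustrative assumptions; real deepfake systems use far larger convolutional architectures trained on large face datasets.

```python
# A minimal, illustrative GAN training loop in PyTorch. This is a toy
# sketch of the adversarial setup described above, not a production
# deepfake pipeline; sizes, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100

# Generator: maps random noise to a fake (flattened) image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated images.
    fakes = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, NOISE_DIM))),
                     real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Each call refines both networks; over many cycles the generator's output
# becomes progressively harder for the discriminator to flag as fake.
training_step(torch.randn(32, IMG_DIM))  # stand-in batch of "real" images
```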

What once was the domain of Hollywood special effects teams has now become accessible to virtually anyone with a computer and minimal technical proficiency. Thanks to freely available computer applications and online tutorials, creating basic deepfake content has become easier than ever. However, a higher level of technical skill is required for more sophisticated deepfakes, particularly those generated using GANs. Developing a truly realistic deepfake also demands hundreds of thousands of training images, often available only for celebrities and government leaders, who continue to be the most common subjects of these artificially created renderings.

Nevertheless, as artificial neural network technologies continue to evolve in tandem with computing capabilities, the barriers to entry are expected to diminish, underscoring the potential impact of deepfakes in an increasingly digital world.

How are deepfakes used?

Deepfakes have numerous benign applications. In the world of entertainment, commerce, and communication, the potential for legitimate uses is vast, from seamlessly integrating absent actors into films to enabling virtual try-ons for online shoppers. Speech synthesis and facial manipulation can also facilitate cross-linguistic communication, as demonstrated by consensual deepfake videos of public figures such as David Beckham and Indian politicians, used to deliver messages in multiple languages.

However, deepfakes are more commonly used for exploitation. According to a report published by Deeptrace, the majority of deepfake content online is pornographic and disproportionately victimizes women. Deepfakes are also weaponized for disinformation campaigns, with the potential to sway elections, incite civil unrest, undermine public trust in audiovisual content, and even erode trust in legitimate evidence, highlighting the urgent need for effective countermeasures.

Even governments are not immune to the allure of deepfake technology. Recent reports of the U.S. military's alleged use of deepfakes in overseas influence campaigns raise critical ethical questions and potential security risks. While there may be rare circumstances in which deepfakes serve legitimate national security interests and advance foreign policy objectives, the implications for democracy and public trust are significant. Democratic governments must therefore weigh efficacy, audience, potential harms, legal implications, and the preservation of trust in public institutions.

What threats do deepfakes pose?

As deepfake techniques become increasingly sophisticated, the potential for harm is significant, ranging from election interference and the spread of misinformation about political, social, military, or economic issues to the undermining of national security.

Since the advent of deepfakes, disseminating disinformation has become easier than ever, while proving the authenticity of genuine media has become increasingly difficult.

Among other victims, government organizations often grapple with the erosion of trust in their digital communications and, as a result, in public institutions as a whole. This loss of confidence poses a significant challenge to organizations striving to maintain their credibility and can undermine the integrity of democratic processes.

According to Candice Rockell Gerstner, an NSA applied research mathematician who specializes in multimedia forensics, the threats posed by deepfakes extend beyond reputational damage to tangible risks to national security and organizational integrity. From the impersonation of leaders and financial officers to the facilitation of network breaches, the potential avenues for exploitation are manifold, underscoring the need for proactive measures to mitigate the impact of deepfake attacks on governmental operations and public trust.

How can organizations mitigate the threat of deepfakes?

Government and corporate IT leaders agree that security awareness training for end users is back near the top of government cybersecurity priorities, particularly to mitigate the impact of deepfakes and AI-generated phishing attacks on government operations.

Over the past few years, as AI-generated fraud has become harder to detect, there has been an increased industry push to move beyond traditional security awareness training for end users and adopt a more holistic set of measures to combat cyber attacks directed at people, an approach many now call human risk management (HRM). As outlined by Forrester, effective human risk management includes “solutions that manage and reduce cybersecurity risks posed by and to humans through: detecting and measuring human security behaviors and quantifying the human risk; initiating policy and training interventions based on the human risk; educating and enabling the workforce to protect themselves and their organization against cyber attacks; building a positive security culture.” By empowering individuals to become actively engaged in identifying and reporting risks, organizations can fortify their defenses from within.
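As one illustration of the "measuring and quantifying" component of that definition, the hedged sketch below computes a simple per-employee risk score from phishing-simulation results and training completion. The field names, weights, and thresholds are invented for illustration; real HRM platforms draw on far richer behavioral signals.

```python
# A toy human-risk score, illustrating the "detect, measure, quantify"
# idea from the Forrester definition above. All field names, weights,
# and thresholds here are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class EmployeeSignals:
    phishing_sims_failed: int   # simulated phishing emails clicked
    phishing_sims_total: int    # simulated phishing emails received
    reports_submitted: int      # suspicious messages reported by the user
    training_completed: bool    # finished the latest awareness course

def risk_score(s: EmployeeSignals) -> float:
    """Return a 0-100 score; higher means more human risk."""
    fail_rate = s.phishing_sims_failed / max(s.phishing_sims_total, 1)
    score = 70 * fail_rate                      # clicking lures dominates
    score += 0 if s.training_completed else 20  # stale training adds risk
    score -= min(s.reports_submitted * 5, 15)   # active reporting reduces risk
    return max(0.0, min(100.0, score))

employee = EmployeeSignals(phishing_sims_failed=2, phishing_sims_total=10,
                           reports_submitted=1, training_completed=False)
print(risk_score(employee))  # 29.0 -> could trigger targeted retraining
```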

A key aspect of human risk management is retraining employees to detect the subtle signs of a new generation of phishing attacks, particularly those powered by deepfake technology. Employees need the knowledge and skills necessary to authenticate the source and content of messages and identify inconsistencies in audio or video quality, such as mismatched lip-syncing, poor voice synchronization, unnatural movements, and other signs of media manipulation. Establishing processes for verifying message authenticity and encouraging staff to report deepfake content to the platforms hosting it are essential steps in bolstering organizational resilience against such threats.

In addition to retraining employees, organizations can leverage new enterprise technology tools powered by AI to detect deepfakes or authenticate genuine media. Detection solutions utilize AI technology to analyze audiovisual content and identify details that deepfakes fail to imitate realistically, such as color abnormalities, inconsistent blinking, or mouth movements that don’t match the audio.
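As a concrete example of one such cue, the sketch below estimates eye openness per video frame using the well-known eye-aspect-ratio (EAR) heuristic, so that unusually low blink counts can be flagged. It assumes dlib's separately downloaded 68-point landmark model and OpenCV; the 0.21 threshold is an illustrative assumption, and production detectors rely on trained models rather than a single hand-tuned cue.

```python
# Flagging "inconsistent blinking" with the eye-aspect-ratio (EAR)
# heuristic. Requires opencv-python, dlib, scipy, and dlib's 68-point
# landmark model file; the 0.21 threshold is an illustrative assumption.
import cv2
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(eye):
    # Ratio of eye height to width; it drops sharply during a blink.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path, ear_threshold=0.21):
    cap = cv2.VideoCapture(video_path)
    blinks, closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            pts = predictor(gray, face)
            # Landmarks 36-41 and 42-47 outline the left and right eyes.
            left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]
            right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            if ear < ear_threshold and not closed:
                blinks, closed = blinks + 1, True
            elif ear >= ear_threshold:
                closed = False
    cap.release()
    return blinks, frames

# A very low blink count over a long clip is one (weak) deepfake signal,
# best combined with other cues and dedicated detection models.
```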

Organizations can also implement authentication technologies that help verify the authenticity of media content and detect alterations. By embedding authentication features, such as digital watermarks or metadata, at the time media is created, or by utilizing blockchain to create tamper-evident records, organizations can protect their digital communications from fraud.
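As a minimal illustration of the embedded-authentication idea, the sketch below tags a media file with an HMAC at creation time and verifies it later; altering even one byte of the file invalidates the tag. The shared-secret key handling and the sidecar-file convention are simplifying assumptions; production systems typically use public-key signatures or content-credential standards.

```python
# Tamper-evident "signing" of a media file with HMAC-SHA256.
# The shared secret key and the .sig sidecar-file convention are
# simplifying assumptions for illustration, not a production design.
import hmac
import hashlib
from pathlib import Path

SECRET_KEY = b"replace-with-a-managed-secret"  # assumes real key management

def sign_media(path: str) -> None:
    # Compute a keyed digest of the file and store it alongside the media.
    data = Path(path).read_bytes()
    tag = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    Path(path + ".sig").write_text(tag)

def verify_media(path: str) -> bool:
    # Recompute the digest and compare it to the stored tag.
    data = Path(path).read_bytes()
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    stored = Path(path + ".sig").read_text().strip()
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(expected, stored)

# sign_media("briefing.mp4")     # at creation time
# verify_media("briefing.mp4")   # False if the file was altered afterward
```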

Future Challenges

Existing deepfake detection solutions face limitations in real-world scenarios, where variations in lighting conditions, facial expressions, or audio-video quality can impede accurate identification. Detection tools also require extensive and diverse training data sets that are often unavailable or difficult to acquire. As current tools cannot perform comprehensive and automated analyses that reliably detect deepfakes across various contexts, detection is often one step behind creators.

As detection techniques advance, so do the methods used to create deepfakes, leading to a perpetual cat-and-mouse game. “As AI has improved, deepfakes have gone from primitive to highly realistic, and they will only get harder to distinguish,” the authors of a recent study published by the Center for Strategic & International Studies (CSIS) remark, highlighting the evolving nature of deepfake technology and the challenges it poses for detection. As deepfake creators continually innovate to evade detection and employ increasingly sophisticated techniques, hallmarks of deepfakes, such as abnormal blinking, are expected to disappear gradually.

Another critical challenge is that deepfakes remain effective disinformation tools even after they are detected: many viewers are unaware of their existence or do not take the time to verify the authenticity of multimedia content before sharing it, so detection alone may not be enough to prevent harm.

Addressing the proliferation of deepfake media also creates complex legal and regulatory challenges. Laws or regulations aimed at combating deepfakes must take into account constitutional considerations, such as freedom of speech and expression, as well as the privacy rights of the subject. Given the difficulty of enforcing such regulations and the inconsistent standards major social media platforms apply when moderating deepfakes, achieving consensus on effective regulatory measures remains elusive.

From the creation of hyper-realistic videos to the dissemination of disinformation campaigns, the rise of deepfakes presents a complex challenge to private and public organizations alike. AI-generated and manipulated media pose a significant threat to cybersecurity, public trust, and democratic processes.

To mitigate the threat of deepfakes, organizations must adopt comprehensive human risk management strategies that encompass both cybersecurity awareness training and the utilization of advanced AI technologies designed for detection and authentication.

As advancements in AI continue to blur the lines between reality and fabrication, collaborative efforts can help establish effective measures to mitigate the impact of deepfakes and safeguard digital trust in an increasingly virtual world.
