Deepfake attacks: A growing threat to the industry

A guest post by Ulf Baltin*


Deepfakes, deceptively realistic AI-generated content, pose a growing threat to companies. They are increasingly used for cyber fraud. What preventive measures are there to minimize the risks?

Blackberry highlights the threat potential of AI-supported attacks on companies and presents countermeasures. (Image: freely licensed / Pixabay)

Technologies based on artificial intelligence have been making rapid progress recently. Consequently, a particular form of digital deception is gaining increasing importance: deepfakes. These highly realistic, synthetic imitations created using AI pose significant challenges for companies and public organizations in terms of cybersecurity.

What are deepfakes?

Deepfakes are AI-generated videos, images, or audio files that are manipulated so convincingly that they are hardly distinguishable from real recordings. While the technology does find creative and artistic applications, it also carries significant potential for misuse.

The applications of deepfakes are diverse. In the entertainment industry, for example, they are used to bring historical figures back to life or digitally rejuvenate actors. Deepfakes are also used in private settings, such as animating old family photos.

The dark side: Deepfake fraud

The threat posed by deepfakes is steadily increasing. Cybercriminals are increasingly using the technology for sophisticated fraud schemes that can cause significant financial losses and severely damage the reputation of a company or individual.

A particularly alarming example occurred in February 2024, when a finance employee of a multinational company was deceived by a deepfake video. In a manipulated video conference showing supposed colleagues and the Chief Financial Officer, he was tricked into transferring 25 million dollars to the cybercriminals behind the video. AI voice cloning is also being used for fraud: a Ferrari manager recently received fraudulent WhatsApp messages, supposedly from the Chief Executive Officer, requesting a secret currency transaction. Only through critical questioning was the attempted fraud uncovered before any damage was done.

Identification and countermeasures

Detecting deepfakes is becoming increasingly difficult, as there is no universal solution for it. Instead, a multilayered approach is required, combining various techniques and methods. Promising approaches include digital watermarks, cryptographic signatures, and blockchain-based verification methods. The US government is even considering obliging tech companies to label AI-generated content.
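To picture how cryptographic signatures support authenticity validation: the publisher binds a signature to the content's exact bytes, so any later manipulation invalidates it. Below is a minimal Python sketch using the standard library's hmac module. This is a simplified, symmetric stand-in; real provenance schemes (such as C2PA) use asymmetric key pairs and certificate chains, and the key shown here is purely illustrative.

```python
import hashlib
import hmac

# Hypothetical shared secret for illustration only;
# real provenance systems use asymmetric key pairs.
SIGNING_KEY = b"publisher-secret-key"

def sign_content(data: bytes) -> str:
    """Return a hex HMAC-SHA256 tag binding the content bytes to the key."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_content(data: bytes, tag: str) -> bool:
    """Constant-time check that the tag still matches the content."""
    return hmac.compare_digest(sign_content(data), tag)

video = b"...raw bytes of a video file..."
tag = sign_content(video)

assert verify_content(video, tag)             # untouched content verifies
assert not verify_content(video + b"x", tag)  # any tampering breaks the tag
```

The key property this illustrates is tamper evidence: a verifier does not need to judge whether footage "looks real", only whether the signature still matches the bytes that were originally published.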

Crucial, however, is employee awareness and training. Companies should implement robust training programs that educate the workforce about the dangers of deepfakes and train them to detect them.

Measures against deepfake attacks

In the case of malicious deepfake attacks, it is important to report them to the relevant authorities. In the USA, these include the NSA Cybersecurity Collaboration Center, the FBI, and the Cybersecurity and Infrastructure Security Agency (CISA). In Germany, by contrast, there are as yet no specific legal regulations for deepfakes.

This makes effective prevention all the more important for companies in Germany. Affected parties there can report incidents to the Federal Office for Information Security (BSI) or turn to one of the service providers in the BSI's Cyber Security Network (CSN).

An unvarnished outlook

With the rapid advancement of AI technology, the sophistication of deepfakes will also increase. However, the technology for detecting deepfakes is currently lagging far behind the production speed of new AI software and systems. It is essential for product designers, engineers, and executives to work closely together to define cryptographically secure standards for the authenticity validation of digital content.

Until then, companies can do little more than consistently apply the available risk-minimization strategies and remain vigilant. In today's digital world, the next attack could already have a deceptively real face.

*Ulf Baltin works as Managing Director at Blackberry DACH | Central Europe.
