Meta AI in the European Economic Area despite an unchanged legal situation?

A guest article by Johannes Marco Holz & Frederik Kopp* | Translated by AI

As recently revealed, Facebook's parent company Meta intends to introduce AI features in its own apps within the EEA, despite unchanged EU law. Here, lawyers Johannes Marco Holz and Frederik Kopp provide an assessment of the approach.

Although the legal situation has not changed, Meta is now planning to introduce its AI features in the European Economic Area. (Image: freely licensed / Pixabay)

Johannes Marco Holz, LL.M., is a specialist lawyer for information technology law and a certified corporate data protection officer (GDDcert. EU) at Rödl & Partner, working in the field of IT and data protection law.

Lawyer Frederik Kopp, LL.M., is a Senior Associate at Rödl & Partner. He advises medium-sized and international companies, particularly in the areas of data protection law and compliance.

Meta is now introducing AI features for its apps in the European Economic Area (EEA). In light of developments over the past year, this raises numerous legal questions. Despite an unchanged legal framework, Meta is rolling out its AI-powered features in apps such as Facebook, Instagram, and WhatsApp, even though the company had only recently ruled this out, citing local data protection regulations. But how does this align with the strict requirements of the General Data Protection Regulation (GDPR) and the AI Act? This article takes a critical look at the legal circumstances and the potential risks for companies and users.

Unclear functional adjustments of the AI

According to Meta, the functionality of its AI has been adjusted to comply with European data protection regulations. However, which specific changes have been made remains largely unclear. Currently, there is no detailed information on how data processing and storage will be managed in Europe.

So far, Meta has stored personal data in global data centers, which remains problematic under the GDPR. Without clear documentation of the technical adjustments, using the AI remains risky for companies and private users alike.

Introduction of AI with unchanged legal situation

The surprising decision to roll out the AI application at such short notice raises particular questions. Last year, Meta CEO Mark Zuckerberg emphasized that the introduction of Meta AI was not possible due to the applicable data protection regulations. Now the AI is to come to the EEA after all, and without any changes to the legal framework.

The GDPR and the AI Act continue to set legal requirements for the use of artificial intelligence and the processing of personal data. However, Meta has so far provided hardly any transparent information on why the AI application is now suddenly supposed to comply with these European regulations.

Exclusion of EU data for training purposes

A possible explanation for the change can be seen in the exclusion of EU data from the training of Meta AI. But is this sufficient to meet GDPR requirements? The legal requirements of the GDPR and the AI Act cover not only the training phase but also the use and storage of data during AI operation. The question remains whether training data could surface in the AI's generated responses. If "residues" of this data appear in the AI's output, this could constitute a violation of data protection regulations.

Even the interaction of EU users with the AI can involve the processing of personal data. This is particularly problematic in messenger services and social networks, where communication content is processed. Without clear information on how these data are used, stored, and protected from third-party access, the use of the AI remains questionable.

Data protection risks for companies through Meta apps

The use of Meta apps with AI features is particularly concerning for companies. In the past, Facebook, Instagram, and WhatsApp have been criticized for collecting user data and using it for purposes that are not clearly defined. The integration of the AI feature exacerbates this issue further.

Even if the AI does not directly access business-critical information, Meta applications could indirectly capture sensitive data—such as through messages, voice recordings, or shared content. Companies should therefore carefully assess whether using Meta's AI is compatible with their data protection policies and compliance requirements. Professions with high confidentiality requirements are advised to exercise particular caution.

Questions remain unanswered

The implementation of Meta's AI features in the EEA is occurring under unclear legal circumstances. The legal framework has not changed, and Meta has not yet provided comprehensible information on which changes to the AI's functionality would now make its use compliant.

The mere exclusion of EU data from the AI training process is not sufficient to meet GDPR standards. It remains unclear how Meta handles data that might be contained in AI-generated content. Companies and individuals should therefore continue to be cautious and keep an eye on the protection of personal data when using Meta apps.
