
Building AI Technology for Europeans in a Transparent and Responsible Way

By Stefano Fratta, Global Engagement Director, Meta Privacy Policy

Takeaways

  • We are working hard to build cutting-edge AI technology for Europeans that reflects their languages, geography and cultural references, just as it does for other regions of the world.
  • We are following the example set by others, including Google and OpenAI, both of which have already used data from Europeans to train AI. Our approach is more transparent and offers easier controls than many of our industry counterparts already training their models on similar publicly available information.
  • Models are built by looking at people’s information to identify patterns, like understanding colloquial phrases or local references, not to identify a specific person or their information.
  • We’re not using people’s private messages with friends and family to train our AI systems, nor do we use content from accounts of Europeans under age 18.
  • We’re using content that people have chosen to make public to build our foundational AI model that we release openly. 

Update on June 14, 2024 at 7:30am PT:

We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram, particularly since we incorporated regulatory feedback and the European DPAs have been informed since March. This is a step backwards for European innovation and competition in AI development, and it further delays bringing the benefits of AI to people in Europe.

We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.

We are committed to bringing Meta AI, along with the models that power it, to more people around the world, including in Europe. But, put simply, without including local information, we’d only be able to offer people a second-rate experience. This means we aren’t able to launch Meta AI in Europe at the moment.

We will continue to work collaboratively with the DPC so that people in Europe have access to – and are properly served by – the same level of AI innovation as the rest of the world.

This delay will also enable us to address specific requests we have received from the Information Commissioner’s Office (ICO), our UK regulator, ahead of starting the training.


Originally published on June 10, 2024 at 6am PT:

For years, we’ve been working hard to build the next generation of AI features across our family of apps and devices. And this year, we plan to expand AI at Meta – our collection of generative AI features and experiences along with the models that power them – to people in Europe.

AI at Meta is already available in other parts of the world. This includes Llama, our state-of-the-art open source large language model, and the Meta AI assistant, the most intelligent AI assistant you can use for free. To properly serve our European communities, the models that power AI at Meta need to be trained on relevant information that reflects the diverse languages, geography and cultural references of the people in Europe who will use them. To do this, we want to train our large language models that power AI features using the content that people in the EU have chosen to share publicly on Meta’s products and services.

If we don’t train our models on the public content that Europeans share on our services and others, such as public posts or comments, then models and the AI features they power won’t accurately understand important regional languages, cultures or trending topics on social media. We believe that Europeans will be ill-served by AI models that are not informed by Europe’s rich cultural, social and historical contributions.

Meta is not the first company to do this – we are following the example set by others, including Google and OpenAI, both of which have already used data from European users to train AI. Our approach is more transparent and offers easier controls than many of our industry counterparts already training their models on similar publicly available information.

We Are Committed to Developing AI Responsibly and Transparently

Building this technology comes with the responsibility to develop best practices and policies that adhere to local laws and regulations. In line with this commitment, we are in consultation with our lead privacy regulator in the EU, the Irish Data Protection Commission, and have incorporated their feedback to date to ensure that the way we train AI at Meta complies with EU privacy laws. We also continue to work with experts like academics and consumer advocates to ensure that what we build follows best practices.

We want to be transparent with people so they are aware of their rights and the controls available to them. That’s why, since 22 May, we’ve sent more than two billion in-app notifications and emails to people in Europe to explain what we’re doing. These notifications contain a link to an objection form that gives people the opportunity to object to their data being used in our AI modelling efforts. 

In developing our notifications, we reviewed the approach of our industry counterparts as well as our previous policy update notifications. As a result, we made our form easier to find, read and use than those of other companies offering generative AI in the EU: it can be accessed with just three clicks and requires fewer fields to be completed. To further aid understanding, we also designed it to be more accessible to people with a lower reading age, even though we are not training our Llama models on content from accounts of Europeans under the age of 18.

We are honouring all European objections. If an objection form is submitted before Llama training begins, then that person’s data won’t be used to train those models, either in the current training round or in the future.

To be clear, our goal is to build useful features based on information that people over 18 in Europe have chosen to share publicly on Meta’s products and services, such as public posts, public comments, or public photos and their captions. Models may be trained on people’s publicly shared posts, but the resulting model is not a database of each person’s information, nor is it designed to identify any individual. Rather, these models are built by looking at people’s information to identify patterns, like understanding colloquial phrases or local references, not to identify a specific person or their information.

As we’ve said, we do not use people’s private messages with friends and family to train our AI systems. In the future, we anticipate using other content, such as interactions with AI features or chats with a business using AI at Meta.

Like many industry counterparts that have come before us in training large language models using Europeans’ data, to do this work in the EU we will rely on the legal basis of ‘Legitimate Interests’ to comply with the EU’s General Data Protection Regulation (GDPR). We believe this legal basis strikes the most appropriate balance for processing public data at the scale necessary to train AI models while respecting people’s rights.

We feel a responsibility to build AI that is not forced on Europeans but actually built for them. To do that while respecting European users’ choices, we think the right thing to do is to let them know of our plans and give them the choice to tell us if they don’t want to participate. And we believe the best way to strike this balance is for companies to be transparent about the information their AIs are using, while providing users with prominent controls to opt out of those uses if that is their preference. This is precisely what we have done.

Europe is at a Crossroads

As Europe stands at the threshold of society’s next major technological evolution, some activists are advocating extreme approaches to data and AI. Let’s be clear: those positions don’t reflect European law, and they amount to an argument that Europeans shouldn’t have access to — or be properly served by — AI that the rest of the world has. We deeply disagree with that outcome. 

As one of the most influential regions in the world, Europe has the potential to be a competitive leader in AI innovation. But questions still remain: will Europeans have equal access to groundbreaking AI? Will AI experiences reflect our culture, humour and history? Or does Europe want to watch as the rest of the world benefits from truly innovative technology that builds community and drives growth? 

AI is the next frontier in innovation. We’re living in one of the most exciting technological moments in a generation, where breakthroughs are happening in front of our eyes and the possibilities are endless. And at Meta, we want Europeans to be part of it.


