Meta has confirmed that it will pause plans to start training its AI systems using data from its users in the European Union and U.K.
The move follows pushback from the Irish Data Protection Commission (DPC), Meta’s lead regulator in the EU, which is acting on behalf of several data protection authorities across the bloc. The U.K.’s Information Commissioner’s Office (ICO) also requested that Meta pause its plans until it could satisfy concerns it had raised.
“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said in a statement Friday. “This decision followed intensive engagement between the DPC and Meta. The DPC, in cooperation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
While Meta is already tapping user-generated content to train its AI in markets such as the U.S., Europe’s stringent GDPR regulations have created obstacles for Meta — and other companies — looking to improve their AI systems, including large language models, with user-generated training material.
However, Meta began notifying users of an upcoming change to its privacy policy last month, one that it said will give it the right to use public content on Facebook and Instagram to train its AI, including content from comments, interactions with companies, status updates, photos and their associated captions. The company argued that it needed to do this to reflect “the diverse languages, geography and cultural references of the people in Europe.”
These changes were due to come into effect on June 26, 2024 — 12 days from now. But the plans spurred not-for-profit privacy activist organization NOYB (“none of your business”) to file 11 complaints with constituent EU countries, arguing that Meta is contravening various facets of GDPR. One of those complaints concerns the issue of opt-in versus opt-out: where personal data processing takes place, users should be asked for their permission first, rather than being required to take action to refuse.
Meta, for its part, was relying on a GDPR provision called “legitimate interests” to contend that its actions were compliant with the regulations. This isn’t the first time Meta has used this legal basis in defence, having previously done so to justify processing European users’ data for targeted advertising.
It always seemed likely that regulators would at least put a stay of execution on Meta’s planned changes, particularly given how difficult the company had made it for users to “opt out” of having their data used. The company said that it sent out more than 2 billion notifications informing users of the upcoming changes, but unlike other important public messaging that is plastered to the top of users’ feeds, such as prompts to go out and vote, these notifications appeared alongside users’ standard notifications: friends’ birthdays, photo tag alerts, group announcements and more. So anyone who didn’t regularly check their notifications could all too easily miss the message.
And those who did see the notification wouldn’t automatically know that there was a way to object or opt out, as it simply invited users to click through to find out how Meta would use their information. There was nothing to suggest that there was a choice here.
Meta: AI notification Image Credits: Meta
Moreover, users technically weren’t able to “opt out” of having their data used. Instead, they had to complete an objection form where they put forward their arguments for why they didn’t want their data to be processed — it was entirely at Meta’s discretion as to whether this request was honored, though the company said it would honor each request.
Facebook “objection” form Image Credits: Meta / Screenshot
Although the objection form was linked from the notification itself, anyone proactively looking for the objection form in their account settings had their work cut out.
On Facebook’s website, they had to first click their profile photo at the top right; hit Settings & privacy; tap Privacy Center; scroll down and click on the Generative AI at Meta section; then scroll down again, past a bunch of links, to a section titled More resources. The first link under this section is called “How Meta uses information for Generative AI models,” and they needed to read through some 1,100 words before getting to a discreet link to the company’s “right to object” form. It was a similar story in the Facebook mobile app, too.
Link to “right to object” form Image Credits: Meta / Screenshot
Earlier this week, when asked why this process required the user to file an objection rather than opt-in, Meta’s policy communications manager Matt Pollard pointed TechCrunch to its existing blog post, which says: “We believe this legal basis [“legitimate interests”] is the most appropriate balance for processing public data at the scale necessary to train AI models, while respecting people’s rights.”
To translate this, making this opt-in likely wouldn’t generate enough “scale” in terms of people willing to offer their data. So the best way around this was to issue a solitary notification in amongst users’ other notifications; hide the objection form behind half-a-dozen clicks for those seeking the “opt-out” independently; and then make them justify their objection, rather than give them a straight opt-out.
In an updated blog post today, Meta’s global engagement director for privacy policy Stefano Fratta said that Meta was “disappointed” by the request it had received from the DPC.
“This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Fratta wrote. “We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.”
AI arms race
None of this is new, of course, and Meta is in an AI arms race that has shone a giant spotlight on the vast arsenal of data Big Tech holds on all of us.
Earlier this year, Reddit revealed that it’s contracted to make north of $200 million in the coming years for licensing its data to companies such as ChatGPT-maker OpenAI and Google. And the latter of those companies is already facing huge fines for leaning on copyrighted news content to train its generative AI models.
But these efforts also highlight the lengths to which companies will go to ensure that they can leverage this data within the constraints of existing legislation — “opting in” is rarely on the agenda, and the process of opting out is often needlessly arduous. Just last month, someone spotted some dubious wording in an existing Slack privacy policy that suggested it would be able to leverage user data for training its AI systems, with users able to opt out only by emailing the company.
And last year, Google finally gave online publishers a way to opt their websites out of training its models by enabling them to inject a piece of code into their sites. OpenAI, for its part, is building a dedicated tool to allow content creators to opt out of training its generative AI smarts — this should be ready by 2025.
While Meta’s attempt to train its AI on users’ public content in Europe is on ice for now, it will likely rear its head again in another form after consultation with the DPC and ICO — hopefully with a different user-permission process in tow.
“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset,” Stephen Almond, the ICO’s executive director for regulatory risk, said in a statement today. “We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”