Plans by Meta to use public posts and images from Facebook and Instagram to train its artificial intelligence (AI) tools have faced backlash from digital rights groups.
The social media company recently notified UK and European users that, from 26 June, changes to its privacy policy will allow it to use their information to “develop and improve” its AI products.
This policy includes posts, images, image captions, comments, and Stories shared publicly by users over the age of 18 on Facebook and Instagram, excluding private messages.
Noyb, a European digital rights advocacy group, has denounced this extensive processing of user content as an “abuse of personal data for AI”. The group has filed complaints with 11 data protection authorities across Europe, urging them to halt Meta’s plans immediately.
Meta maintains that its approach aligns with relevant privacy laws and mirrors the practices of other major tech companies in using data to develop AI experiences across Europe.
In a blog post, Meta stated that European user information would facilitate a broader rollout of its generative AI experiences by providing more pertinent training data. “These features and experiences need to be trained on information that reflects the diverse cultures and languages of the European communities,” the post read.
Tech companies are in a race to acquire diverse, multiformat data to enhance models powering chatbots, image generators, and other innovative AI products.
In a February earnings call, Meta CEO Mark Zuckerberg emphasised the importance of the firm’s “unique data” in its AI strategy, highlighting the vast amounts of publicly shared images, videos, and text posts at their disposal.
Meta’s chief product officer, Chris Cox, also mentioned in May that the company already uses public data from Facebook and Instagram for its generative AI products in other parts of the world.
Criticism has also been directed at how Meta communicated these data usage changes to users. Facebook and Instagram users in the UK and Europe recently received notifications or emails explaining that their data would be used for AI from 26 June.
The company is relying on “legitimate interests” as its legal basis for data processing, meaning users must actively opt out by exercising their “right to object” if they do not want their data used for AI.
Users can click on the hyperlinked “right to object” text in the notification, which leads to a form where they must explain how the processing affects them.