Meta Platforms is facing 11 complaints, filed on June 5, over proposed changes that would allow the company to use personal data to train artificial intelligence (AI) models without obtaining consent, potentially breaching European Union privacy laws.
The privacy advocacy group None of Your Business (NOYB) urged national privacy watchdogs to take immediate action to halt Meta’s planned changes. The complaints were filed in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain.
The complaints allege that Meta’s updated privacy policy, effective June 26, would enable the company to use years of personal posts, private images and online tracking data for its AI technology. NOYB asked data protection authorities in the 11 countries to launch an urgent review of the impending changes.
According to a statement from NOYB, Meta’s revised privacy policy relies on “legitimate interest” to justify using personal data to train generative AI models and other AI tools, which can be shared with third parties. The change affects millions of European users, who would be unable to have their data removed once it enters the system.
NOYB has previously lodged complaints against Meta and other major tech firms for alleged infringements of the EU’s General Data Protection Regulation (GDPR), which imposes fines of up to 4% of a company’s global turnover for violations.
Max Schrems, NOYB’s founder, highlighted that the European Court of Justice had issued a significant ruling on this matter in 2021, providing guidance on Meta’s proposed use of personal data. Schrems argued that users should not bear the burden of safeguarding their privacy and insisted that Meta must obtain explicit consent from users rather than offering a concealed opt-out option.
He stressed that Meta should ask users directly for permission to use their data, rather than requiring them to request exclusion from data usage, which he deemed inappropriate. In a similar vein, Google faced a lawsuit in July 2023 for allegedly misusing vast amounts of data, including copyrighted material, in its AI training.