LinkedIn Halts AI Data Processing in U.K. Amid Privacy Concerns Raised by ICO

Sep 21, 2024 | Ravie Lakshmanan | Privacy / Artificial Intelligence

The U.K. Information Commissioner's Office (ICO) has confirmed that professional social networking platform LinkedIn has suspended processing users' data in the country to train its artificial intelligence (AI) models.

"We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users," Stephen Almond, executive director of regulatory risk, said.

“We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.”

Almond also said the ICO intends to closely monitor companies that offer generative AI capabilities, including Microsoft and LinkedIn, to ensure that they have adequate safeguards in place and take steps to protect the information rights of U.K. users.


The development comes after the Microsoft-owned company admitted to training its own AI on users' data without seeking their explicit consent as part of an updated privacy policy that went into effect on September 18, 2024, 404 Media reported.

"At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice," LinkedIn said.

The company also noted in a separate FAQ that it seeks to "minimize personal data in the data sets used to train the models, including by using privacy enhancing technologies to redact or remove personal data from the training dataset."

Users who reside outside Europe can opt out of the practice by heading to the "Data privacy" section in account settings and turning off the "Data for Generative AI Improvement" setting.

"Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place," LinkedIn noted.

LinkedIn's decision to quietly opt all users in to training its AI models comes only days after Meta acknowledged that it has scraped non-private user data for similar purposes going as far back as 2007. The social media company has since resumed training on U.K. users' data.

Last August, Zoom abandoned its plans to use customer content for AI model training after concerns were raised over how that data could be used in response to changes in the app's terms of service.

The latest development underscores the growing scrutiny of AI, specifically surrounding how individuals' data and content could be used to train large AI language models.


It also comes as the U.S. Federal Trade Commission (FTC) published a report that essentially said large social media and video streaming platforms have engaged in vast surveillance of users, with lax privacy controls and inadequate safeguards for kids and teens.

The users' personal information is then often combined with data gleaned from artificial intelligence, tracking pixels, and third-party data brokers to create more complete consumer profiles before being monetized by selling it to other willing buyers.

"The companies collected and could indefinitely retain troves of data, including information from data brokers, and about both users and non-users of their platforms," the FTC said, adding that their data collection, minimization, and retention practices were "woefully inadequate."

“Many companies engaged in broad data sharing that raises serious concerns regarding the adequacy of the companies’ data handling controls and oversight. Some companies did not delete all user data in response to user deletion requests.”
