Following the recent launch of its new AI-driven virtual assistant, social media technology firm Meta Platforms moved quickly to allay fears that its developers had used private user data to train the system.

Nick Clegg, Meta’s president of global affairs, addressed these concerns during the assistant’s launch at Connect, the company’s annual conference, on Wednesday, September 27th. In his statement, Clegg explained that only public posts on Facebook and Instagram were used to train the new AI tool, and that datasets made up predominantly of personal information were excluded.

Accordingly, private posts shared only with a user’s friends and family were not included in the training data, nor were private chats conducted on either platform’s direct-messaging services.

In Tune with the Times

The new Meta AI assistant was the most significant launch at this year’s Connect and is the first in a set of consumer-facing AI applications presented by chief executive Mark Zuckerberg.

The shift appeared to be on trend, as discussions at the conference veered away from the augmented and virtual reality that characterized previous events and centered mostly on the evolution and use of artificial intelligence.

A Timely Statement

Clegg’s statements regarding the Meta AI assistant are timely, as the company, along with peers OpenAI and Google parent Alphabet, has come under fire for training AI models on information gleaned from the internet without users’ consent.

Because training AI involves ingesting enormous amounts of information to enable models to produce summaries and generate images, these companies have faced lawsuits in recent months from authors, artists, and independent content creators who feel that the tech titans have overstepped boundaries by ignoring individual copyrights.

As a result, these firms have been pressed to explain how they handle privately shared or copyrighted material that may have been swept up during AI training.