The use of artificial intelligence (AI) across various sectors of society has raised concern among governments worldwide, which fear that, in the wrong hands, such technologies could do more harm than good. Efforts to regulate AI, however, have been complicated by the rapid pace at which innovations in the sector are introduced.
For now, only China has implemented a set of rules regulating the sector, though these are understood to be temporary measures until a more lasting framework can be devised.
During the opening session of the AI Safety Summit held in the United Kingdom on November 1st, Chinese vice-minister for science and technology Wu Zhaohui told his counterparts that his government had published several proposed security requirements for any company offering generative AI-driven services to the public. These requirements, along with a blacklist of sources that may not be used to train AI models, were published in October.
The country also requires AI service providers to undergo security assessments and obtain government clearance before launching any AI solution to the public.
A Conundrum for the Host
Meanwhile, the United Kingdom, which hosted the AI Safety Summit, announced that it would triple funding for its AI Research Resource. Around $364 million will go toward research on making advanced AI models safer and less of a security risk. At present, the resource consists of two supercomputers; its output is expected to lay the groundwork for the first-ever AI safety institute.
The British government, however, remains in the planning stage when it comes to enforceable regulations. Even so, its data watchdog issued a preliminary enforcement notice to the social media platform Snapchat in October over the platform's failure to assess the risks its generative AI chatbot posed to users, particularly minors.
Who Else Is Working Towards Regulation?
At this point, other jurisdictions in the process of preparing regulations for the sector include Australia, the European Union, and Japan.
Meanwhile, France, Italy, Poland, and Spain are all investigating whether OpenAI's ChatGPT has breached their existing data privacy laws. The United States, for its part, is consulting experts as it develops its own regulatory measures.