The man whose company developed one of the most controversial generative artificial intelligence (AI) apps has called upon the US government to regulate such technologies.
Sam Altman, chief executive of ChatGPT developer OpenAI, testified at the Senate Judiciary Committee hearing titled "Oversight of AI: Rules for Artificial Intelligence" on Tuesday, May 16th.
In his testimony, Altman advised lawmakers that things could go seriously wrong with generative AI if regulations governing its development and implementation are not put in place. The head of OpenAI added that he and other experts in the field were willing to work with the government to develop such guidelines.
Along with Altman, other industry professionals who testified before the Committee at the hearing included Christina Montgomery, chief privacy and trust officer at IBM, and Professor Emeritus Gary Marcus of New York University.
What Needs to be Done
According to Altman, users of ChatGPT and similar applications should, at the very least, have the ability to opt out of their data being used by developers. He added that users need to know whether they're interacting with an AI chatbot or a human.
The measures Altman suggested aren't actually new. A number of the regulations he proposed against what is referred to as AI profiling are already being implemented at the state and local levels in California, Colorado, Connecticut, New York, and Virginia.
AI profiling refers to the collection and/or sharing of personal data such as work background, medical records, or even financial statements. It also involves reliance on AI tools to evaluate that information and make decisions that may have legal repercussions.
Current AI profiling legislation requires businesses that use AI tools to offer opt-outs for customers. Such companies must also undergo risk assessments regarding their use of AI and advise their customers about the risks and benefits involved in interacting with it.
It is expected that, by the end of this year, the aforementioned states with AI legislation will be joined by Illinois and Maryland.
However, implementing these regulations at the federal level will pose numerous challenges, which is why the national government still needs to provide safeguards for users.
As NYU's Marcus suggested, a nationwide licensing framework should be put in place, along with an agency charged with enforcing it, to reduce harmful fallout from the misuse of AI. Such safeguards need to be established before full versions of AI applications are rolled out to the public.