Sam Altman, the CEO of OpenAI, testified before US lawmakers on Tuesday, emphasizing the need to regulate artificial intelligence (AI) following the viral success of his company’s poem-writing chatbot. During the hearing on Capitol Hill, Senator Richard Blumenthal opened with a computer-generated clone of his own voice reading remarks written by the chatbot, underscoring lawmakers’ concerns about how quickly AI is advancing.
Unlike the confrontational grillings endured in past years by executives from Facebook and TikTok, Altman’s hearing was aimed at educating lawmakers and building the case for new regulation of big tech. Despite the political divisions that have stalled earlier internet-regulation bills, Altman urged Congress to impose rules on AI, warning that the technology could cause significant harm if mishandled.
The recent release of ChatGPT, an AI bot capable of generating human-like content, has garnered attention worldwide and raised questions about the potential benefits and risks of AI. Altman, who has become a prominent figure in the AI field, both promotes his company’s technology, which is used by Microsoft and other firms, and warns about the potential negative impacts on society.
Altman emphasized that OpenAI recognizes the risks associated with AI and believes regulatory intervention by governments is crucial to mitigating them. He suggested that the US government could impose a combination of licensing and testing requirements on powerful AI models, with the power to revoke licenses for rule violations. He also recommended labeling AI outputs and pursuing global coordination on rules for the technology, and proposed creating a dedicated US agency to handle AI-related matters.
While Altman advocated for the US to take a leading role in AI regulation, he acknowledged the importance of global cooperation. Senator Blumenthal pointed out the progress made by Europe with its AI Act, which includes potential bans on biometric surveillance, emotion recognition, and certain policing AI systems. The Act also seeks special transparency measures for generative AI systems like ChatGPT and DALL-E, notifying users that the output was generated by a machine.
During the hearing, it was acknowledged that AI technology is still in its early stages. Gary Marcus, professor emeritus at New York University, warned that more advanced systems are coming, but noted that no machine today is capable of self-improvement or self-awareness, and cautioned against pursuing such capabilities. Christina Montgomery, IBM’s chief privacy and trust officer, urged lawmakers to tailor regulation to the varying impacts of different AI systems rather than painting with too broad a brush.
Sam Altman’s testimony underscored the importance of regulating AI to address its potential risks while harnessing its benefits. The hearing highlighted the need for government intervention, transparency measures, and global coordination in establishing rules for AI technologies.