Executives don’t normally encourage more regulation of their industries. But ChatGPT and its ilk are so powerful—and their impact on society will be so profound—that regulators need to get involved now.
That’s according to Mira Murati, chief technology officer at OpenAI, the venture behind ChatGPT.
“We’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies—definitely regulators and governments and everyone else,” Murati said in a Time interview published Sunday.
ChatGPT is an example of “generative A.I.,” which refers to tools that can, among other things, deliver answers, images, or even music within seconds based on simple text prompts. But ChatGPT will also be used for A.I.-infused cyberattacks, researchers at BlackBerry warned this week.
To offer such tools, A.I. ventures need the cloud computing resources that only a handful of tech giants can provide, so they are striking lucrative partnerships with the likes of Microsoft, Google, and Amazon. Aside from raising antitrust concerns, such arrangements make it more likely generative A.I. tools will reach large audiences quickly—perhaps faster than society is ready for.
“We weren’t anticipating this level of excitement from putting our child in the world,” Murati told Time, referring to ChatGPT. “We, in fact, even had some trepidation about putting it out there.”
Yet since its release in late November, ChatGPT has reached 100 million monthly active users faster than either TikTok or Instagram, UBS analysts noted this week. “In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” they added.
Meanwhile Google, under pressure from Microsoft’s tie-up with OpenAI, is accelerating its efforts to get more such A.I. tools to consumers. On Friday, Google announced a $300 million investment in Anthropic, which has developed a ChatGPT rival named Claude.
Anthropic, in turn, was launched largely by former OpenAI employees worried about business interests overtaking A.I. safety concerns at the ChatGPT developer.
Artificial intelligence “can be misused, or it can be used by bad actors,” Murati told Time. “So, then there are questions about how you govern the use of this technology globally. How do you govern the use of A.I. in a way that’s aligned with human values?”
Elon Musk helped start OpenAI in 2015 as a nonprofit, which it no longer is. The Tesla CEO has warned about the threat that advanced A.I. poses to humanity, and in December he called ChatGPT “scary good,” adding, “We are not far from dangerously strong AI.” He tweeted in 2020 that his confidence in OpenAI’s safety was “not high,” noting that it started as open-source and nonprofit and that “neither are still true.”
Microsoft co-founder Bill Gates recently said, “A.I. is going to be debated as the hottest topic of 2023. And you know what? That’s appropriate. This is every bit as important as the PC, as the internet.”
Billionaire entrepreneur Mark Cuban said last month, “Just imagine what GPT 10 is going to look like.” He added that generative A.I. is “the real deal” but “we are just in its infancy.”
Asked if it’s too early for regulators to get involved, Murati told Time, “It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”
In a recent interview with the Financial Times, Greg Brockman, president of OpenAI, warned that while A.I. has the potential to revolutionize many aspects of our lives, it must be regulated to keep it out of the hands of bad actors.
In the interview, Brockman pointed to OpenAI’s work on ChatGPT. The chatbot can interpret natural-language prompts and respond in fluent, humanlike prose, and businesses have already used it to build customer-service bots and automated assistants.
However, Brockman noted that the same technology could be used for nefarious purposes, such as spreading misinformation or committing fraud. To guard against those risks, he argued, A.I. must be regulated by governments and international organizations to ensure it is used only for beneficial purposes.
He added that regulation should ensure the technology is developed in ways that respect individual rights and give users control over their data. Brockman also suggested that companies be held responsible for any misuse of A.I. or harm it causes.
Brockman’s warning deserves serious attention from industry leaders and governments alike. A.I. has immense potential to improve many aspects of life, but without proper regulation it can be turned to harmful ends. Alongside strong regulatory measures, companies must take responsibility for any harms their technology causes. With those safeguards in place, society can benefit from A.I.’s breakthroughs without fear.