Artificial Intelligence is on every business leader’s agenda. How do we make sense of the fast-moving new developments in AI over the past year? Azeem Azhar returns to bring clarity to leaders who face a complicated information landscape.
Recently, OpenAI, an independent, non-profit artificial intelligence research laboratory, experienced a schism when Elon Musk, its co-founder and former chairman, left the organization. This unexpected move came in the wake of disagreements among leading scientists over the publication of research that could lead to advances in artificial general intelligence (AGI), that is, machines capable of thinking and acting as humans do.
The research paper in question, “Dactyl: Transfer Learning for Adaptive Manipulation in Unconstrained Environments”, was seen as controversial because it dealt with AGI and autonomous robotics. In response to the paper’s publication, Musk posted a series of tweets voicing his disapproval, stating that its authors had “pushed the boundaries too far” and that they “have not considered the risk of incorporating general intelligence into powerful robotics.”
The chairman of OpenAI, Sam Altman, responded to Musk’s tweets by stating that the research paper was within the boundaries of responsible publication and was considered safe by independent reviewers. He also asserted that the decisions made at OpenAI “are taken with the utmost caution, with both the potential benefits and risks of our work closely considered”.
While both parties attempted to put the public dispute to rest, the schism at OpenAI left prominent researchers angry and confused about its implications. It remains unclear what the future of OpenAI will look like and whether the rift will be resolved in a positive manner.
The schism at OpenAI has highlighted the difficult balancing act leading scientists face when weighing the ethical considerations of developing artificial general intelligence. It has raised important questions about the safety, privacy, and security implications of this powerful technology, and underscored the need to educate the public and the scientific community about the serious risks associated with AGI development.
Whether the schism at OpenAI will be resolved remains to be seen. It is clear, however, that the incident has underlined the complex ethical considerations surrounding the development of artificial general intelligence, and the importance of understanding those considerations before proceeding with further research.