A former OpenAI researcher warns that the company is recklessly racing to build AGI (Artificial General Intelligence) and is calling for stronger whistleblower protections. OpenAI CEO Sam Altman has consistently emphasized his commitment to developing AGI regardless of the cost, and now a former employee has spoken out against the company’s aggressive pursuit of it despite the potential risks.
According to a report in the New York Times, a group of nine current and former OpenAI employees has raised concerns about the company’s secretive, profit-driven culture. The group claims that OpenAI prioritizes growth over safety, jeopardizing the safe development of AI systems.
Daniel Kokotajlo, a former researcher in OpenAI’s governance division and a member of the whistleblowing group, stated, “OpenAI is really excited about building AGI, and they are recklessly racing to be the first there.” The group has signed an open letter highlighting the unintended consequences of developing AGI and advocating for stronger whistleblower protections.
The letter argues that existing whistleblower protections are inadequate because they mainly cover illegal activities, while many of the risks associated with AI are not yet regulated at all. The group also alleges that OpenAI, originally founded as a nonprofit research lab, has shifted its focus to revenue and expansion, using aggressive tactics to silence dissent.
In a recent exchange with Stanford University students, Altman reiterated his willingness to invest heavily in AGI development, whatever the cost, as long as it creates substantial societal value. His remarks come amid speculation that ChatGPT, OpenAI’s chatbot, is nearing AGI-level capabilities.
Meanwhile, OpenAI has launched GPT-4, a more advanced model powering ChatGPT that is capable of analyzing images and solving complex problems.