In a recent blog post, OpenAI CEO Sam Altman shared that the company believes it now knows how to build artificial general intelligence (AGI) as traditionally defined. Altman further revealed the company’s growing focus on ‘superintelligence,’ a step beyond AGI.
“We love our current products, but we are here for the glorious future,” Altman wrote. He emphasised, “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn, massively increase abundance and prosperity.”
Altman has previously predicted that superintelligence could arrive within a ‘few thousand days’ and described its impact as more intense than many anticipate. OpenAI defines AGI as highly autonomous systems that outperform humans at most economically valuable work. However, Altman did not specify whether he was referring to this definition or Microsoft’s, which defines AGI as a system capable of generating $100 billion in profits, a milestone that would end Microsoft’s access to OpenAI technology under the companies’ agreement.
Altman also suggested that AI agents may soon ‘join the workforce’ and significantly influence companies’ output. He believes that putting great tools in the hands of people leads to ‘broadly distributed benefits.’
Despite these ambitions, current AI technology faces hurdles such as errors, hallucinations, and high operational costs. While Altman remains optimistic that these challenges will be resolved quickly, recent years have demonstrated that AI development timelines can be unpredictable.
OpenAI has acknowledged the complexity of transitioning to a world with superintelligence, admitting in past blog posts that the process is uncertain. The company has expressed concerns about safely steering superintelligent systems, stating in 2023 that it lacked a solution for reliably controlling such AI. Its post noted that humans won’t be able to supervise AI systems much smarter than themselves, and that current alignment techniques won’t scale to superintelligence.
Since then, OpenAI has disbanded some of its AI safety teams and seen several researchers depart, many citing the company’s growing commercial focus as a concern. The company is also undergoing a corporate restructuring to attract more investors.
As the company moves toward superintelligence, ensuring the safety and alignment of these systems remains a critical and unresolved challenge.