OpenAI plans to secure additional financial backing from its major investor, Microsoft, as CEO Sam Altman moves forward with the vision of creating artificial general intelligence (AGI). Altman mentioned in an interview that the partnership with Microsoft’s CEO Satya Nadella is working well, and he anticipates raising more funds from Microsoft and other investors to support the substantial costs associated with developing more advanced AI models.
Microsoft invested $10 billion in OpenAI earlier this year as part of a multiyear agreement, valuing the company at $29 billion. Altman expressed hope that Microsoft would continue investing further, emphasizing the significant computational requirements in the journey toward AGI.
While Altman cited strong revenue growth for the year, OpenAI remains unprofitable due to high training costs. However, he highlighted that the partnership with Microsoft ensures mutual success, with both parties benefiting from each other’s achievements.
OpenAI’s latest efforts to build a business model around ChatGPT include introducing new tools and upgrades to its GPT-4 model for developers and companies. The company announced custom versions of ChatGPT for specific applications and a GPT Store, similar to Apple’s App Store, where popular GPT creators can share revenues.
Altman emphasized that OpenAI’s primary product is intelligence, and the various channels, including research labs, APIs, partnerships, and products like ChatGPT, serve as avenues to deliver that intelligence. To bolster the enterprise business, OpenAI has brought in executives like Brad Lightcap, formerly of Dropbox and startup accelerator Y Combinator, as its chief operating officer.
Altman, meanwhile, splits his time between two areas: research into “how to build superintelligence” and ways to build up computing power to do so. “The vision is to make AGI, figure out how to make it safe . . . and figure out the benefits,” he said. Pointing to the launch of GPTs, he said OpenAI was working to build more autonomous agents that can perform tasks and actions, such as executing code, making payments, sending emails or filing claims. “We will make these agents more and more powerful . . . and the actions will get more and more complex from here,” he said. “The amount of business value that will come from being able to do that in every category, I think, is pretty good.”
The company is also working on GPT-5, the next generation of its AI model, Altman said, although he did not commit to a timeline for its release. It will require more data to train on, which Altman said would come from a combination of publicly available data sets on the internet, as well as proprietary data from companies. OpenAI recently put out a call for large-scale data sets from organisations that “are not already easily accessible online to the public today”, particularly for long-form writing or conversations in any format. While GPT-5 is likely to be more sophisticated than its predecessors, Altman said it was technically hard to predict exactly what new capabilities and skills the model might have. “Until we go train that model, it’s like a fun guessing game for us,” he said.
“We’re trying to get better at it, because I think it’s important from a safety perspective to predict the capabilities. But I can’t tell you here’s exactly what it’s going to do that GPT-4 didn’t.” To train its models, OpenAI, like most other large AI companies, uses Nvidia’s advanced H100 chips, which became Silicon Valley’s hottest commodity over the past year as rival tech companies raced to secure the crucial semiconductors needed to build AI systems. Altman said there had been “a brutal crunch” all year due to supply shortages of Nvidia’s $40,000-a-piece chips. He said his company had received H100s, and was expecting more soon, adding that “next year looks already like it’s going to be better”.
