OpenAI is right to abandon non-profit status

The writer is a general partner of Air Street Capital and co-author of the State of AI report

Frontier artificial intelligence labs are unusual beasts. Unlike other businesses, a number of their founders believe their life's work has the potential to end the world. Yet the vast sums needed to develop capable models force them to demonstrate immediate commercial success.

OpenAI’s unconventional structure was a creative way of reconciling these tensions — on paper at least.

By combining a not-for-profit oversight board with a for-profit commercial entity that capped returns, it hoped to allow researchers concerned about existential risk to live under the same roof as their revenue-minded peers, safe in the knowledge that the pursuit of financial returns could not override their mission to do good.

Now, OpenAI is widely expected to dispense with its non-profit status. Investors who took part in its latest $6.6bn funding round can recover their money if the conversion does not take place.

This is unsurprising. Any founder or investor will tell you that no amount of corporate structural Jenga can hold together a team that is not aligned.

Last year, tensions between OpenAI’s non-profit board and Sam Altman, its co-founder and chief executive, led to Altman’s dramatic firing. He was rapidly rehired and the board has since been replaced. Several senior executives have left the company, including co-founder and chief scientist Ilya Sutskever.

Altman is now free to arrange the company as he pleases. Without a profit cap, it should be even easier for OpenAI to raise the funds it needs.

OpenAI was founded in 2015 as a non-profit research organisation and created a for-profit subsidiary in 2019. At the Paris Peace Forum last November, Microsoft president Brad Smith even spoke admiringly of it — comparing it to Meta, where founder Mark Zuckerberg maintains control by holding shares with extra voting rights.

“Meta is owned by shareholders. OpenAI is owned by a non-profit,” said Smith. “Which would you have more confidence in? Getting your technology from a non-profit? Or a for-profit company that is entirely controlled by one human being?”

While it might seem surprising that investors like Microsoft were willing to accept this unusual structure, it was less of a sacrifice than it appears at first sight. First, OpenAI is not yet profitable, so returns remain theoretical. Second, Microsoft was set to receive 75 per cent of OpenAI's eventual profits until it had recouped its initial investment, and the profit cap did not kick in until investors had made a healthy return. Microsoft's decision to back Altman also suggests it believed one human being held substantial sway over OpenAI too.

Still, any structural oddity can put off investors, and OpenAI's funding needs are vast. If it now adopts the public benefit corporation model employed by AI peers such as Anthropic and xAI, investors need not worry that lofty goals will get in the way of profits. While the model requires companies to pursue social goals, there are no exacting standards for demonstrating compliance.

As generative AI products begin to drive billions of dollars a year in revenue, more teams are being forced to move beyond philosophical debates and decide whether they are serious about commercialisation.

But there is no inherent conflict between running a straightforward for-profit company and building safe, robust technology. Reckless behaviour that gets an industry regulated out of existence endangers profits.

Companies like Google DeepMind and Anthropic have more conventional structures than OpenAI and have built substantial AI safety teams. In this industry, complicated corporate engineering achieves very little.