CAN PRIVATE companies pushing the boundaries of a revolutionary new technology be expected to act in the interests of both their shareholders and the world? When we were hired to serve on OpenAI’s board – Tasha in 2018 and Helen in 2021 – we were cautiously optimistic that the company’s groundbreaking approach to self-governance could serve as a model for responsible AI development. But in our experience, self-governance cannot reliably withstand the pressure of profit incentives. Given AI’s enormous potential for both positive and negative impact, it is not enough to assume that such incentives will always align with the public good. For the development of AI to benefit everyone, governments must begin building effective regulatory frameworks now.
If any company could have successfully governed itself while safely and ethically developing advanced artificial intelligence systems, it would have been OpenAI. The organization was founded as a nonprofit with a laudable mission: to ensure that AGI, or artificial general intelligence – AI systems that are generally smarter than humans – benefits “all of humanity.” A for-profit subsidiary was later formed to raise the necessary capital, but the nonprofit remained at the helm. The stated purpose of this unusual structure was to safeguard the company’s ability to stay true to its original mission, and it was the board’s responsibility to uphold that mission. The arrangement was unprecedented, but it seemed worth trying. Unfortunately, it did not work.
Last November, in an attempt to salvage this self-regulatory structure, OpenAI’s board fired its CEO, Sam Altman. The board’s ability to fulfill the company’s mission had become increasingly constrained by Mr. Altman’s persistent pattern of behavior, which, among other things, we believe undermined the board’s oversight of key decisions and internal safety protocols. Multiple senior leaders had privately shared grave concerns with the board, saying they believed Mr. Altman was cultivating a “toxic culture of lying” and engaging in “conduct [that] can be described as psychological abuse.” According to OpenAI, an internal investigation found that the board had “acted within its broad discretion” in firing Mr. Altman but also concluded that his conduct did not “warrant removal.” OpenAI provided few specifics to support this conclusion, and it did not share the investigation report with employees, the press or the public.
The question of whether such behavior should generally “warrant removal” of a CEO is a debate for another time. But in OpenAI’s specific case, given the board’s duty to provide independent oversight and protect the company’s public-interest mission, we stand by the board’s action. We also believe that developments since Mr. Altman’s return to the company – including his reinstatement to the board and the departure of senior safety-focused talent – bode ill for OpenAI’s experiment in self-governance.
Our particular story offers the broader lesson that society must not let the rollout of AI be controlled solely by private technology companies. There are certainly many genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, and without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.
And yet in recent months, a growing chorus of voices – from Washington lawmakers to Silicon Valley investors – has advocated minimal government regulation of artificial intelligence. These voices often draw a parallel with the laissez-faire approach to the Internet in the 1990s and the economic growth it spurred. This analogy is misleading, however.
There is widespread recognition within AI companies, and the broader AI research and engineering community, of the high stakes – and high risks – of developing increasingly advanced AI. In Mr. Altman’s own words: “Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history.” The level of concern that many leading AI scientists have expressed about the technology they themselves are building is well documented, and very different from the optimistic attitude of the programmers and network engineers who developed the early Internet.
Nor is it clear that light-touch regulation of the Internet has been an unalloyed good for society. Certainly, many successful technology companies – and their investors – have benefited enormously from the lack of restrictions on online commerce. It is less clear that societies have struck the right regulatory balance when it comes to curbing misinformation and disinformation on social media, child exploitation and human trafficking, and a growing youth mental health crisis.
Regulation routinely makes products, infrastructure and society better. It is thanks to regulation that cars come equipped with seat belts and airbags, that we do not worry about contaminated milk, and that buildings are constructed to be accessible to everyone. Judicious regulation could likewise ensure that the benefits of AI are realized responsibly and more broadly. A good starting point would be policies that give governments greater visibility into cutting-edge AI development, such as transparency requirements and incident tracking.
Of course, regulation has pitfalls of its own, and these must be managed. Poorly designed rules can place a disproportionate burden on smaller companies, stifling competition and innovation. It is crucial that policymakers act independently of leading AI companies when developing new rules. They must remain vigilant against loopholes, regulatory moats that shield early movers from competition, and the potential for regulatory capture. Indeed, Mr. Altman’s own calls for AI regulation should be understood in the context of these pitfalls as potentially self-serving. An appropriate regulatory framework will require nimble adjustments that keep pace with the world’s evolving understanding of AI’s capabilities.
Ultimately, we believe in AI’s potential to boost human productivity and well-being in ways never before seen. But the path to that better future is not without peril. OpenAI was founded as a bold experiment to develop increasingly capable AI while prioritizing the public good over profit. Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice. It is therefore essential that the public sector be closely involved in the technology’s development. Now is the time for governing bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI’s evolution truly benefits all of humanity.
Helen Toner and Tasha McCauley served on the OpenAI board from 2021 to 2023 and 2018 to 2023, respectively.
© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com