At a time of accelerating progress in artificial intelligence (AI), integrity and ethical transparency are no longer optional extras for a leader but essential ingredients for steering innovation somewhere healthy on the continuum from revolution to megalomania. Months of turmoil followed the stunning announcement that OpenAI's board had removed Sam Altman as CEO of the giant San Francisco AI lab, a decision reportedly preceded by withering complaints from longtime colleagues. This story arc captures the intricate issues facing AI organisations as they try to balance progress, the public good and sound internal governance.
At the core of the controversy is a set of claims from Helen Toner, a former OpenAI board member: that Altman engaged in a range of unethical behaviours, including lying to the board, obstructing oversight, and retaliating against critics, in an atmosphere that grew steadily more toxic. Toner has described these incidents as part of a longer pattern that hastened Altman's termination, rooted in a prolonged struggle over who could exercise power within the organisation.
While the board deliberated, it kept the details of Altman's potential termination confidential, most likely to deny him the chance to use his leverage to sabotage the plan. That kind of preparation suggests a mission-first mentality. So why did things play out so badly once the decision was made? Why was there such hue and cry, such confusion, and such a swift backpedalling that ended with Altman's reinstatement?
One of the most damning allegations concerns the launch of ChatGPT, which Toner says the board learned about only after the fact, alongside miscommunication about OpenAI's safety policies. Her revelations point to a systemic issue at OpenAI: the lack of trust between the board and the CEO became a significant obstacle to the company's ethics review and oversight, and to the realisation of its public-good mission.
But the story reaches further back into Altman's history, building a picture of a pattern of conflict that long predates the OpenAI boardroom drama. Senior employees at Loopt, the startup he co-founded, reportedly urged its board to fire him, and accounts differ over the circumstances in which Y Combinator's co-founder, Paul Graham, later moved him out of the accelerator's presidency.
In response to these events, OpenAI has made major organisational changes, a signal that it may at last be reaching a turning point in its model of AI for the public good. A new code of conduct and a restructured leadership model represent a move toward reconciling OpenAI's ideals with its realities, necessary if it is to restore its reputation in AI circles and with the public.
The strong reactions from OpenAI employees and the broader AI community reflect a growing expectation that the emerging field be transparent, accountable, and ethically run. The industry, in other words, needs to walk its talk about working for the betterment of humanity.
In the wake of the controversy, OpenAI's safety and ethics commitments deserve renewed emphasis. With Altman himself helping to lead the new safety and security committee, OpenAI must juggle the demands of an open research culture capable of managing the ethical dilemmas posed by increasingly powerful AI, while ensuring that everyone is accountable and everyone can speak freely.
If one word captures OpenAI's story, it is 'advance': from the moment the crisis erupted, through the rebuilding of its board and leadership, to the ongoing movement toward ethical stewardship, open governance, and a reinvigorated founding mission to ensure that AI benefits all of humanity. It is an advance toward becoming a more mature organisation, one that has learned from its mistakes and embodies the ideals the field aspires to.
The lessons of OpenAI's recent turmoil mark a vital step in the pursuit of a technological revolution in which integrity, openness, the public good, and genuine care for others hold AI's largest and most powerful organisations accountable. We stand at a crucial crossroads. The story of OpenAI's leadership controversy is a wake-up call to AI organisations and a reminder of the ethical nature of our obligations to the public interest and to the values that define us as human beings.