OpenAI accuses Sam Altman of poor communication and a lack of candor with its board of directors. Others see "factionalism" at work. What happened at the most prominent company in artificial intelligence to lead to such an ouster?
The day after OpenAI's board of directors dismissed Sam Altman as CEO, the suspense is at its height. Online, reactions and hypotheses are multiplying within the tech community over the reasons for an announcement that stunned everyone, and that will surely stand as the most unexpected of the year.
The sequence of events was remarkably brief. "[Thursday evening], Sam received a text from Ilya asking him to talk at Friday lunchtime. Sam joined a Google Meet and the entire board, except Greg, was there. Ilya told Sam that he was fired and that the news would be out very soon," explained Greg Brockman on the social network X.
Just a few minutes were enough for Sam Altman, then Greg Brockman, then the media and the whole world to get wind of this surprise decision. As for Microsoft, which owns 49% of OpenAI and is closely tied to its projects, its boss Satya Nadella reportedly learned the news just one minute before the press release was published. A detail that in itself constitutes a problem, or an explanation of the problem. How could such a decision be so abrupt, not to say absurd?
Ilya Sutskever, the man who tipped the scales?
For Andrew Côté, founder of the AI Salon in San Francisco, the most likely explanation is a reversal of factions among the management and co-founders. "It is unlikely that this departure is related to anything regarding operations, cash burn, ability to establish partnerships," he argued, before adding that "OpenAI has a strong ability to raise funds, they have huge cash flow and only a few hundred employees."
Behind what he calls factionalism, there would have been two groups within the board of directors: "on one side Sam, Greg, and Ilya, against Adam, Tasha and Helen." A balanced standoff (3 against 3) in recent months, until Ilya Sutskever, according to Andrew Côté, suddenly tipped the scales. A change of sides following a disagreement over a major breakthrough at OpenAI, which Sam Altman had mentioned as recently as Thursday at the Asia-Pacific Economic Cooperation CEO Summit in San Francisco.
Ilya Sutskever, OpenAI's chief scientist, who earlier this year was put in charge of monitoring the risks of societal harm from AI, reportedly did not want to follow Sam Altman and Greg Brockman in their push to commercialize new software too quickly (related to GPT-5?). "I'd bet Ilya was the vote that broke the tie and led to Sam's departure at the same time. […] It was most certainly a divide between supporters of safety and supporters of acceleration," added Andrew Côté.
Shortly after the ousting, an internal meeting was organized to answer questions from employees, shocked by the back-to-back departures of Sam Altman and Greg Brockman. The Information, which was able to question some participants, confirmed that a divide did exist and that the leadership had had to choose "between emphasizing profits rather than safety." Ilya Sutskever sought to be reassuring, indicating that this reshuffle would make it possible "to feel closer", and that the decision was therefore neither a "coup d'état" nor a "hostile takeover," as some feared.
A short-term or long-term divide?
OpenAI is currently the public face of artificial intelligence. But that does not make it an "established" company. The strategic choices it makes can therefore take it in very different directions, all the more so as it is influenced by investors, governments, and questions of ethics. A dispute over long-term strategy is thus another possible explanation for the rift that led to Sam Altman's ouster.
Unless the "lack of candor" cited in OpenAI's press release against its now-former CEO stems from something more short-term? A lie?
Other hypotheses have indeed been put forward, and one concerns the next round of financing being negotiated by OpenAI. The company is preparing a new fundraising round, and some imagine that Sam Altman was driving a major deal behind his board's back. The other board members could then have felt sidelined, or strongly disagreed with the very intent behind the project. Deeper integration under Microsoft's control? An outside venture?
Finally, there is the incident hypothesis. Before his fall from OpenAI's leadership, Sam Altman had abruptly halted sign-ups for ChatGPT Plus, citing overwhelming demand. A pause, not for economic or security reasons, but a technical one, something not unusual in the artificial intelligence industry, where these programs require enormous resources.
However, a few days earlier, OpenAI had suffered another, similar shutdown, this time at Microsoft. The suspension was brief, but on that occasion the reason, a security problem, had been clearly stated. "Due to security and data concerns, a number of AI tools are no longer available to employees," read a notice on November 9. If Sam Altman had downplayed the incident in his own communications, it could have led the rest of the board to ask questions and lose their trust in him.