Some may wonder what this bit of apparent computer instructions signifies; others might recognise that it refers to four days of drama in November involving Sam Altman. Its opening act was the dismissal by the OpenAI Board of its founder-CEO. It ended with a turning of the tables: the Board itself resigned (was, in effect, sacked) and Mr Altman was reinstated in his former position. Within those 100 hours, Microsoft created (and, presumably, later dissolved) a special advanced AI research division, to be headed by Altman, to take forward his work on Artificial General Intelligence (AGI). The saga ended, appropriately, on Thanksgiving!
OpenAI is best known for its creation ChatGPT, the generative AI app that took the world by storm some months ago, crossing 100 million users in record time. While itself a not-for-profit, OpenAI has set up a commercial subsidiary in which Microsoft is the biggest investor. Ironically, Altman was informed of his firing on a call using Google Meet, not Microsoft Teams! Yet, if there is one person who has come out of this sordid saga as a knight in shining armour, it is Microsoft’s Satya Nadella.
Rumours are swirling that OpenAI was nearing a new breakthrough in AGI, one that could upend the human-machine relationship to a point where the human species itself needs to be concerned. Was this work an area where Altman had been “less than candid” in his communication with the Board (the stated reason for his dismissal)?
Scare scenarios abound, particularly in sci-fi stories and movies, about machines and AI taking over the world and making humans subservient. Is new work in AGI taking us down this path? Do we need global ethical guidelines, possibly regulatory frameworks, to control this? In a far more complex situation in Gaza, a “humanitarian pause” has been negotiated; is one necessary in AI (as some scientists and tech leaders have themselves suggested)? Is this desirable, and, more crucially, is it even possible?
At a more pragmatic and immediate level, what are the lessons from the shenanigans at OpenAI? Certainly, Boards need to re-examine and possibly redefine their relationship with the CEO, especially where the CEO is the founder or promoter. Having, or creating, a charismatic “star” as the leader does a great deal for branding, but does the identification of the organisation with the person create complications?
Further, in this case, the Board itself was not quite cohesive, as seems evident from its firing of the Chairperson. More importantly, CEO Altman’s vision was apparently not congruent with that of the Board. This may be the result (or, conversely, the cause) of the “less than candid” communication, pointing to inadequate dialogue between the Board and the CEO or, worse, a breakdown of trust. These and other issues deserve attention, discussion, and pondering by all organisations. There are many lessons to be learnt for both Boards and CEOs, if such wars are to be avoided.
Meanwhile, real wars continue to be waged in countries around the world. Given these continuing conflicts and the spread of hate, we seem to be in self-destruct mode. In this scenario, instead of trying to curb or pause AI development, maybe it is better to let AI take over?
The author loves to think in tongue-in-cheek ways, with no malice or offence intended. At other times, he is a public policy analyst and author. His latest book is Decisive Decade: India 2030 Gazelle or Hippo (Rupa, 2021).