OpenAI has announced the formation of a new Safety and Security Committee as it gears up to train its next artificial intelligence model.
The committee, led by CEO Sam Altman and board directors Bret Taylor, Adam D'Angelo, and Nicole Seligman, will oversee the company's safety protocols and security measures. The move comes as OpenAI's generative AI systems, such as chatbots that hold human-like conversations and models that create images from text prompts, continue to raise safety concerns as their capabilities grow.
The newly formed committee is tasked with evaluating and enhancing OpenAI's existing safety practices over the next 90 days. Its findings and recommendations will be presented to the board, and OpenAI has committed to publicly sharing updates on the measures it adopts thereafter.
The development follows the disbandment of OpenAI's Superalignment team earlier this month. That team, created to ensure AI systems remain aligned with their intended objectives, saw its leaders, Ilya Sutskever and Jan Leike, depart the company. Some former team members have been reassigned to other roles within OpenAI.
The committee also includes Jakub Pachocki, the newly appointed Chief Scientist, and Matt Knight, the Head of Security. To support its work, OpenAI will consult external experts such as Rob Joyce, a former cybersecurity director at the US National Security Agency, and John Carlin, a former Department of Justice official.