On Monday, OpenAI announced a change in its approach to AI safety and security, making its Safety and Security Committee an independent oversight body. The committee, which has guided OpenAI's operations since its formation in May, will now oversee all areas of the company's AI model development and deployment.
The committee, chaired by Carnegie Mellon University professor Zico Kolter, will independently assess and improve the company's safety safeguards. The decision follows the committee's recommendations for strengthening OpenAI's safety policies amid the rapid rise of AI technologies such as ChatGPT, which has drawn broad interest and debate over ethical concerns and potential biases.
As part of its new initiatives, OpenAI is considering establishing an ‘Information Sharing and Analysis Centre (ISAC) for the AI industry.’ The proposed centre would enable AI firms to exchange threat intelligence and cybersecurity information, thereby improving the industry's overall security posture. OpenAI also intends to strengthen its internal security operations and increase transparency about the capabilities and risks associated with its AI models.
OpenAI also recently signed an agreement with the United States government to collaborate on the research, testing, and evaluation of its AI models, aiming to tackle safety and security concerns proactively.