OpenAI Launches Safety Committee Amid New Model Training Initiative

The committee's first 90-day task: assess and improve OpenAI's safety protocols for projects and operations, advising the board accordingly.

New York: OpenAI announced on Tuesday the establishment of a Safety and Security Committee, to be spearheaded by CEO Sam Altman, coinciding with the initiation of training for its forthcoming artificial intelligence model. According to a company blog post, the committee will also be led by directors Bret Taylor, Adam D’Angelo, and Nicole Seligman.

Ilya Sutskever, OpenAI’s former Chief Scientist, and Jan Leike, who together headed the Microsoft-backed company’s Superalignment team, departed earlier this month, leaving a leadership vacuum. The Superalignment team, tasked with ensuring AI systems remain aligned with their intended objectives, was disbanded by OpenAI in May, less than a year after its inception, with some members reassigned to other groups, CNBC reported in the wake of the high-profile exits.

Charged with advising the board on safety and security matters concerning OpenAI’s projects and operations, the committee’s primary objective in its initial 90 days will be to assess and enhance the organization’s existing safety protocols. Subsequently, it will present its recommendations to the board for evaluation.

OpenAI has committed to transparency, stating its intention to publicly disclose updates on implemented recommendations following the board’s review.


In addition to Altman and the aforementioned directors, the committee includes technical and policy experts Aleksander Madry, Lilian Weng, and head of alignment science John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also serve on the committee.
