OpenAI has announced that it has begun training its next "frontier model" and has formed a Safety and Security Committee. The committee, led by board members Bret Taylor, Adam D’Angelo, Nicole Seligman, and Sam Altman, will evaluate and develop OpenAI’s processes and safeguards over the next 90 days, after which its recommendations will be shared with the full Board and OpenAI will publicly share an update on those it adopts.
90-Day Evaluation Period
A 90-day review window is a common corporate timeframe for this kind of process evaluation. At the end of the period, the Safety and Security Committee will present its findings and recommendations to the Board. Counting 90 days from the May 28, 2024 announcement, this timeline suggests that any new model, potentially GPT-5, will not be released until at least August 26, 2024.
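For readers checking the date, here is a quick sketch of the arithmetic in Python, assuming the clock starts at the May 28, 2024 announcement (the start date implied by the article's August 26 endpoint):

```python
from datetime import date, timedelta

# Assumed start date: the committee announcement on May 28, 2024.
announcement = date(2024, 5, 28)

# End of the committee's 90-day evaluation window.
evaluation_ends = announcement + timedelta(days=90)

print(evaluation_ends)  # 2024-08-26
```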
Criticism of the Committee
The committee has faced criticism for being composed entirely of OpenAI insiders, its own board members and CEO among them, raising concerns about the independence of the evaluation. OpenAI's board has drawn controversy before, notably when Sam Altman was briefly removed as CEO in November 2023, only to be reinstated after employee and investor backlash.
Recent Developments at OpenAI
- Release of GPT-4o: OpenAI's latest model, GPT-4o, was released earlier this month. It is the company's first model trained from the start on multimodal inputs and outputs, spanning text, vision, and audio.
- Public Relations Issues: The company faced criticism from actress Scarlett Johansson, who objected to a ChatGPT voice she felt imitated her own, and saw internal resignations, including that of co-founder and chief scientist Ilya Sutskever.
- Employee Agreements: OpenAI was criticized for restrictive non-disparagement agreements but has since decided not to enforce them.
- New Partnerships: Despite these challenges, OpenAI has secured new content partnerships with mainstream media outlets and seen continued interest in its Sora video generation model.
Industry Context
Rival Google has also faced backlash over the answers produced by its AI Overviews feature in Search, reflecting broader skepticism toward generative AI. OpenAI's new safety committee aims to address these concerns and reassure regulators and potential business partners ahead of future model releases.