OpenAI Forms Board Safety Committee Run by Sam Altman

Mihir Gadhvi 28/05/2024
OpenAI is establishing a new safety team, led by CEO Sam Altman, along with board members Adam D’Angelo and Nicole Seligman.

This committee is tasked with making critical safety and security recommendations for OpenAI’s projects and operations. The formation of this team follows several key departures from the company, raising concerns about the direction of OpenAI's safety protocols and priorities.

The primary task of the new safety team is to evaluate and further develop OpenAI’s processes and safeguards. This evaluation is crucial given the rapid advancements in AI technology and the growing concern about its potential risks. The team will present its findings to OpenAI’s board, which includes all three leaders of the safety team. The board will then decide how to implement these recommendations, aiming to bolster OpenAI’s commitment to AI safety.

The creation of this safety team comes in the wake of significant internal changes at OpenAI, most notably the departure of Ilya Sutskever, OpenAI co-founder and chief scientist, who played a pivotal role in the company's Superalignment team. That team was formed to develop methods to steer and control AI systems that surpass human intelligence. Sutskever had supported a controversial board move to remove Altman last year, a coup that ultimately failed but left lasting impacts on the company's internal dynamics.

Sutskever’s exit was followed by the departure of Jan Leike, his co-leader on the Superalignment team. Leike’s resignation post on X (formerly Twitter) highlighted his concerns, stating that safety at OpenAI had been overshadowed by the pursuit of new and flashy products. This sentiment echoes the worries of many in the AI community who believe that safety and ethical considerations are being sidelined in the race to develop cutting-edge AI technologies.

OpenAI policy researcher Gretchen Krueger also resigned, citing similar safety concerns. Her departure adds to a growing list of key personnel who have left the company, pointing to deeper issues within OpenAI regarding its commitment to safety and ethical AI development.

In response to these concerns, OpenAI has not only formed the new safety board but is also actively testing a new AI model, though it has not confirmed if this is the anticipated GPT-5. This development underscores the dual focus of OpenAI: advancing AI technology while attempting to address safety and ethical considerations.

Earlier this month, OpenAI introduced a new voice for ChatGPT, named Sky, which sparked controversy due to its resemblance to actress Scarlett Johansson’s voice. CEO Sam Altman had hinted at this similarity on X, leading to widespread speculation and concern. Johansson later clarified that she had declined any offers to provide her voice for ChatGPT. Altman responded by stating that OpenAI never intended for Sky to sound like Johansson and that he reached out to her only after casting the voice actor. This incident has heightened scrutiny over OpenAI's practices and transparency, leaving both AI enthusiasts and critics uneasy.

The new safety team includes several prominent figures within OpenAI: Aleksander Madry, head of preparedness; Lilian Weng, head of safety; John Schulman, head of alignment science; Matt Knight, head of security; and Jakub Pachocki, chief scientist. Despite the impressive lineup, the involvement of two board members and Altman himself has led to skepticism about whether OpenAI is genuinely addressing the concerns raised by former employees and the broader AI community.

OpenAI's recent turmoil highlights the challenges faced by organizations at the forefront of AI development. The rapid pace of innovation often comes with significant risks, and balancing progress with safety is a delicate act. The departures of key personnel, particularly those involved in safety and alignment, suggest a potential misalignment of priorities within the company.

The resignations of Sutskever, Leike, and Krueger point to a broader issue of trust and confidence in OpenAI's leadership and direction. Their concerns about safety taking a backseat to product development resonate with many in the AI field who fear that the race to develop the next big thing in AI could lead to oversights in critical safety areas.

OpenAI's response, in the form of the new safety team, is a step towards addressing these concerns. However, the effectiveness of this team will depend on its ability to operate independently and make impactful changes. The presence of Altman, D’Angelo, and Seligman on the board could be seen as both a strength and a potential conflict of interest, depending on how they navigate their dual roles.

Moving forward, OpenAI must prioritize transparency and open communication with both its internal team and the public. The controversy surrounding the Sky voice actor underscores the importance of clear and honest dialogue to maintain trust. Ensuring that safety remains at the forefront of their operations, alongside innovation, will be crucial in regaining confidence and setting a positive example for the industry.

The stakes are high in the field of AI, and the actions taken by leaders like OpenAI will shape the future of technology and its impact on society. By committing to rigorous safety protocols and addressing the concerns of its workforce, OpenAI has the opportunity to lead by example and foster a safer, more ethical approach to AI development.
