
Lilian Weng departs OpenAI in latest shift among AI safety researchers

Her resignation comes after the exits of several other high-profile employees, including Ilya Sutskever and Jan Leike, who led the now-disbanded Superalignment team focused on developing safety protocols for superintelligent AI.

Social Samosa

Another prominent researcher is departing OpenAI, as Lilian Weng, the startup’s vice president of research and safety, announced her resignation on Friday after seven years with the company. Weng has served as VP since August and previously led the company’s safety systems team.

In a post on X, formerly Twitter, Weng reflected on her tenure, writing, “After 7 years at OpenAI, I feel ready to reset and explore something new.” Her last day with the company will be 15 November, though she has not disclosed her next career move. “I made the extremely difficult decision to leave OpenAI,” Weng continued. “Looking at what we have achieved, I’m so proud of everyone on the Safety Systems team and I have extremely high confidence that the team will continue thriving.”

Weng’s departure follows a wave of exits by safety researchers, policy analysts, and executives who, over the last year, have cited concerns about the company’s direction. Several departing employees have publicly questioned the company’s commitment to prioritising AI safety alongside the development of its commercial products.

In 2023, Weng was tasked with leading the safety systems unit, a team now comprising more than 80 scientists, researchers, and policy experts dedicated to building AI safety measures as the company expands its technology portfolio. Weng’s career with OpenAI began in 2018 on the robotics team, which built a Rubik’s cube-solving robotic hand in a two-year project. She then transitioned to the company’s applied AI research team in 2021, aligning with the company’s shift toward language models like GPT-3 and GPT-4.

Her resignation comes after the exits of several other high-profile employees, including Ilya Sutskever and Jan Leike, who led the now-disbanded Superalignment team focused on developing safety protocols for superintelligent AI. Both left OpenAI this year to continue their work on AI safety at other organisations. 

Further departures include Miles Brundage, a policy researcher who announced in October that the company had disbanded its AGI readiness team, and Suchir Balaji, a former researcher who, according to the New York Times, left the startup because he believed its technology would cause more harm than benefit to society. In recent months, former CTO Mira Murati, chief research officer Bob McGrew, research VP Barret Zoph, and co-founder John Schulman have also exited the company. Some of these individuals, including Leike and Schulman, have since joined OpenAI competitor Anthropic, while others have gone on to launch independent ventures.

The turnover at the company highlights the shifting priorities within the AI industry as companies like OpenAI pursue advanced AI capabilities at scale.
