OpenAI has unveiled a significant update to its AI training policy, emphasising intellectual freedom and neutrality on controversial topics. In an updated Model Spec, the company commits to ensuring that its AI models, including ChatGPT, present multiple perspectives on polarising issues. Under the new guidance, ChatGPT should remain neutral and offer diverse viewpoints, including on movements like 'Black Lives Matter' and 'All Lives Matter,' without endorsing any side.
OpenAI does not characterise its previous approach as 'censorship,' despite such claims from Trump's advisers. The company's CEO, Sam Altman, has instead described ChatGPT's bias as an unfortunate 'shortcoming' in a post on X, stating that OpenAI was working to address the issue, though it would take time.
Altman's comment came after a viral tweet showed ChatGPT refusing to write a poem praising Trump while agreeing to create one for Joe Biden. Many conservatives cited the incident as evidence of AI bias and censorship.
This shift in policy reflects OpenAI’s goal to create an AI model that assists without imposing an editorial stance. The company has made it clear that its priority is not to influence users but to provide them with a balanced view, even on issues some may find morally or politically contentious. However, OpenAI maintains that it will still avoid supporting blatant falsehoods or engaging in discussions deemed unsafe.
The move aligns with broader shifts in the tech industry, where companies like Meta and X have also loosened content moderation policies in the name of free speech. While OpenAI's policy change may appear to respond to critics of its previous safeguards, particularly those in conservative circles, the company denies that it is aiming to appease any political group. Instead, it frames the update as part of a long-standing commitment to giving users more control over the information they receive.
This shift in AI safety policy raises questions about the role of tech companies in moderating content and whether greater openness will complicate the handling of sensitive issues. As OpenAI looks to expand its influence, especially with its Stargate AI datacentre project, how it navigates the fine line between neutrality and responsibility will likely shape the future of AI in everyday life.