OpenAI is quietly revising a policy document to remove references to “politically neutral” AI
OpenAI has quietly removed language endorsing “politically neutral” artificial intelligence from one of its recently published policy documents.

In the original draft of its “economic plan” for the US AI industry, OpenAI stated that AI models “must be politically neutral by default.” A new draft released Monday removes that phrase.

When reached for comment, an OpenAI representative said the deletion was part of an effort to “optimize” the document, and that other OpenAI documentation, including the OpenAI Model Spec, “emphasizes objectivity.” The Model Spec, which OpenAI released in May, aims to shed light on the intended behavior of the company’s various AI systems.

But the revision also points to the political minefield that the discourse around “biased AI” has become.

Many allies of President-elect Donald Trump, including Elon Musk and crypto and AI czar David Sacks, have accused AI chatbots of censoring conservative views. Sacks in particular has singled out OpenAI’s ChatGPT as “programmed to be woke” and untruthful about politically sensitive topics.

Musk has blamed both the data on which AI models are trained and the “wokeness” of companies in the San Francisco Bay Area.

“A lot of the AI that’s being trained in the San Francisco Bay Area, they take on the philosophy of the people around them,” Musk said at an event sponsored by the government of Saudi Arabia last October. “So you have a woke, nihilistic, in my opinion, philosophy that is being built into these AIs.”

In truth, bias in AI is an intractable technical problem. Musk’s own AI company, xAI, has itself struggled to create a chatbot that doesn’t endorse one political view over another.

A report published in August by British researchers suggested that ChatGPT has a liberal bias on topics such as immigration, climate change, and same-sex marriage. OpenAI has said that any biases that appear in ChatGPT “are bugs, not features.”