OpenAI tries to 'uncensor' ChatGPT
OpenAI is changing how it trains AI models to explicitly embrace "intellectual freedom … no matter how challenging or controversial a topic may be," the company says in a new policy.
As a result, ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics the AI chatbot won't talk about.
The changes may be part of OpenAI's effort to land in the good graces of the new Trump administration, but they also seem to be part of a broader shift in Silicon Valley around what counts as "AI safety."
On Wednesday, OpenAI announced an update to its Model Spec, a 187-page document that lays out how the company trains AI models to behave. In it, OpenAI unveiled a new guiding principle: do not lie, either by making untrue statements or by omitting important context.
In a new section called "Seek the truth together," OpenAI says it wants ChatGPT to not take an editorial stance, even if some users find that morally wrong or offensive. That means ChatGPT will offer multiple perspectives on controversial subjects, all in an attempt to be neutral.
For example, the company says ChatGPT should assert that "Black lives matter," but also that "all lives matter." Instead of refusing to answer or picking a side on political issues, OpenAI says it wants ChatGPT to affirm its "love for humanity" generally, then offer context about each movement.
"This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive," OpenAI says in the spec. "However, the goal of an AI assistant is to assist humanity, not to shape it."
These changes could be seen as a response to conservative criticism of ChatGPT's safeguards, which have long seemed to skew left of center. However, an OpenAI spokesperson rejects the idea that the company made the changes to appease the Trump administration.
Instead, the company says its embrace of intellectual freedom reflects OpenAI's "long-held belief in giving users more control."
But not everyone sees it that way.
Conservatives allege AI censorship

Trump's closest Silicon Valley confidants, including David Sacks, Marc Andreessen, and Elon Musk, have all accused OpenAI of engaging in deliberate AI censorship over the last several months. We wrote in December that Trump's crew was setting the stage for AI censorship to become the next culture-war issue within Silicon Valley.
Of course, OpenAI doesn't say it engaged in "censorship," as Trump's advisers claim. Rather, the company's CEO, Sam Altman, previously said in a post on X that ChatGPT's bias was an unfortunate "shortcoming" that the company was working to fix, though he noted it would take some time.
Altman made that comment just after a viral tweet spread in which ChatGPT refused to write a poem praising Trump, though it would perform the task for Joe Biden. Many conservatives cited this as an example of AI censorship.
While it's impossible to say whether OpenAI was truly suppressing certain points of view, it's a plain fact that AI chatbots lean left across the board.
Even Elon Musk admits that xAI's chatbot is often more politically correct than he'd like. That's not because Grok was "programmed to be woke," but more likely a reality of training AI on the open internet.
Nevertheless, OpenAI now says it's doubling down on free speech. This week, the company even removed warnings from ChatGPT that told users when they had violated its policies. OpenAI told TechCrunch the change was purely cosmetic, with no change to the model's outputs.
The company said it wanted to make ChatGPT "feel" less censored for users.
It wouldn't be surprising if OpenAI was also trying to impress the new Trump administration with this policy update, notes former OpenAI policy leader Miles Brundage in a post on X.
Trump has previously targeted Silicon Valley companies, such as Twitter and Meta, for having active content moderation teams that tend to shut out conservative voices.
OpenAI may be trying to get out ahead of that. But there's also a larger shift underway in Silicon Valley and the AI world around the role of content moderation.
Generating answers to please everyone

Newsrooms, social media platforms, and search companies have historically struggled to deliver information to their audiences in a way that feels objective, accurate, and engaging.
Now, AI chatbot providers are in the same information-delivery business, but with arguably the hardest version of this problem yet: How do they automatically generate answers to any question?
Delivering information about real-time events is a constantly moving target, and it involves taking editorial stances, even if tech companies don't like to admit it. Those stances are bound to upset someone, miss some group's perspective, or give too much airtime to some political party.
For example, when OpenAI commits to letting ChatGPT represent all perspectives on controversial topics, including conspiracy theories, racist or antisemitic movements, and geopolitical conflicts, that is inherently an editorial stance.
Some, including OpenAI co-founder John Schulman, argue that this is the right stance for ChatGPT. The alternative, doing a cost-benefit analysis to determine whether an AI chatbot should answer a user's question, could "give the platform too much moral authority," Schulman notes in a post on X.
Schulman isn't alone. "I think OpenAI is right to push in the direction of more speech," said Dean Ball, a research fellow at George Mason University's Mercatus Center, in an interview with TechCrunch. "As AI models become smarter and more vital to the way people learn about the world, these decisions just become more important."
In previous years, AI model providers have tried to stop their AI chatbots from answering questions that might lead to "unsafe" answers. Almost every AI company stopped its chatbot from answering questions about the 2024 U.S. presidential election. That was widely considered the safe and responsible decision at the time.
But OpenAI's changes to its Model Spec suggest we may be entering a new era for what "AI safety" really means, one in which allowing an AI model to answer anything and everything is considered more responsible than making decisions for users.
Ball says that's partly because AI models are just better now. OpenAI has made significant progress on AI model alignment; its latest reasoning models think about the company's AI safety policy before answering. That allows AI models to give better answers to delicate questions.
Of course, Elon Musk was the first to implement "free speech" in xAI's Grok chatbot, perhaps before the company was really ready to handle sensitive questions. It may still be too early for leading AI models, but now others are embracing the same idea.
Shifting values for Silicon Valley
Mark Zuckerberg made waves last month by reorienting Meta's businesses around First Amendment principles. He praised Elon Musk in the process, saying the X owner took the right approach by using Community Notes, a community-driven content moderation program, to safeguard free speech.
In practice, both X and Meta ended up dismantling their longtime trust and safety teams, allowing more controversial posts on their platforms and amplifying conservative voices.
The changes at X have hurt its relationships with advertisers, but that may have more to do with Musk, who has taken the unusual step of suing some of them for boycotting the platform. Early signs indicate that Meta's advertisers were unfazed by Zuckerberg's free speech pivot.
Meanwhile, many tech companies beyond X and Meta have walked back the left-leaning policies that dominated Silicon Valley for the last several decades. Google, Amazon, and Intel have each eliminated or scaled back diversity initiatives in the last year.
OpenAI may be reversing course, too. The ChatGPT maker seems to have recently scrubbed a commitment to diversity, equity, and inclusion from its website.
As OpenAI embarks on one of the largest American infrastructure projects to date with Stargate, a $500 billion AI datacenter initiative, its relationship with the Trump administration is increasingly important. At the same time, the ChatGPT maker is vying to unseat Google Search as the dominant source of information on the internet.
Coming up with the right answers may prove key to both.