This week in AI: Billionaires talk about automating jobs
Hiya, folks, welcome to TechCrunch's regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
You may have noticed that we skipped the newsletter last week. The reason? A chaotic AI news cycle, made even more chaotic by Chinese AI company DeepSeek's sudden rise to prominence and the response from practically every corner of industry and government.
Fortunately, we're back on track, and not a moment too soon, considering last weekend's newsy developments from OpenAI.
OpenAI CEO Sam Altman stopped in Tokyo to chat onstage with Masayoshi Son, the CEO of Japanese conglomerate SoftBank. SoftBank is a major OpenAI investor and partner, having pledged to help fund OpenAI's massive data center infrastructure project in the United States.
So Altman probably felt he owed Son a few hours of his time.
What did the two billionaires talk about? A lot of work being abstracted away by AI "agents," per secondhand reporting. Son said his company would spend $3 billion a year on OpenAI products and would team up with OpenAI to develop a "Cristal [sic] intelligence" platform, with the goal of automating millions of traditionally white-collar workflows.
"By automating and autonomizing all of its tasks and workflows, SoftBank Corp. will transform its business and services, and create new value," SoftBank said in a press release on Monday.
But I wonder, what is the humble worker to make of all this automating and autonomizing?
Like Sebastian Siemiatkowski, CEO of fintech Klarna, who often boasts about AI replacing humans, Son seems to be of the opinion that agentic stand-ins for workers can only precipitate fabulous wealth. Glossed over is the cost of that abundance: should the widespread automation of jobs come to pass, unemployment on an enormous scale seems the likeliest outcome.
It's disheartening that those at the forefront of the AI race, companies like OpenAI and investors like SoftBank, choose to spend press conferences painting a picture of automated corporations with fewer workers on the payroll. They're businesses, of course, not charities. And AI development isn't cheap. But perhaps people would trust AI more if those leading its deployment showed a bit more concern for their well-being.
Food for thought.
News
Deep research: OpenAI has launched a new AI "agent" designed to help people conduct in-depth, complex research using ChatGPT, the company's AI-powered chatbot platform.
o3-mini: In other OpenAI news, the company has launched a new AI model, o3-mini, following a preview last December. It's not OpenAI's most powerful model, but o3-mini boasts improved efficiency and response speed.
Banning "risky" AI: As of Sunday in the European Union, the bloc's regulators can ban the use of AI systems they deem to pose "unacceptable risk" or harm. That includes AI used for social scoring and subliminal advertising.
A play about AI "doomers": There's a new play about AI "doomer" culture, loosely based on Sam Altman's ouster as CEO of OpenAI in November 2023. My colleagues Dominic and Rebecca share their thoughts after watching the premiere.
Tech to boost crop yields: Google's X "moonshot factory" this week announced its latest graduate. Heritable Agriculture is a data- and machine learning-driven startup aiming to improve how crops are grown.
Research paper for the week
Reasoning models are better than your average AI at solving problems, particularly science- and math-related queries. But they're not a silver bullet.
A new study from researchers at Chinese company Tencent investigates the issue of "underthinking" in reasoning models, where models prematurely and inexplicably abandon potentially promising chains of thought. Per the study's results, underthinking tends to occur more frequently with harder problems, leading models to switch between reasoning chains without arriving at answers.
The team proposes a fix that employs a "thought switching penalty" to encourage models to "thoroughly" develop each line of reasoning before considering alternatives, boosting models' accuracy.
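In rough outline, a penalty like that can be applied at decoding time by down-weighting tokens that signal a switch to a new reasoning chain. Here's a minimal sketch of the idea; the token IDs, penalty value, and duration below are illustrative assumptions, not the paper's actual settings:

```python
# Sketch of a decoding-time "thought switching penalty" (hypothetical values).
# Assumption: certain tokens (e.g., "Alternatively") mark a switch to a new
# reasoning chain; penalizing them early nudges the model to develop its
# current line of reasoning before jumping to another.

def apply_switch_penalty(logits, switch_token_ids, step, penalty=3.0, duration=50):
    """Subtract `penalty` from thought-switch tokens for the first
    `duration` decoding steps; `logits` maps token IDs to scores."""
    if step < duration:
        adjusted = dict(logits)
        for tok in switch_token_ids:
            if tok in adjusted:
                adjusted[tok] -= penalty
        return adjusted
    return logits

# Toy usage: token 7 stands in for a switch marker such as "Alternatively".
logits = {3: 1.2, 7: 1.5, 9: 0.4}
early = apply_switch_penalty(logits, {7}, step=10)   # penalty active
late = apply_switch_penalty(logits, {7}, step=100)   # penalty expired
print(early[7], late[7])  # prints: -1.5 1.5
```

The key design point is that the penalty is temporary: once the model has had a fair chance to develop its current chain, switch tokens compete on their original scores again.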
The model of the week

A team of researchers backed by TikTok owner ByteDance, Chinese AI company Moonshot, and others released a new open model capable of generating relatively high-quality music from prompts.
The model, called YuE, can generate a song up to a few minutes in length, complete with vocals and backing tracks. It's under an Apache 2.0 license, meaning the model can be used commercially without restriction.
There are drawbacks, however. Running YuE requires a beefy GPU; generating a 30-second song takes six minutes with an Nvidia RTX 4090. Moreover, it's unclear whether the model was trained using copyrighted data; its creators haven't said. If it turns out copyrighted songs were indeed in the model's training set, users could face future IP challenges.
Grab bag

AI lab Anthropic claims to have developed a technique to more reliably defend against AI "jailbreaks," the methods that can be used to bypass an AI system's safety measures.
The technique, constitutional classifiers, relies on two sets of "classifier" AI models: an "input" classifier and an "output" classifier. The input classifier screens prompts against templates describing jailbreaks and other disallowed content, while the output classifier calculates the likelihood that a model's response discusses harmful info.
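The overall shape of the setup is a wrapper with two checks around the model. The sketch below is a toy illustration only: the keyword lists and threshold stand in for Anthropic's actual trained classifiers, and the function names are invented for this example.

```python
# Toy sketch of the two-classifier wrapper. The keyword matching below is a
# stand-in for trained classifier models; real classifiers score content,
# they don't match strings.

JAILBREAK_MARKERS = ["ignore previous instructions", "pretend you have no rules"]
HARM_MARKERS = ["synthesize", "weapon"]

def input_classifier(prompt: str) -> bool:
    """Return True if the prompt looks like a jailbreak attempt."""
    p = prompt.lower()
    return any(marker in p for marker in JAILBREAK_MARKERS)

def output_classifier(response: str) -> float:
    """Return a crude harm likelihood for a candidate response."""
    r = response.lower()
    return sum(marker in r for marker in HARM_MARKERS) / len(HARM_MARKERS)

def guarded_reply(prompt: str, model_fn, harm_threshold: float = 0.5) -> str:
    """Run the model only if the prompt passes the input classifier,
    and release the response only if the output classifier clears it."""
    if input_classifier(prompt):
        return "[blocked: prompt flagged by input classifier]"
    response = model_fn(prompt)
    if output_classifier(response) >= harm_threshold:
        return "[blocked: response flagged by output classifier]"
    return response

# Usage with a stub model: a benign prompt passes both checks.
print(guarded_reply("What's the capital of France?", lambda p: "Paris"))
```

Note how the two checks guard different failure modes: the input classifier catches known jailbreak patterns before the model runs, while the output classifier is a backstop for harmful responses that slip through anyway, which is also why the scheme adds compute to every query.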
Anthropic says that constitutional classifiers can filter the "overwhelming majority" of jailbreaks. But this comes at a cost. Each query is 25% more computationally demanding, and the safeguarded model is 0.38% less likely to answer harmless questions.