Grok is unpromptedly telling X users about South African “white genocide”
Elon Musk’s AI chatbot Grok appeared to suffer a bug on Wednesday that caused it to reply to dozens of posts on X with information about “white genocide” in South Africa, even when users hadn’t asked anything about the subject.
The strange responses stem from the X account for Grok, which replies to users with AI-generated posts whenever a user tags @grok. When asked about unrelated topics, Grok repeatedly told users about “white genocide” in South Africa, as well as the anti-apartheid chant “Kill the Boer.”
Grok’s strange, off-topic replies are a reminder that AI chatbots are still a nascent technology and may not always be a reliable source of information. In recent months, AI model providers have struggled to moderate the responses of their AI chatbots, which has led to strange behavior.
OpenAI was recently forced to roll back an update that caused its AI chatbot to be overly sycophantic. Google, meanwhile, has faced problems with its Gemini chatbot refusing to answer, or giving misinformation, around political topics.
In one example of Grok’s misbehavior, a user asked Grok about a professional baseball player’s salary, and Grok responded that “The claim of ‘white genocide’ in South Africa is highly debated.”
Several users posted on X about their confusing, bizarre interactions with the Grok AI chatbot on Wednesday.
At this time, it’s unclear what caused Grok’s strange answers, but xAI’s chatbots have been manipulated in the past.
In February, Grok briefly censored unflattering mentions of Elon Musk and Donald Trump. At the time, xAI engineering lead Igor Babushkin appeared to confirm that Grok had briefly been instructed to do so, though the company quickly reversed the instruction after the backlash drew greater attention.
Whatever the cause of the bug, Grok now appears to be responding to users more normally. An xAI spokesperson did not immediately respond to TechCrunch’s request for comment.