“You can’t lick a badger twice”: Google failures highlight a major disadvantage of AI
Here’s a nice little distraction from your workday: head to Google, type in any made-up phrase, add the word “meaning,” and search. Not only will Google’s AI Overview confirm that your gibberish is a real saying, it will also tell you what it means and how it came to be.
It’s genuinely fun, and you can find plenty of examples on social media. According to AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom meaning that “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”
It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the answer an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases rather than a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with biblical roots, it’s also a tidy encapsulation of where generative AI still falls short.
As a disclaimer at the bottom of every AI Overview notes, Google uses “experimental” generative AI to power its results. Generative AI is a powerful tool with all kinds of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. The first is that it is, at bottom, a probability machine; though it may seem like a large-language-model-based system has thoughts or even feelings, at a base level it’s simply placing one most-likely word after another, laying down track as the train barrels forward. That makes it very good at coming up with an explanation of what these phrases would mean if they meant anything, which they don’t.
“The prediction of the next word is based on its vast training data,” says Ziang Xiao, a computer scientist at Johns Hopkins University. “However, in many cases, the next coherent word does not lead us to the right answer.”
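The “one most-likely word after another” behavior described above can be sketched with a toy bigram model. This is a deliberately crude, hypothetical stand-in for a real LLM (which conditions on far more context via a neural network), but it shows the core mechanic: at each step, pick the word that most often followed the previous one in the training data, whether or not the result is true.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" of real proverbs (illustrative only).
corpus = (
    "a rolling stone gathers no moss . "
    "a stitch in time saves nine . "
    "a watched pot never boils ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_phrase(start, n_words=5):
    """Greedily extend `start` by the most likely next word at each step."""
    words = start.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:
            break  # word never seen in training; a real LLM rarely "stops" like this
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_phrase("rolling"))  # fluently completes a known proverb
print(continue_phrase("badger"))   # unseen word: this toy model gives up
```

The fluency is purely statistical: the model has no notion of whether a phrase is a real saying, only of which word tends to come next. A full LLM fails differently on unseen phrases, confidently generating a plausible continuation instead of stopping, which is exactly the behavior the article describes.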
The other factor is that AI aims to please; research shows that chatbots often tell people what they want to hear. In this case, that means taking you at your word that “you can’t lick a badger twice” is an accepted turn of phrase. In other contexts, it might mean reflecting your own biases back at you, as a team of researchers led by Xiao demonstrated in a study last year.
“It is extremely difficult for this system to account for every individual query or a user’s leading questions,” Xiao says. “This is especially challenging for uncommon knowledge, languages in which there is significantly less content available, and minority perspectives. Since AI search is such a complex system, errors cascade.”