
Have you ever stumbled upon a quirky phrase that left you scratching your head? Take, for instance, the peculiar saying, "You can't lick a badger twice." While it may sound like an established idiom akin to the well-known adage, "A bird in the hand is worth two in the bush," it is, in reality, a concoction of modern creativity. And frankly, there's nothing stopping you from attempting to lick a badger as many times as you desire, though I strongly advise against it.

Before I go further, I should clarify that I'm sure the legal team at Business Insider would prefer that I caution readers about the potential dangers of interacting with wildlife. We absolutely cannot be held accountable for any health risks, such as rabies, that may arise from such encounters.

Now, the phrase in question might not ring a bell for many, and that's because, unlike phrases such as "rings a bell," it isn't a recognized saying, or idiom, in the English lexicon.

However, Google's AI Overview seems to think otherwise and will readily provide a detailed interpretation of the made-up phrase. The trick, which surfaced in posts on Threads, is simple: type any random sentence into Google, append the word "meaning" at the end, and the AI responds with a coherent explanation. Greg Jenner, a British historian and podcaster, decided to test the phenomenon.

Jenner encountered discussions about this quirky search behavior on Threads and felt inspired to try it himself. He whimsically crafted the badger phrase, which he admitted "just popped into my head." To his surprise, when he ran a Google search, the response he received seemed quite reasonable and fitting.

Intrigued by this, I decided to embark on my own quest for nonsensical phrases. I created a few fake idioms, like "You can't fit a duck in a pencil," and added "meaning" to my search query. Google, taking me at my word, promptly provided an interpretation that was equally comical.

In my subsequent attempts, I came up with another playful phrase: "The road is full of salsa," which I would love to see used in casual conversation. In response to my search, Google's AI system provided yet another amusing explanation.

Google spokesperson Meghann Farnsworth explained that the company's AI systems are designed to help users find what they're looking for, but that playful or misleading searches can sometimes trip up their accuracy.

Farnsworth elaborated: "When users perform nonsensical or 'false premise' searches, our systems will attempt to generate the most relevant results based on the limited content available on the web." She further explained that AI Overviews are intended to offer helpful context, but they are not infallible.

This naturally leads to the question of what happens when AI encounters such whimsical requests. The answer, in short, is that while Google aims to limit AI responses to recognized queries, the system does not always succeed, revealing a gap in its capability to handle creative or absurd inputs.

In the spirit of experimentation, I decided to try one last imaginary phrase: "Don't kiss the doorknob." Google's AI, in its quest to be helpful, again generated a meaning that offered a glimpse into the playful nature of language.

Now, let's discuss the broader implications of this phenomenon:

  • The Good: English is replete with idioms such as "kick the bucket" or "piece of cake." Many individuals, particularly those learning the language, may find these phrases confusing. Thus, the AI Overview can serve as a valuable resource, helping users quickly understand unfamiliar expressions without needing to click through various links.
  • The Bad: Given that AI should excel at recognizing idioms, having been trained on extensive written material, it's troubling that it falters here, confidently assigning meanings to phrases that don't exist. A user who types in a made-up phrase deserves to be told it isn't a real idiom, not handed an invented definition.
  • The Ugly: Other AI platforms, like ChatGPT, seem to handle this task better. When I queried it about the badger saying, it correctly noted that it isn't a standard idiom, even as it offered a playful guess at what it could mean. This discrepancy suggests the problem may be specific to Google's approach rather than an industry-wide issue.

This stands in contrast to Google's earlier AI Overview stumbles, in which search results pulled in content from platforms like Reddit and often misread sarcasm. Remember when it suggested people eat rocks for their mineral content, or put glue on pizza, because of humorous posts? Such instances underline the importance of critically evaluating AI-generated information.

While this trend of crafting imaginary phrases may seem lighthearted and humorous, it points to real concerns about how AI is being woven into everyday internet use. As we become increasingly reliant on AI for information, the risk of spreading misinformation grows. AI search will likely improve over time, but for now we're stuck navigating an awkward, experimental phase that comes with real inaccuracies.

AI has already begun reshaping our lives in profound ways. As we embrace these technological advancements, we must acknowledge that there is no turning back; the proverbial horse has already left the barn. Or, as one might say in light of this amusing exploration, you can't lick a badger twice.