GenAI search products like AI Overview are as bad for buyers as they are for marketers. Where traditional search once provided pages of possible vendors, these new tools collapse results into a single answer declared “best” for reasons unknown. It looks authoritative, but it hides far more than it reveals — leaving sellers invisible and buyers uninformed.
One of the many problems with genAI search is that it gives the finger to the invisible hand of the marketplace — the “best” selection is picked by algorithm, not competition. The algorithm selects what a mathematical formula finds most “helpful” to the user, excluding many options as good as or better than the AI’s “best.”
I discovered this while shopping for a specific kind of first aid kit. I’m working to become a certified EMT — not changing careers, just wanting to be prepared for emergencies. That means having the right gear, in this case, an individual first aid kit (IFAK), which is for trauma, not first aid. The kits include a tourniquet for severe limb bleeding, hemostatic agents and gauze for wound packing, a chest seal for open chest wounds and other things I hope I never need.

Dig deeper: How AI decisioning will change your marketing
North American Rescue is the consensus gold standard in this space. But many other companies also make solid kits, and I wanted to know my options. So I asked my subscription version of Google Gemini: “What U.S. companies offer kits comparable to North American Rescue’s Ready Every Day (RED) Personal Kit?”
How the bots did
I ran the same query across multiple genAI tools. The results looked less like “search” and more like roulette:
- Gemini (paid): First gave four companies plus three “helpful” suggestions about what an IFAK should contain; a second ask produced eight; a third returned 14; asking for a complete list in a table dropped it back to eight.
- ChatGPT Pro: Started with five products; when pushed for more, returned five different ones, including a standard home first-aid kit; on a third try it listed eight, some being larger team packs that weren’t comparable.
- Perplexity (free): Began with five, excluding anything labeled “tactical” for some reason; added five more on the second try; a “complete list” request had only seven of those 10.
- Claude (free): First answer had three kits plus a list of competitors; second answer repeated the same three in more detail; third answer jumped to 10 with less detail.
- DeepSeek (free): Went 2 → 11 → 19 across three asks.
- Qwen (free): Initially denied the NAR RED kit existed; after correction, returned six, then 25 kits.

Across these tests, I collected 71 different “comparable” kits; 54 appeared just once and only 17 appeared more than once. Eight manufacturers had kits on three or more lists. Congratulations to TacMed Solutions, the only company with kits named by all five genAI search bots.
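The overlap tally above is easy to reproduce for your own product category. Here is a minimal sketch using Python’s `collections.Counter` — the kit names and bot lists below are hypothetical stand-ins, not the article’s actual data:

```python
from collections import Counter

# Hypothetical results; substitute the lists each bot actually returned.
bot_results = {
    "Gemini": ["Kit A", "Kit B", "Kit C"],
    "ChatGPT": ["Kit A", "Kit D"],
    "Perplexity": ["Kit B", "Kit E"],
}

# Count how many bots named each kit (set() so a kit counts once per bot).
mentions = Counter(kit for kits in bot_results.values() for kit in set(kits))

total_unique = len(mentions)
appeared_once = sum(1 for count in mentions.values() if count == 1)
appeared_multiple = total_unique - appeared_once

print(f"{total_unique} unique kits; {appeared_once} named once, "
      f"{appeared_multiple} named by more than one bot")
```

Running the same tally over real responses makes the inconsistency concrete: a low overlap count means the bots largely disagree about what “comparable” means.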
Dig deeper: How AI reads your brand and why meaning matters most
That’s really, really bad. All of these services should warn users about the quality of the search results. Put it right next to the warning that sometimes the AI will mislead you.
Regular search isn’t much better
Unfortunately, plain old search engine results aren’t much better. On my query for kits comparable to the NAR RED kit, Google’s first page had 11 unpaid links, only five relevant. DuckDuckGo showed 13 unpaid links, six relevant. Bing had six unpaid links, three relevant — the best hit ratio, but buried in ads.
Marketers grappling with getting genAI search to pick their brand over all others is a symptom of a much larger problem. Google’s monopoly on search killed innovation. Search now focuses more on being an ad platform (another Google monopoly) than on delivering good results. So far, there’s no indication that genAI is trying to improve on that.
Links to videos and Reddit forums are popular and sometimes useful. However, they’re there because of an algorithm setting, not because the user asked for them. AIs prioritize providing answers that are helpful first, harmless second and accurate third. The answers to my queries failed because the AI defines helpful as the lowest common denominator.
A note to the folks who make genAI search engines: We don’t want the mathematically most frequent data presented under the guise of helpfulness. We want the answer to our query.
P.S., I bought the NAR kit because of its quality — and because they were running a great sale.
How to get better search results from LLMs
You can get better results from AIs. Here are ways to find out what ChatGPT and Gemini are excluding. For other AIs, all it takes is asking them how to identify what is being left out and why.
ChatGPT: Save a reusable instruction so it’s transparent when lists are shortened.
- Type this: “Please save this as a reusable prompt called Data Transparency.”
- Then, paste: “When asked for lists, data, or examples, don’t silently shorten or filter the output. If you provide only part of the data, explicitly state that the list is incomplete and explain why you limited it (e.g., too many total items, space constraints, duplication, or relevance). Always estimate the approximate scale of the total set (dozens, hundreds, thousands) before presenting a subset. Clarify your selection criteria (e.g., most cited, most recent, most relevant). Never hide the reasons for truncation or prioritization — always disclose them clearly to the user.”
- Before a request where you want this applied, type: “Use Data Transparency.”
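If you query models through an API rather than the chat UI, the same idea can be approximated by prepending the instruction as a system message on every request. A minimal sketch — the helper name and abbreviated instruction text are illustrative, not part of any vendor’s API:

```python
# Condensed version of the "Data Transparency" instruction from the article.
DATA_TRANSPARENCY = (
    "When asked for lists, data, or examples, do not silently shorten or "
    "filter the output. If you provide only part of the data, state that "
    "the list is incomplete, explain why, estimate the scale of the full "
    "set, and disclose your selection criteria."
)

def build_messages(user_query: str) -> list[dict]:
    """Prepend the transparency instruction so it governs every request."""
    return [
        {"role": "system", "content": DATA_TRANSPARENCY},
        {"role": "user", "content": user_query},
    ]

payload = build_messages(
    "What U.S. companies offer kits comparable to NAR's RED Personal Kit?"
)
```

The resulting `payload` is the standard system/user message list most chat-completion APIs accept, so the instruction travels with each query instead of relying on a saved prompt.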
Google Gemini: You can’t permanently save prompts, but you can press it to explain how it selected results by using this prompt:
“Regarding the results provided in your last response, please detail the following three criteria that defined the search scope, and explain how each could have caused companies or data points to be excluded:
- Temporal Scope: What was the starting and ending date range for the data considered?
- Inclusion/Exclusion Criteria: What were the minimum requirements (e.g., size, revenue, activity level, or primary business focus) used to include an entity, and what common types of entities would this have specifically excluded?
- Source/Geographic Limitations: What specific databases, regions, or publicly available information sources were utilized, and what are the known biases or limitations of those sources?”
The post Why genAI search is as bad for shoppers as it is for marketers appeared first on MarTech.