There is a class of search query that breaks every keyword index ever built. Not dramatically, not with an error message. The index just returns something plausible that misses the point, and you never know what you missed.
The query looks like this: "essays criticizing modern AI benchmarks." Or: "engineering blogs that explain ML intuitively." Or: "case studies of companies that migrated from monoliths to microservices." None of these contain the words you'd actually find in the content they're pointing at. The essays don't say "I am criticizing AI benchmarks." The engineering blogs don't announce that they explain ML intuitively. The case studies don't open with "here is a story about migrating from a monolith."
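The mismatch is easy to make concrete. In the toy sketch below (the document text is invented for illustration), the query and a genuinely relevant document share no words at all, so any keyword scorer — whether raw overlap or something like BM25 — ranks the match at or near zero:

```python
# Toy illustration of the vocabulary-mismatch problem: the query's words
# and the relevant document's words do not overlap, so a keyword index
# scores the pair at zero even though the document is exactly on topic.

def keyword_overlap(query: str, doc: str) -> float:
    """Jaccard overlap between the word sets of two texts."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

query = "essays criticizing modern ai benchmarks"

# A hypothetical essay that is exactly what the query wants,
# phrased the way such essays actually read:
relevant_doc = (
    "mmlu and its successors reward memorization over reasoning, "
    "and leaderboard chasing has distorted how labs train models"
)

print(keyword_overlap(query, relevant_doc))  # 0.0 — a keyword index sees no match
```

A semantic index sidesteps this by comparing embedding vectors of the query and document rather than their word sets, so the match survives even with zero lexical overlap.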
This is the core problem for AI agents doing conceptual research. It is not a rare edge case. It is the normal condition for any search that starts from an idea rather than from specific words.
This guide covers one job: finding information by meaning on the open web. It explains what breaks when you try to do this with keyword tools, how semantic search actually works, who built the only genuine semantic index of the open web, how to write queries that get results, and what fails quietly even when the tool is the right one.
This piece is part of Garden's complete guide to agentic AI search, which covers all eight jobs an agent does on the web and the infrastructure behind each.