I shipped a feature that nobody used. Every test passed. The code worked. The feature was clearly visible in the codebase. And our adoption metrics didn't move. For weeks I assumed the feature wasn't valuable. Then a customer described, in detail, the exact problem the feature solved, and asked when we were planning to build it. We had built it. Six weeks earlier. They couldn't find it.
I spent the next month watching session recordings. Users hit the page that should have led them to the feature, looked at the screen for several seconds, and left. The link was there. It was just one of fourteen things on a busy page, with the wrong label, behind the wrong icon. The path through the application that we had imagined was not the path users took. And nothing in our test suite (none of the unit tests, integration tests, or end-to-end Playwright runs) would ever have flagged this. They were all answering the wrong question.
Conventional testing answers "given a sequence of steps, does this code behave correctly?" That is necessary. It is not sufficient. The question I needed answered was different: "starting from the homepage, can a user reach this feature at all?" There was no off-the-shelf tool that would tell me. So I built one.
The first version was a script that opened a headless browser, clicked things, and tried to find a target page. It failed. I rebuilt it as an AI agent that explored the application like a real user, reasoning about what it saw, building a map of features and paths. That worked. We pointed it at the application I had just shipped the invisible feature on. The Navigation Map made the problem obvious in thirty seconds. The fix took an afternoon.
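The core of that first version was little more than a breadth-first crawl. A minimal sketch of the idea, with an injected `fetch_links` function standing in for the headless browser (the names here are illustrative, not Glia Quest's actual code):

```python
from collections import deque

def crawl(start_url, fetch_links, max_pages=100):
    """Breadth-first exploration from a start URL.

    fetch_links(url) -> list of link URLs found on that page.
    Returns a dict mapping each reached URL to the path taken to it.
    """
    paths = {start_url: [start_url]}
    queue = deque([start_url])
    while queue and len(paths) < max_pages:
        url = queue.popleft()
        for link in fetch_links(url):
            if link not in paths:
                paths[link] = paths[url] + [link]
                queue.append(link)
    return paths

# Tiny in-memory "site" standing in for real pages fetched by a browser.
SITE = {
    "/": ["/pricing", "/docs"],
    "/pricing": ["/"],
    "/docs": ["/docs/api"],
    "/docs/api": [],
    "/reports": [],  # exists, but nothing links to it
}

reached = crawl("/", lambda url: SITE.get(url, []))
# "/reports" never appears in `reached`: the page works, but no user can find it.
```

The hard part, and the reason the naive script failed, is that `fetch_links` on a real application is not a list of anchor tags: it is menus, modals, and state-dependent UI that an agent has to reason about rather than enumerate.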
What Glia Quest does
Glia Quest tests your web application the way your users experience it. It navigates the UI, records every feature it can reach and how it got there, and produces a Navigation Coverage Score showing what fraction of your product is actually discoverable. It needs no SDK, no code access, no test scripts. You give it a URL. It gives you back the map your users are using. For every unreachable feature, the report includes an investigation prompt you can paste into your AI coding assistant to find and fix the navigation gap.
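Once the exploration has produced a map, the score itself reduces to a simple ratio. A hedged sketch of the shape of that computation (the feature list and prompt wording are illustrative, not Glia Quest's actual report format):

```python
def coverage_report(all_features, reachable):
    """Score discoverability and emit prompts for unreachable features.

    all_features: dict of feature name -> the URL where it lives.
    reachable: set of URLs the exploration actually reached.
    """
    unreachable = {name: url for name, url in all_features.items()
                   if url not in reachable}
    score = 1 - len(unreachable) / len(all_features)
    prompts = [
        f"Feature '{name}' at {url} was never reached from the homepage. "
        f"Find which page should link to it and why that link is missing."
        for name, url in unreachable.items()
    ]
    return score, prompts

features = {"billing": "/billing", "exports": "/exports", "reports": "/reports"}
score, prompts = coverage_report(features, {"/", "/billing", "/exports"})
# score == 2/3, and one investigation prompt is generated, for "reports"
```

The prompt is the actionable half: a score tells you something is invisible, while the prompt hands your coding assistant the specific gap to hunt down.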
The output is not a verdict on whether your code works. The code works. The output is a verdict on whether your interface communicates what the code does, to a user who has never seen your codebase, never read your documentation, and has thirty seconds of attention to spare.
The Glia portfolio
Glia Quest is the third product from Glia Technology, joining a portfolio of four production web applications used by paying customers. Every feature of Glia Quest is tested first against our own portfolio. If we ship something that introduces a navigation gap in one of our own products, the tool tells us before our users do.