Many teams still plan and measure as if the main decision happens on their website.
In reality, AI summaries, rich snippets and other zero-click results now do a large share of that work before anyone arrives.
This does not replace the website completely. Instead it reshapes the journey. People see compressed, opinionated versions of you and your competitors, often in a single block of content. By the time they land on a site, their expectations and shortlists are already formed.
This note is for product, marketing and digital teams who:
Have noticed behaviour changing around search and AI features
Still see most of their measurement and effort focused on on-site journeys
Want a clearer view of where decisions are actually being made
This field note looks at how that shift appears in practice, why common responses fall short and what questions are worth asking before you plan another funnel change.
For a long time the working assumption was simple. Roughly: attention forms somewhere upstream, a click brings the person to your site, and the decision is made there.
The upstream picture was untidy, but the site was treated as the main place the decision happened. Search and ads were there to send traffic. The site was where you persuaded, reassured and closed.
Most research, analytics and experimentation practices grew up inside that model.
AI summaries and rich search results change that model in a few important ways: the results page now answers questions directly, names and compares providers, and sets the criteria people carry into the rest of the journey.
The result is that a significant part of the decision work now happens inside the AI panel. By the time someone reaches you, they may already:
Hold a shortlist that includes or excludes you
Carry a set of comparison criteria chosen by the summary, not by you
Have formed an initial opinion of your strengths and weaknesses
Your website is now operating inside that frame, not defining it on its own.
Take a generic but common scenario.
Someone searches for a complex service. Instead of ten blue links and some ads, they now see:
An AI-generated summary answering the question directly
A handful of suggested providers, compared on a few criteria
The familiar links, pushed further down the page
They read the summary, skim a couple of the suggested providers and then click through to one or two sites to confirm or refine.
Notice what changed:
The first impression was formed on the results page, not on anyone's site
The shortlist existed before a single click
Each site visited is being checked against a frame someone else wrote
If your site is still designed as if visitors arrive blank and open-minded, you are working against the current journey.
NOTE:
In an AI-shaped journey, many visitors arrive in one of two modes: they are either confirming a shortlist decision, or they are pressure-testing it against competitors. Both modes are late-stage. That changes what the first page must do: reduce the work needed to validate fit, surface the criteria the upstream summary has primed, and make the next step obvious. The goal is not to ‘educate from scratch’, it is to make confirmation, comparison and commitment low-friction.
Most organisations do not have a clean way to measure this shift yet, but it often leaves traces:
Entry pages shifting from broad comparison or category pages to specific product or service pages
Fewer pages viewed per session, while conversion among those who do arrive holds steady or improves
A growing share of branded or very specific queries relative to generic ones
Visitors skipping mid-funnel content and landing straight on late-stage pages
None of these traces proves on its own that AI summaries are responsible, but together they are a strong signal that decisions are being shaped earlier and elsewhere.
EXAMPLE:
One organisation saw visits to a key comparison page drop while direct visits to a single product page rose after an AI answer started recommending that product by name.
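If you export raw session data, a rough before-and-after comparison can surface these traces. The sketch below is minimal and assumes a flat CSV of sessions; the column names (`date`, `landing_page`, `pages_viewed`, `converted`), the path prefix and the cutoff date are hypothetical placeholders for whatever your analytics actually provides.

```python
# Rough sketch: compare session behaviour before and after a chosen date to
# look for the traces described above. Column names, the path prefix and the
# cutoff date are hypothetical; adapt them to your own analytics export.
import pandas as pd

sessions = pd.read_csv("sessions.csv", parse_dates=["date"])
cutoff = pd.Timestamp("2024-06-01")  # e.g. when AI summaries appeared for your queries

before = sessions[sessions["date"] < cutoff]
after = sessions[sessions["date"] >= cutoff]

def summarise(df: pd.DataFrame) -> dict:
    """Aggregate the handful of signals worth watching."""
    return {
        # Share of sessions landing directly on deep product pages
        "deep_landing_share": df["landing_page"].str.startswith("/products/").mean(),
        # Are people viewing less of the site per visit?
        "avg_pages_viewed": df["pages_viewed"].mean(),
        # Is conversion holding up among those who still arrive?
        "conversion_rate": df["converted"].mean(),
    }

comparison = pd.DataFrame({"before": summarise(before), "after": summarise(after)})
comparison["change"] = comparison["after"] - comparison["before"]
print(comparison)
```

A rising deep-landing share alongside stable or improving conversion is exactly the pattern in the example above: fewer people browsing, more people confirming.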
Once teams notice these patterns, there are a few familiar reactions. Most of them are not enough on their own.
Work is handed to SEO or content teams with a brief to “optimise for the AI answer”.
That can help at the margins, but it does not address the deeper questions: what the summaries actually say about you, which criteria they are teaching people to use, and whether your positioning survives being compressed into a single block.
Teams keep adjusting forms, buttons and page layouts, because that is where their tools and processes live.
This can improve conversion for people who are already committed. It does not help if the shortlist was drawn up upstream and you were never on it, or if visitors arrive carrying a frame your pages never address.
Some organisations respond by adding more content and tools, trying to replicate the AI answer and every possible comparison locally.
This usually leads to complexity, not clarity.
If people are already arriving with a frame shaped by an AI summary, they do not need another generic explainer. They need to see how your reality fits or challenges that frame.
Instead of asking "how do we get more clicks from AI summaries?", it is often more useful to ask:
What do AI summaries and rich results currently say about us and our competitors?
Which criteria are they teaching people to care about?
Where in the journey is the decision actually being made?
What job is left for our site once someone arrives with that frame?
Those questions are uncomfortable, but they connect upstream reality to the work you control.
A practical starting point is a simple audit.
For a small number of important tasks or queries:
Run the query yourself and capture what the AI summary and rich results actually say
Note which providers are named and which criteria are emphasised
Compare that frame with what your landing pages assume visitors already know and believe
One lightweight way to record what you find is sketched below.
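A minimal sketch, assuming you just want one structured record per query; the class, field names and example values are illustrative, not a prescribed format.

```python
# Minimal sketch of one audit record per important query. Field names and
# example values are illustrative only, not a prescribed format.
from dataclasses import dataclass, field

@dataclass
class SummaryAudit:
    query: str                      # the task or query as a user would type it
    summary_claims: list[str]       # what the AI summary actually says
    providers_named: list[str]      # the implied shortlist
    criteria_emphasised: list[str]  # the comparison frame the summary sets
    our_page_assumes: str           # what our landing page assumes visitors know
    gaps: list[str] = field(default_factory=list)  # mismatches worth fixing

# Hypothetical example: the query, providers and claims are placeholders.
audits = [
    SummaryAudit(
        query="best <complex service> provider for <segment>",
        summary_claims=["Provider A is cheapest", "Provider B handles integrations"],
        providers_named=["Provider A", "Provider B", "Provider C"],
        criteria_emphasised=["price", "integrations"],
        our_page_assumes="visitor is starting research from scratch",
        gaps=["our landing page never addresses the primed criteria"],
    ),
]

for audit in audits:
    print(audit.query, "->", audit.gaps)
```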
This is not about “writing for the algorithm”. It is about understanding the mental context your visitors now carry.
A site designed for an AI-shaped journey often has to do three things well:
Make it easy for people to see that the useful parts of the AI summary are accurate and that you meet the criteria that matter.
Explain where the summary has oversimplified or missed important nuance, without lecturing the user.
Provide clear, evidenced corrections where the summary has misunderstood you or relies on outdated information.
That is a different job from “educate from first principles” or “hold the whole decision inside the site”.
From a Corpus perspective, AI summaries are one part of a wider upstream shift.
Our work usually starts by mapping:
Where decisions are actually forming for a given audience or task
Which upstream surfaces, including AI summaries, are shaping them
How much of that activity your current measurement can and cannot see
The goal is not to chase every search change. It is to build a clearer understanding of where decisions are actually made, and then align on-site work, experiments and investment with that reality.
