Experiments without a model

Abi Hough

Many teams now run constant experiments. Forms are tweaked, buttons are tested, copy is swapped. Dashboards fill up with charts and winners.

What is less clear is what all of this is supposed to be teaching.

Without a shared model of how people behave and why, experiments turn into a stream of local tweaks. Some changes help, some do not, but very little of it builds understanding. The same problems reappear in new places. Decisions remain hard to explain.

WHO IS THIS NOTE FOR
This note is for product, marketing and digital teams who:

  • Are running a steady volume of tests but cannot clearly state what they have learned this year
  • See local wins that do not add up to meaningful movement in the measures that matter
  • Feel pressure to “keep the tests running” even when the pipeline is thin on real questions

This field note looks at how experiments without a model show up in practice, why they are so common and what to do if you suspect your experimentation programme has drifted into busywork.


What a model is, and why it matters.

By a model we mean a simple, explicit view of:

  • Who you are designing for
  • What they are trying to do
  • How they currently behave
  • What you believe will change that behaviour and why

The model does not need to be perfect. It does need to exist.

Experiments are then ways of testing parts of that model. If you do not have one, tests tend to answer shallow questions:

  • “Does green beat blue here?”
  • “Does this headline get more clicks?”

The results are hard to apply anywhere else.

How experiments drift away from a model.

Even teams that start well can drift. Common patterns include:

  • Volume over clarity
    Success is measured by the number of tests run, not by what was learned.
  • Local ownership
    Individual teams or squads test in their own areas with little coordination.
  • Tool-led ideas
    The experimentation platform or analytics tool suggests “opportunities” that become tests.
  • Isolated metrics
    Each test is judged by a narrow metric, with little attention to wider effects.

Over time, experiments become a series of unrelated micro changes. The backlog is full, but the story about users is thin.

How “experiments without a model” show up.

The symptoms are familiar.

  • You can list winning tests, but struggle to summarise what you have learned about users.
  • Stakeholders ask for tests to “prove” decisions that have already been made.
  • Teams argue about which metric should win when results conflict.
  • The same kinds of issues keep reappearing in different journeys.
  • Decisions eventually fall back on senior opinion, even with plenty of test results.

From a distance it looks like a busy programme. From close up it feels like turning a crank.

NOTE
A high volume of experiments can make an organisation feel evidence-led. If no one can explain the assumptions behind those tests, it is not evidence. It is decoration.

Why common responses do not help.

When teams suspect something is off, there are a few usual responses.

01_ Tightening process, not thinking

More templates are added. Hypotheses are forced into standard wording. Review steps multiply.

Process discipline is useful. It does not create a model by itself. You can run very neat experiments that still answer shallow questions.

02_ Chasing higher win rates

Teams focus on test ideas that are likely to “win” and avoid those that may be neutral or negative.

This can improve local metrics, but often at the cost of learning. Many important questions do not have tidy positive outcomes in the short term.

03_ Centralising control

Ownership of experimentation is moved to a central team that approves or rejects tests.

That can reduce noise, but it does not guarantee that tests are linked to a shared understanding of behaviour. It can also slow things down without improving quality.

Better questions to ask.

Instead of asking “how do we run more tests”, it is often more useful to ask:

  • What are the key behaviours or decisions we are trying to influence
  • What do we currently believe about those behaviours and why
  • Which parts of that belief are based on evidence, and which are guesses
  • Which questions, if answered, would actually change what we do
  • How would we explain this programme to someone outside the team

If you cannot answer those questions clearly, the problem is the model, not the tooling.

An example.

A product team responsible for sign-ups and trial conversion had a healthy-looking experimentation programme.

Over eighteen months they had:

  • Tested dozens of pricing page layouts
  • Rotated through various social proof patterns
  • Experimented with button labels, form designs and nudges

The win rate was respectable. Local conversion metrics improved by a few percentage points.

When they tried to summarise what they had learned, the story was thin. They had:

  • A long list of specific “this version works better than that one” notes
  • No clear view of which user groups were struggling, or why
  • No shared model of how different channels and upstream expectations affected sign-up behaviour

Under pressure to keep “showing impact”, the backlog tilted toward low-risk cosmetic tests.

When the team stepped back and looked at sign-up behaviour by segment and upstream source, they discovered that a significant share of trials that looked healthy at the funnel level never activated in the product. Most tests had been aimed at the wrong part of the problem.

What a model-driven experimentation practice looks like.

A healthier pattern is simple, if not always easy.

  • There is an explicit, shared model of key behaviours and decision points.
  • Each test is linked to that model: which assumption it touches, which behaviour it addresses.
  • Results are recorded in terms of what they imply about the model, not just whether a metric moved.
  • Summaries over time focus on what the organisation now believes and why, not only on win rates.

The model will never be perfect. It will change as you learn. The point is to have something that experiments can refine.

Practical steps to reconnect experiments and models.

If you suspect your experiments have drifted, a modest reset can help.

01_ List the main behaviours that matter

For example: first purchase, repeat purchase, trial activation, plan change, cancellation.

02_ Write down what you believe now

For each behaviour, capture your working beliefs. What do you think drives it. What do you think blocks it.

03_ Tag existing tests against those beliefs

Look at the past few months of experiments. For each one, note which belief or behaviour it was meant to address.

04_ Highlight gaps and clutter

Identify behaviours with lots of tests but shallow understanding, and important behaviours with very few tests.

05_ Shape the next backlog from the gaps

Use the gaps to shape the next round of test ideas. Add research where needed, instead of forcing an experiment where there is no clear question.

This is not about rewriting your entire programme. It is about gently pointing experiments back at questions that matter.
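
To make steps 03 and 04 concrete, here is a minimal sketch in Python. Every behaviour, belief and experiment named in it is invented for illustration; in practice the same tagging exercise can just as easily live in a spreadsheet. The point is simply that once each test is tagged against a belief, a few lines of counting show where the backlog is crowded and where it is silent.

  # A minimal sketch of tagging experiments against behaviours and beliefs.
  # Every name and record here is hypothetical; a spreadsheet works just as well.
  from collections import Counter

  # Step 01: the behaviours that matter (illustrative list).
  behaviours = ["trial activation", "first purchase", "repeat purchase", "cancellation"]

  # Step 02: working beliefs, each tied to one behaviour (illustrative).
  beliefs = {
      "B1": ("trial activation", "new users stall because setup takes too long"),
      "B2": ("first purchase", "price clarity matters more than discounts"),
      "B3": ("cancellation", "cancellations follow a poor support contact"),
  }

  # Step 03: recent tests tagged with the belief they were meant to address.
  # None means no one could say which belief the test was testing.
  experiments = [
      {"name": "homepage hero copy v3", "belief": None},
      {"name": "pricing table layout", "belief": "B2"},
      {"name": "onboarding checklist", "belief": "B1"},
      {"name": "signup button colour", "belief": None},
  ]

  # Step 04: count tests per behaviour to expose gaps and clutter.
  tests_per_behaviour = Counter()
  untagged = 0
  for exp in experiments:
      if exp["belief"] is None:
          untagged += 1
      else:
          behaviour, _ = beliefs[exp["belief"]]
          tests_per_behaviour[behaviour] += 1

  for behaviour in behaviours:
      print(f"{behaviour}: {tests_per_behaviour[behaviour]} test(s)")
  print(f"tests with no clear belief behind them: {untagged}")

Even a crude count like this makes step 05 easier: the behaviours with zero tests, and the pile of tests with no belief behind them, are where the next backlog conversation starts.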

EXAMPLE:

One organisation realised that almost all of its experiments were focused on the top of the purchase funnel, because those were the easiest journeys to test.

When they mapped tests against behaviours, it became obvious that there were almost none aimed at first use or early repeat behaviour, even though retention was the real concern. The next quarter of work shifted a portion of the backlog toward that early usage window, supported by fresh research instead of yet another homepage test.

What this means for teams under pressure.

Experimentation is often sold as a way to de-risk decisions. In practice it can become another source of pressure.

When teams are judged by the number of tests they run or the size of short-term gains, it is hard to protect space for model building. Yet without that space, the tests become less useful over time.

A small number of well-designed experiments, linked to a clear model, will usually do more for performance and understanding than a large number of disconnected tweaks.

Where Corpus fits.

From a Corpus perspective, experiments are one part of a wider system of learning.

When we work with teams in this area, we typically:

  • Help clarify the key behaviours and decisions that matter for performance
  • Surface the assumptions that are driving current experiments, often implicitly
  • Connect upstream signals and on-site journeys into a clearer model of behaviour
  • Shape backlogs so that tests and research answer questions that would actually change decisions

The goal is not to run more experiments. It is to build enough shared understanding that when you do run them, you know why, and you know what to do with the results.

Talk about how this applies in your organisation.

If a field note resonates and you want to talk about how the same patterns are showing up where you work, a conversation can help.
Typical first conversations last 45 to 60 minutes and focus on understanding your current situation and constraints.
Contact
[email protected]

We Are Corpus is a consultancy created by Abi Hough and delivered through uu3 Ltd. Registered in the UK. Company 6272638