When to call an A/B test if your results aren't clear

3 steps to help you make a decision

Forward this email to someone with lots on their testing roadmap

Burning growth marketing questions? Let’s chat — book office hours here.

Experimentation as part of your growth strategy is all fun and games (not to mention critical) until you end up with some really unclear results.

Maybe that landing page headline test you were SURE would increase conversion is looking flat to the control, or the new creative you cycled in that your team worked on for a week isn’t getting any traction. What the heck do you do next?!

Great marketers need to be data-driven, but they can’t be exclusively data-driven, especially in a dilemma like this.

Here are three steps to help you triangulate how to proceed when your results aren’t crystal clear. If you’re used to relying exclusively on data, each successive step will feel a bit more uncomfortable, but I promise these are all important skills to hone.

😀 The most comfortable: Use data first, and practice data honesty

Even if it turns out data isn’t the only way you get to an answer, it should be the first line of defense.

This means setting your experiment up for analytical success from the jump. Before you make any changes, you should have a hypothesis about the metric you’re going to impact and a pretty solid idea of how you’ll measure that impact.

Aside from knowing how you’re going to be able to determine success, you should also keep in mind the concept of data honesty: looking at the whole picture, without any bias.

It’s easy to want the tests we come up with to be winners, and to look at the data a certain way to try to confirm our hypotheses. This tends to happen more frequently on smaller, scrappier marketing teams, where the same people devising the testing roadmap are also executing tests and analyzing results. Zooming out here is key: it allows you to review the full impact of your results without the inherent bias of wanting your hard work to pay off.

Make sure you can understand how your test impacts all the relevant points in your funnel, and that you can clearly convey the “so what,” so your first line of defense can be a clear reading of the data at your disposal.

🙂 Somewhat less comfortable: Look for directional signals

If you’ve started with the data-first approach above, but haven’t reached statistical significance or otherwise don’t have a clear result, giving your test more time is probably not the answer. The “right amount of time” for a test to run is entirely dependent on your budget and traffic levels, but if you’re totally unsure, aim for two weeks and see what volume looks like.
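
If you want a rough sense of whether two weeks of your traffic can even support a verdict, a quick power calculation helps. Here’s a minimal sketch in Python using only the standard library; the baseline conversion rate and hoped-for lift are hypothetical numbers you’d swap for your own:

```python
from statistics import NormalDist

def visitors_needed(baseline_cvr: float, relative_lift: float,
                    alpha: float = 0.05, power: float = 0.80) -> int:
    """Rough per-variant sample size to detect a relative CVR lift
    with a two-sided, two-proportion z-test."""
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2) + 1

# Hypothetical inputs: 3% baseline CVR, hoping for a 10% relative lift
print(f"{visitors_needed(0.03, 0.10):,} visitors per variant")  # ~53,000
```

If two weeks of traffic won’t get you anywhere near that per-variant number, that’s a strong hint you’ll be reading directional signals rather than waiting on stat sig.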

If two weeks, or whatever amount of time you’ve dedicated to your test, has garnered a good enough volume of data, but not a clear answer, look for directional signals in the data rather than definitive results. If you’re confident that you have enough data to normalize for any noise, and test results are fairly close to your control, more time isn’t going to get you closer to stat sig.

Instead, understanding what sorts of directional insights you can gather allows you to remain nimble. One example:

  • You run a creative test meant to drive a CVR improvement

  • The test has a much better CTR than the control, but a virtually flat CVR

  • The test may still be directionally better than the control, presuming the CTR increase offsets costs and you have other levers to impact CVR (see the back-of-envelope sketch after this list)
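
To make that “offsets costs” step concrete: if you’re buying impressions on a CPM basis, a higher CTR lowers your effective cost per click, so even a flat CVR yields cheaper conversions. A back-of-envelope sketch, with hypothetical CPM, CTR, and CVR numbers (not from any real test):

```python
def cost_per_conversion(cpm: float, ctr: float, cvr: float) -> float:
    """Cost per conversion when buying impressions at a fixed CPM:
    1,000 impressions cost `cpm` and yield 1000 * ctr * cvr conversions."""
    return cpm / (1000 * ctr * cvr)

# Hypothetical numbers: $20 CPM, flat 2% CVR, CTR improves 1.0% -> 1.4%
control = cost_per_conversion(cpm=20, ctr=0.010, cvr=0.02)  # $100.00
test = cost_per_conversion(cpm=20, ctr=0.014, cvr=0.02)     # $71.43
print(f"control ${control:.2f} vs. test ${test:.2f} per conversion")
```

Same CVR, meaningfully cheaper conversions: exactly the kind of directional signal worth acting on.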

Another simplified example, for the visual learners:

[Image: super simplified landing page test data]

It looks like the test performed slightly worse, but these results are close, and not stat sig. Run your numbers through a stat sig calculator like this one to see whether you can rely on the results to hold. If not, you can say that directionally, there’s no lift from this test, and if the changes you made in your test version aren’t necessary, you can safely scrap them and move on.
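
If you’d rather check significance yourself than lean on an online calculator, the standard tool for comparing conversion rates is a two-proportion z-test. A minimal, standard-library-only sketch, with hypothetical visitor and conversion counts standing in for the data above:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates,
    using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical close results: 150/5,000 conversions on control,
# 141/5,000 on the test variant
print(f"p-value: {two_proportion_p_value(150, 5000, 141, 5000):.2f}")  # ~0.59
```

A p-value that far above 0.05 means the gap between variants sits well within noise, which supports the “directionally, no lift” call.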

Keep in mind that while not every test will give you a clear result, if none of your tests produce a definitive answer, you may not be taking big enough swings with your testing.

A good exercise to clear this “testing block,” which often sets in when you’ve been close to your product for a long time, is to step back and revisit the psychological basics of why your audience converts in the first place. Then, go back to the drawing board and design your tests around those basics.

😬 Uncomfy, but so important: Don’t discount logic and intuition

Most growth marketers are so used to relying on data that “going with your gut” can feel like sacrilege. But once you’ve analyzed all the data and looked for directional insights without a clear answer, this is your final step.

Presuming you know your product, audience, and the dynamics of your business well, you should feel comfortable using logic and intuition to make decisions that lean more qualitative. Sometimes you just don’t have the data to support the results of a change that was made, and NOT making a decision on how to proceed is the worst possible outcome.

Let’s say you run an A/B test where you add some social proof to a landing page without disrupting your hero or main CTA, and find that the conversion rate of your test is slightly lower than your control. Logically, adding social proof where there was none should only help conversion. One conclusion might be that there’s a more impactful way to convey this information. At the same time, you can be confident in your own judgment that the addition of social proof is unlikely to be the reason for a slight CVR decrease, and that what you’re noticing is coincidence rather than causation.

It’s also possible that while your top-of-funnel CVR declines slightly, your LTV increases down the line, but it’s unrealistic to wait around for that data to materialize while your test keeps running in the background.

When you get comfortable making decisions based on logic in the absence of clear data, not only will you be able to move more swiftly through your testing roadmap, but you’ll become a much more confident marketer.

✨ One marketing thing: Meta is no longer allowing detailed targeting exclusions, and will stop serving ad sets that use them as of January 2025.

✨ One fun thing: I’m almost always in the “if I want dessert I’m going to eat a real dessert” camp, but this “healthier” chocolate mousse is changing all that…

Was this message forwarded to you? Need a little Growth Therapy? Sign up below 👇

Questions? Comments? Topic requests? Just hit reply ↩