The framework you can't test without

Test or die!

Happy Wednesday! Let’s dive right in, because this topic is a big one: TESTING!

No, not that kind of testing. But I do wonder if the kids now know what a #2 pencil is…

If you’re just starting out with new marketing initiatives, congrats! You have an endless road of high-impact tests stretching out in front of you.

If you’ve been running a successful, scaled-up marketing function for a long time, congrats to you too! Even if you’ve taken all the big swings, always be testing — there will always be incremental wins you can find as macro factors and consumer preferences change over time.

Why do I need to always be testing?

Running a test on any part of your marketing funnel will help you understand if there are better ways to interact with your users and improve your performance, just by making some changes. With paid marketing costs rising, consumer attention more challenging than ever to capture, and competition at an all-time high, why wouldn’t you want to squeeze more out of your traffic?

This is also the time to check your ego at the door and challenge some of your assumptions — especially if you’ve never actually proven them with data.

Take this example, using round numbers for ease:

Say your site gets 100,000 visitors a month from paid traffic. Your conversion rate on that traffic is 2%. That means 2,000 conversions a month (100,000 × .02 = 2,000).

Now let’s say you run a test on your paid landing page where you completely revamp the hero section, based on a hypothesis you have about customer behavior (more on this below). You see a 25% increase in conversion rate (HUGE)! That means your conversion rate is now 2.5% (.02 × 1.25 = .025) and you’ve generated 500 incremental conversions (100,000 × .025 = 2,500, up from 2,000).

That means, all else equal, you’re getting 25% more results for the exact same ad dollars. If you’re still with me, you know this means you’ve decreased your CPA by 20% (same spend, 1.25× the conversions), which is enough to make you a hero 🥇
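
Here’s that arithmetic as a quick sketch in Python (the $10,000 monthly spend is a made-up number for illustration; only the ratios matter):

```python
# Same spend, 25% relative lift in conversion rate (CVR).
spend = 10_000                      # illustrative monthly ad spend, $
visitors = 100_000
cvr_before = 0.02
cvr_after = cvr_before * 1.25       # 2.0% -> 2.5%

conversions_before = visitors * cvr_before   # 2,000
conversions_after = visitors * cvr_after     # 2,500

cpa_before = spend / conversions_before      # $5.00 per conversion
cpa_after = spend / conversions_after        # $4.00 per conversion

print(f"CPA: ${cpa_before:.2f} -> ${cpa_after:.2f} "
      f"({1 - cpa_after / cpa_before:.0%} decrease)")
# CPA: $5.00 -> $4.00 (20% decrease)
```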

And aside from your time and brain power, there were no hard costs associated with running this test.

What should I test, and how should I do it?

The question should really be, what can’t you test? Test your creatives and ad copy, your landing pages, your paid campaign structure, your targeting, your messaging, your site experience, your onboarding flow, your email headlines and content, your pricing, and any other element of the user experience you can.

To get started with a test, there are some key elements every test needs. Do not pass go until you have these things; otherwise, you’re going to be throwing spaghetti at the wall while treading water (bad):

  • A hypothesis. There’s no need to get ultra-scientific, but generally your hypothesis should follow a “We believe if we change x, y will be the result” format. To level up further, position your hypothesis through the eyes of your user: “We believe customers will do y if we change x, because of z evidence.” I got this advice from the talented Libby Weissman, and it’s stuck with me since — we are marketing to people, after all.

    • Don’t fall into the trap of trying to retrofit a hypothesis to a test, just because you decide you want to make a change on your site. Your hypothesis is what should drive your testing ideas, not the other way around.

  • A KPI and Do-No-Harm metrics. Running a test just to “see what happens” sets you up to fail from the start. Identify the single KPI you’re trying to impact, based on your hypothesis, as well as the metrics that you can’t harm for this test to be a success. For example, if you’re trying to increase landing page CVR (conversion rate), you don’t want to see a drop in your AOV (average order value) or RPS (revenue per session) that negates your CVR increase.

  • The ability to properly isolate, test, and track that KPI. A mechanism that allows you to run a true A/B test is ideal, but you can also get away with a before-and-after test if you reduce other variables as much as possible. You’ll also need the data and analytics infrastructure to track and understand what’s happening to your KPI.

  • A large enough audience, time horizon, and budget, if applicable, to gather good data about your test. Notice I didn’t say statistically significant data. Of course reaching stat sig is ideal (see the sketch after this list for a quick way to check), but it can also be a trap that prevents you from moving as quickly as you need to. If the results between your test and control are close, more time isn’t going to get you to stat sig, and it’s a good indicator that your test wasn’t all that impactful to begin with. Rely on directional signal when necessary, ensuring you have enough data that you can trust those signals will hold.
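
If you want a quick gut check on whether a result is real or just noise, a two-proportion z-test is the standard tool for comparing conversion rates. Here’s a minimal sketch in Python (the session and conversion counts below are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def cvr_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value

# Control: 2.0% CVR; variant: 2.2% CVR, both on 50,000 sessions.
p_a, p_b, p_value = cvr_significance(1_000, 50_000, 1_100, 50_000)
print(f"control {p_a:.2%} vs. variant {p_b:.2%}, p = {p_value:.4f}")
# control 2.00% vs. variant 2.20%, p = 0.0274
# p < 0.05 is the usual bar for stat sig; a p-value that stays well above
# it after plenty of traffic means you're working with directional signal.
```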

Ok, I have lots of ideas! Where should I start?

Ice, ice, baby.

Prioritizing and tracking all your different testing ideas is key. This is where ICE comes in:

Impact, Confidence, Effort.

For each hypothesis you want to test, you’ll rate each of these categories on a scale from 1 to 5.

  • Impact: With 5 being the highest, how large do we estimate the impact of this test to be on the department or company’s north star KPI? Often this will be something like revenue, new customers, or whatever your performance goals are based on.

  • Confidence: With 5 being the highest, how confident are we this test will be successful?

  • Effort: How much effort do we estimate this test will be to scope and implement, with 5 being the least effort? 

    • How you determine “effort” will be specific to your org. You’ll want to align on one consistent scale, but effort could mean time to implement, whether dev and/or design support will be needed, or the costs involved.

Now let’s say I have 2 tests in my backlog.

Test 1 gets:

  • A 4 for impact (this could be BIG)

  • A 3 for confidence (there’s about a 50% chance it’ll be successful)

  • A 1 for effort (it’s going to require a lot of dev resources to implement)

  • 4 + 3 + 1 = 8

Test 2 gets:

  • A 3 for impact (it could drive a nice amount of revenue)

  • A 2 for confidence (there’s about a 30% chance it’ll be successful)

  • A 5 for effort (I can get this implemented today, with no additional resources needed)

  • 3 + 2 + 5 = 10. This test gets prioritized above test 1.

Track all of your testing ideas in a sheet, assign ICE scores to each line, then sort highest to lowest. There’s your testing roadmap.

Here’s what this looks like in practice (steal this framework):
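
A minimal sketch in Python (the ideas and scores below are made up for illustration; a spreadsheet with the same columns works just as well):

```python
# ICE backlog: rate each idea 1-5 on Impact, Confidence, and Effort
# (5 = least effort), sum the three, and sort highest first.
backlog = [
    # (idea, impact, confidence, effort) -- all illustrative
    ("Revamp landing page hero",     4, 3, 1),
    ("Test new email subject lines", 3, 2, 5),
    ("Simplify checkout copy",       2, 4, 3),
]

for idea, impact, confidence, effort in sorted(
    backlog, key=lambda row: row[1] + row[2] + row[3], reverse=True
):
    print(f"{impact + confidence + effort:>2}  {idea}")
# 10  Test new email subject lines
#  9  Simplify checkout copy
#  8  Revamp landing page hero
```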
