Tag Archives: Split Testing

The Dark Side of A/B Testing: How to Test Your Site with a Bazooka

Testing a website is the most popular pastime for digital analytics experts. However, testing the right thing is harder than ever.

Website optimisation relies heavily on A/B or multivariate testing with digital analytics tools. When it comes to testing, however, most experts compare only minor variations of a site. Testing pays off only when you test substantial, tangible differences; that is where you can learn something and open new frontiers. So instead of burning through your site with a bunch of matchsticks, use a bazooka and start some real flames.

Test What Matters and Test Less

Real productivity comes when you leave your comfort zone

You have read this many times, we guess: your life starts when you leave your comfort zone. Well, the same goes for marketing. Technology has made us busier and boosted our productivity, but it has also drowned us in details. Meetings, chat, endless notifications: these are the time killers, and we haven’t even mentioned social media. Being busy is nothing; being productive is something. If you get lost in the little details, you end up with non-productive testing: you compare two ad copies with only minor changes between them and draw conclusions where there are none, only pure randomness.

Test less and test what matters

Now that you have stepped away from the little details, you can focus on the real issues. Test only what matters, and test far less. By no longer paying attention to details that don’t matter, you free yourself to test the important stuff.

Testing is, by another name, experimentation. And most scientific experiments are dead ends 99% of the time, producing inconclusive results. In the remaining 1%, the science team has a eureka moment, usually when they stray from the original focus of the experiment. It is the same with digital analytics: when you test small changes on a site, you usually end up measuring random noise. But when you test tremendous, tangible differences, you will likely learn something about your business and your customers.

Be Patient with your RPG

Science is a game of patience, and marketers are not scientists. An investment firm once studied its accounts and concluded that its most successful investors were either dead or inactive. The reason is that live, active traders are impatient and burn their positions with constant small actions. It’s the same with testing. So when you set up a testing experiment, define your goals, load a tangible difference into the chamber and start the test.

Don’t even look at the test until you have reached your goal. Don’t change the variables. Don’t change the tested markers. Don’t even open the testing chamber. Be patient. Draw conclusions only when the testing has finished, and you have all your figures.
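One way to force patience is to fix the finishing line before you start: estimate how many visitors each variant needs, and don’t look at results until you have them. The sketch below uses a standard two-proportion sample-size approximation; the baseline conversion rate and target lift are invented numbers, not benchmarks.

```python
import math

def sample_size_per_variant(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative lift
    with ~95% confidence and ~80% power (standard two-proportion formula)."""
    p_var = p_base * (1 + lift)          # conversion rate we hope to reach
    p_avg = (p_base + p_var) / 2
    numerator = (z_alpha * math.sqrt(2 * p_avg * (1 - p_avg))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_var * (1 - p_var))) ** 2
    return math.ceil(numerator / (p_var - p_base) ** 2)

# Invented example: 3% baseline conversion, hoping to detect a 20% relative lift
print(sample_size_per_variant(0.03, 0.20))
```

Notice how quickly the requirement grows as the lift you want to detect shrinks: small-detail tests demand enormous traffic, which is exactly why testing big, tangible differences finishes sooner.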

Remember: testing needs a perfectly clean environment

Most experts fail in their tests because the environment they test in is imperfect. The site they are testing has huge usability issues. The testing material isn’t ready yet. There is nothing to be tested. The traffic isn’t there, so the figures are so small that drawing conclusions would be pure guesswork.

Be ready first and sort your stuff out. Only test when you need to develop even further. Going further means you already have a pretty solid foundation: a good site with a high number of visits and only minor usability issues.

All in all, these are the simple steps you need to follow when you do testing:

  • Test with an RPG: test only vast, tangible differences
  • Test only with a scientist’s mindset: be patient and be ready to fail
  • Test only in a clean environment: your lab/site should be stable and established

Learn more about A/B Testing and Multivariate Testing. Visit the Insights Academy Today!

Book an on-demand Q&A with us where we answer all questions you may have about this topic.

Get your FREE copy of our 2017 digital trends ebook and SUBSCRIBE to our newsletter!

How to Run an A/B Test with Google Analytics

We’ve covered the basic introduction to A/B Testing and why it’s important to your business as well as mistakes commonly made in A/B testing in the first two articles. Today, we’ll run you through the process of setting up an A/B Testing experiment.

There are various A/B Testing tools available if you search around, most of which are paid software. But since not everyone is ready to shell out money just to test them out, we’ll highlight one of the best-known free tools: Google Analytics Content Experiments.


What are Content Experiments?

To explain better, here’s how Google Analytics describes the Content Experiments process.

Content Experiments uses a somewhat different approach than standard A/B and multivariate testing. Content Experiments uses an A/B/N model. You’re not testing just two versions of a page as in A/B testing, and you’re not testing various combinations of components on a single page as in multivariate testing. Instead, you are testing up to 10 full versions of a single page, each delivered to users from a separate URL.

While it’s mentioned that you can test up to 10 full variations, we would still recommend keeping it to around 2-3 only.
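Conceptually, an A/B/N experiment just splits incoming visitors across the N page URLs. The sketch below illustrates one common way to do that, deterministic hash-based bucketing, so a returning visitor always sees the same variation; this is an illustration, not a description of how Content Experiments assigns traffic internally.

```python
import hashlib

def assign_variant(user_id: str, variant_urls: list) -> str:
    """Deterministically bucket a visitor into one of N variant URLs."""
    # Hashing the visitor id keeps the assignment stable across visits
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return variant_urls[int(digest, 16) % len(variant_urls)]

urls = ["/landing", "/landing-v2", "/landing-v3"]
print(assign_variant("visitor-42", urls))  # same visitor, same URL every time
```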

Before we lay out the steps to setting up an A/B Test experiment, be sure that you have properly set up your Google Analytics account and that you have your goals defined. Need help? Feel free to connect with us!


What to Prepare Before Setting Up the Experiment

  • Test Objective
  • Original page
  • Variation
  • Analytics tracking code properly added to the pages

How to Create a New Experiment



  1. Sign into your Google Analytics account
  2. Choose the website you wish to test
  3. Go to Reporting tab
  4. Go to Behavior
  5. Click Experiments

You will be asked to set the following information:



  1. Name for this experiment
  2. Objective for this experiment
  3. Percentage of traffic to experiment
  4. Distribute traffic evenly across all variants
  5. Set a minimum time for the experiment to run
  6. Set the confidence threshold
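The confidence threshold in step 6 is the bar a variation must clear before a winner is declared. As a rough illustration of what such a figure means, here is a textbook two-proportion z-test; the visit and conversion counts are invented.

```python
import math

def confidence_b_beats_a(conv_a, n_a, conv_b, n_b):
    """One-sided confidence that variation B's true rate exceeds A's
    (normal approximation to a two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_b - p_a) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Invented figures: A converts 300/10000 visits, B converts 360/10000
print(f"{confidence_b_beats_a(300, 10000, 360, 10000):.1%}")
```

If the figure printed is below your chosen threshold (say 95%), the honest conclusion is “not proven yet”, not “B lost”.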

After defining the objectives, you will be taken to the page setup screen.

You will be asked to add the URL of the original page and of the variations. Page previews will be shown as thumbnails to help you make sure you set the right URLs.

After clicking ‘Next’, you will be prompted to set up your experiment code.

This section lets you implement the code manually to run the test, or gives you the option to email it to your webmaster.


After clicking ‘Next’, you will be able to review and validate the experiment code. Just click on ‘Start Experiment’ and your A/B Test will begin to run.

Feel free to Connect with us if you wish to know how we can help you with your A/B Testing experiments!


Top A/B Testing Mistakes

Making blind changes to your website can hurt your business. What you think is a good idea may do more damage than good, which is why A/B Testing is always the right thing to do before going live with updates and changes.


Nowadays, almost anyone can perform A/B Testing, thanks to the user-friendly tools in the market. But just because it’s easy to do does not mean that setting up is all it takes.

Today, we’ll be discussing some of the top A/B Testing mistakes you should avoid.

1. Testing Without a Hypothesis

A/B Testing should never start without an established hypothesis. The hypothesis should be based on real data and careful analysis, not some random idea.

2. Ending a Test Too Early

Ending your test before it reaches statistical significance is another common mistake in A/B Testing. Statistical significance is reached when one version proves to be significantly better than the other. And no, a test does not just happen in a day. Experts advise running it for at least a 7-day period and waiting for a 95% confidence level.
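Why is stopping early so dangerous? Because if you check the results every day and stop the first time the gap looks significant, even two identical pages will produce a “winner” far more often than the nominal 5%. The simulation below (invented traffic figures and a crude z-test) illustrates the effect with an A/A test:

```python
import math
import random

random.seed(1)

def significant(conv_a, conv_b, n):
    """Crude pooled two-proportion z-test at ~95% two-sided confidence."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return False
    return abs(conv_a / n - conv_b / n) / se > 1.96

def run_aa_test(daily_visitors=500, days=14, rate=0.05, peek=True):
    """A/A test: both 'variants' share the same true rate, so any
    significant result is a false positive."""
    conv_a = conv_b = n = 0
    for _ in range(days):
        n += daily_visitors
        conv_a += sum(random.random() < rate for _ in range(daily_visitors))
        conv_b += sum(random.random() < rate for _ in range(daily_visitors))
        if peek and significant(conv_a, conv_b, n):
            return True  # stopped early on a spurious winner
    return significant(conv_a, conv_b, n)

TRIALS = 300
peeking = sum(run_aa_test(peek=True) for _ in range(TRIALS)) / TRIALS
waiting = sum(run_aa_test(peek=False) for _ in range(TRIALS)) / TRIALS
print(f"false-positive rate with daily peeking: {peeking:.0%}")
print(f"false-positive rate waiting 14 days:    {waiting:.0%}")
```

The peeking strategy declares a winner several times more often than the patient one, even though both pages are identical.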

3. Implementing Someone Else’s Results

Blindly copying and implementing someone else’s results does not automatically mean that you will get the same results. What worked for them may not exactly work for you. Whether or not a change will work for you is something you will never know if you do not run the test yourself.

4. Testing with Too Many Variables

While it’s possible to test many variables at once, you may have trouble determining which of the variables is improving or hurting conversions.
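With full multivariate testing the problem compounds fast, because the number of combinations is the product of each variable’s options, and every combination needs its own share of traffic. A tiny illustration with invented page elements:

```python
from math import prod

# Hypothetical page elements and the number of options for each
variables = {"headline": 3, "button colour": 2, "hero image": 4}

combinations = prod(variables.values())
visitors_per_cell = 2000            # invented traffic needed per combination

print(combinations)                      # 3 * 2 * 4 = 24 page versions
print(combinations * visitors_per_cell)  # 48000 visitors just to fill the grid
```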

5. Running Too Many Tests

Once you get the hang of A/B Testing, you might find yourself enjoying the process. But beware of launching too many tests at once just because you see plenty of opportunities for improvement and think it will save time. Overlapping tests can contaminate each other’s results.
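There is also a statistical reason: at a 95% confidence level, each test carries roughly a 5% chance of flagging a difference that is pure noise, so the more tests you run, the more likely at least one spurious winner appears. A quick calculation (pure arithmetic, assuming independent tests):

```python
def chance_of_false_positive(num_tests, alpha=0.05):
    """Probability that at least one of num_tests independent tests
    flags a difference purely by chance."""
    return 1 - (1 - alpha) ** num_tests

for n in (1, 5, 20):
    print(f"{n:>2} tests -> {chance_of_false_positive(n):.0%} chance of a false winner")
```

At 20 simultaneous tests, a misleading “winner” is more likely than not.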

Connect with us if you’re looking to learn more about A/B Testing and would like to know how it can help your business.


A/B Testing and Why You Need It

More often than not, for various reasons, marketers rely on the Conversion Rate Optimisation (CRO) best practices shared all over the Internet instead of running tests themselves. While that’s fine, it’s still much better to have first-hand data on what works, what doesn’t and what works better.

This is where A/B testing comes in.



A/B testing, sometimes called Split Testing, is a controlled experiment where two versions of a web page design, an ad or a newsletter are compared and tested for performance. It’s a great optimisation tool, whether for PPC text, call-to-action buttons, colour schemes, fonts, font sizes or UI, among others.

There are many A/B Testing tools out in the market ranging from free to paid versions with various features.


A/B Testing is done by comparing two or more variations of a page, ad, newsletter, etc. through an experiment to determine which variation performs better against a conversion goal.
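At its simplest, that comparison boils down to a conversion rate per variation and the relative lift between them. A minimal sketch with invented figures:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors

# Invented traffic: version A vs version B of the same landing page
rate_a = conversion_rate(120, 4000)       # 3.0%
rate_b = conversion_rate(168, 4200)       # 4.0%
lift = (rate_b - rate_a) / rate_a

print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  relative lift: {lift:+.1%}")
```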



Here’s why you need A/B Testing:

  1. A/B testing reveals your users’ behaviour and reaction to your marketing. This enables you to change your layout, content or design according to how users respond and, eventually, convert.
  2. A/B testing before a redesign saves you time and effort that is better spent optimising your conversion rate.
  3. A/B testing gives you actionable data through continuous, comparative testing.
  4. Designing, redesigning and arranging the layout is better done with data-driven results from A/B testing than with guesswork and grasping at straws.
  5. Doing the test yourself instead of following somebody else’s A/B Test results lets you tailor your marketing to your brand, marketing needs and goals.

Connect with Walter Analytics if you’re keen to know more about A/B Testing and how it can improve your marketing efforts.
