6 Mistakes People Make While Running Split Tests, and How to Avoid Them

According to Statista, mobile traffic has surpassed desktop traffic. Unfortunately, many people make the mistake of choosing not to split test with mobile traffic.
While test frequency is a mark of CRO program maturity … quality should never be compromised for the sake of quantity. The happy balance is somewhere in the middle.
The expert marketers at zipjob advise people to collect useful statistical data, evaluate the outcome, and then run an informed test based on the information collected.

Mobile users are inherently more impatient in their browsing habits. They may not have the patience to scroll through long-form sales pages or home pages chock-full of links.
Traffic coming to your site is spread randomly between the two pages, and the performance of each page is tracked and analyzed. The results are then compared to determine the winner.
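The random-split step above can be sketched in a few lines. This is a minimal illustration, not any particular tool's implementation: it hashes a (hypothetical) visitor ID so the same visitor always lands in the same variant for the duration of the test.

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically bucket a visitor into one of the test variants.

    Hashing the visitor ID (rather than picking at random on every
    page load) guarantees a returning visitor always sees the same
    version, which keeps the measurement consistent.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Real split testing platforms do this for you (plus cookie handling and traffic allocation), but the principle is the same: every visitor is assigned to exactly one version, and conversions are tallied per version.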
There are plenty of best practices and case studies out there. And they serve as inspiration for testers.
But another mistake marketers make is to anticipate huge results from small changes.
Just avoid these 6 pitfalls and you should be fine!
Once the test is done, you can identify which version drove visitors to take action.

What is Split Testing and How does it Work?

This way, the two versions are shown to visitors throughout the testing period and this increases the likelihood of getting relevant results.
ALSO, A TRUTH: Not all businesses are getting the most out of it. A study has shown that only 12.5% of split tests yield better results.
Given that traffic quality varies by day of the week and that the economic climate changes constantly, running before-and-after tests is unlikely to give you reliable results.
An example of a change that produced great results is when Crazy Egg changed their homepage to a lengthy sales letter. After this change delivered a huge win, they tested their call-to-action buttons and made several other changes to increase conversions.
In fact, many companies that run split tests for the first time complain that they don’t get the anticipated lift from their efforts.
There are plenty of split testing tools out there – Convert Experiences, for example, is a platform that makes split testing easy and cost-effective.

Now That You Know What Split Tests are All About, Let’s Look at Some Common Mistakes that Derail Efforts

1. Pre-Testing and After-Testing

Split testing is a conversion rate optimization practice where marketers compare two completely different versions of a web page to identify which one provides better results according to explicit metrics.
While 64 percent of companies agree that split testing is easy to execute, 7 percent say it’s not easy to perform the tests (source: Finance Online).
If you have started running split tests on your own, congratulations! That will help you make informed improvements that can lead to a huge increase in revenue.
While this may seem like a no-brainer, and the phrase “A/B testing” is starting to catch on, there is a lot of misconception around split testing: what it is, how it differs from its more popular cousin, and the pitfalls businesses should expect to find (and avoid) on the journey to growth with conversion rate optimization.
This applies to other classes of experimentation like A/B testing as well.
The aim of running a split test on a website is to improve conversions.

2. Implementing Other People’s Ideas

According to Edu Birdie, sometimes modifying the headline or changing the color of the call-to-action button can bring better results – but not always.
A good rule of thumb is to test radical changes to ascertain how they impact conversions. Once you have identified how a change increases conversions, you can continue making adjustments until you see great results.
The whole purpose of a split test is to gauge the general preference of your audience. So, use the flexibility to test radically different pages and versions.
You could have tons of people visiting your website but it’s not recommended that you run several tests within a short time frame.
You can also run a split test to compare your landing page and homepage to identify which one better converts visitors in the context of a quantifiable goal. Like sign-ups. Or purchases.

  • You defeat the purpose of testing, which is to remove bias for your specific case and move forward with a confident, data-backed decision. In essence, you don’t “test”. You might as well replicate what the other site has done.
  • You miss out on exploring low-hanging fruit for your business. The site you are emulating may have pitted two landing pages against each other. But in your case, maybe pitting your landing page against your home page makes better sense, given how your traffic finds you. (Look at your analytics setup to determine this!)

3. Anticipating Great Results from Minor Changes

Let’s get rolling.
After all, when will you be able to collect statistical data, analyze it, evaluate the results, and run another test? You might end up not measuring results accurately and, worst of all, not knowing what decision to make.
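The “analyze it, evaluate the results” step usually comes down to a significance check: is the difference between the two versions real, or just noise? Here is a minimal sketch using a standard two-proportion z-test with only the Python standard library. The function name and sample numbers are illustrative, not from any specific tool.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of variants A and B.

    conv_* = number of conversions, n_* = number of visitors.
    Returns (z statistic, two-sided p-value). A small p-value
    (commonly < 0.05) suggests the difference is not just noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the "no difference" hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf, then two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 100 conversions from 1,000 visitors on A versus 150 from 1,000 on B yields a p-value well below 0.05, so you could call B the winner; with equal rates, the p-value stays near 1 and the test is inconclusive. Dedicated split testing platforms run this kind of calculation for you automatically.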

As seen in the Papers Owl reviews, almost all content management platforms offer split testing features. Nevertheless, if you want to run a serious split test, you will get better results and a better experience with a tool designed specifically for the job.

Yes, you read that right. The brand that has become a verb fully endorses data-backed decision making over gut instinct.
Many people make the mistake of assuming that add-ons or plugins are enough to run serious split tests. This is rarely the case.
When you run pre-tests and after-tests, you cannot ascertain how relevant the results are, and this may leave you without conclusive data about the best version.
This change in traffic could hurt conversion rates and that’s why it is recommended to avoid running before and after tests but to test two or more versions simultaneously.

4. Using the Wrong Tool

This is because of two reasons:
Split test your regular page against a version made especially for mobile traffic. This alone may add several conversion percentage points toward your micro and macro goals.
This one is a no-brainer but worth reiterating.

5. Ignoring Mobile Traffic

Well, you don’t have to make huge changes to see great results. You need to make reasonable changes backed by data.
For instance, one webpage could have a call-to-action above the fold, while the other has a call-to-action below the fold.
When you fail, remember the words of renowned CRO expert Oli Gardner, who once said: “The best thing about a failed (A/B) test is that it kicks you in the crotch, reddens your cheeks, and makes you try harder next time.”

6. Frenzied Testing

That means different versions are tested for different periods.
Once the trend is clear, A/B testing and multivariate testing can help zero in on the granular aspects of what works better for your traffic.
But at no point of time should the work or the result of another business inspire your split tests.
This is perhaps the most common mistake businesses make while running a split test. To get reliable results, you need to test two versions without changing them during the course of the experiment. With pre-testing and after-testing, by contrast, marketers measure conversions for a given period, make adjustments, and then measure conversions again for another period.

Final Thoughts

Well, getting great results with your split tests isn’t a walk in the park. You must know the most common mistakes marketers make and how to avoid them, and that’s exactly what we will talk about in this article.
In 2011, Google ran more than 7,000 split tests on its algorithm!
Happy split testing!
TRUTH: Split testing is a popular form of conversion rate optimization testing for marketers these days.