

Is AI killing A/B split testing?

Marketers use A/B split testing to help them in decision making, but artificial intelligence can help fine-tune this method. Here’s how.


A/B split testing involves showing two groups of people different versions of an app or webpage, then measuring which version gets the better response. Marketers depend on A/B testing to decide which changes resonate most with their audiences.
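To make the mechanics concrete, here is a minimal Python sketch of how a two-variant test is often evaluated. The visitor and conversion counts are hypothetical, and the two-proportion z-test is just one common way to judge whether the measured difference is real:

```python
import math

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Compare conversion rates of variants A and B with a two-proportion z-test."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical traffic split: 5,000 visitors per variant.
p_a, p_b, z = two_proportion_z(400, 5000, 460, 5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")  # |z| > 1.96 ~ significant at 95%
```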

Artificial intelligence (AI) can measure those responses, too. As a result, people are starting to wonder whether AI has made A/B split testing obsolete. In this article, we’ll go over how valuable both are in marketing and why professionals shouldn’t write off A/B testing just yet.

A/B testing has some known shortcomings

Although A/B testing is useful in some scenarios, it has numerous downsides. For example, only a small segment of the audience participates in the test, and marketers assume the results represent the group at large. However, that’s not always the case.

Additionally, it’s difficult to pinpoint whether people responded differently because of the tested variations or because of some unknown variable that wasn’t accounted for during testing.
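The sampling risk is easy to quantify: with a small test group, the confidence interval around a measured conversion rate is wide enough that an apparent winner may be nothing but noise. A quick illustration in Python, using made-up numbers:

```python
import math

def conversion_ci(conversions, visitors, z=1.96):
    """95% normal-approximation confidence interval for a conversion rate."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return p - margin, p + margin

# Hypothetical: the same 8% conversion rate measured on samples of different sizes.
for visitors in (100, 1_000, 10_000):
    lo, hi = conversion_ci(round(visitors * 0.08), visitors)
    print(f"{visitors:>6} visitors: {lo:.1%} to {hi:.1%}")
# 100 visitors yields roughly 2.7% to 13.3%, far too wide to call a winner.
```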

Despite these potential disadvantages, websites can experience significant improvements after running A/B split tests and making changes accordingly. A 2015 case study of A Hume, a clothing retailer in the United Kingdom, documented several A/B tests the company ran and the benefits that followed, ranging from increased click-through rates to more items added to baskets.

1. AI could help marketers improve A/B testing

One advantage of AI over A/B testing alone is that an AI platform can analyze several variables at once instead of just one or a few. That makes testing more effective and surfaces audience preferences marketers might otherwise miss.
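The difference is largely combinatorial: a classic A/B test compares two fixed pages, while a multivariate platform evaluates combinations of element options, so the design space multiplies quickly. A toy Python sketch (the elements and options here are invented, not any vendor’s actual test plan):

```python
from itertools import product

# Hypothetical page elements and candidate variants for each.
elements = {
    "header":       ["tall", "compact"],
    "button_color": ["pink", "black"],
    "image_size":   ["large", "small"],
    "message":      ["brand culture", "pricing"],
}

# A full-factorial multivariate test evaluates every combination at once.
designs = list(product(*elements.values()))
print(len(designs), "designs")  # 2 * 2 * 2 * 2 = 16 in this toy example
print(designs[0])               # ('tall', 'pink', 'large', 'brand culture')
```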

Cosabella is an intimate apparel company that used the Sentient Ascend AI platform for website testing. During a seven-week test, the company depended on the technology to test four page elements, such as headers and image size, with a total of 15 changes implemented across those components. The platform enabled Cosabella to examine the effects of 160 different designs.

Cosabella saw a 38 percent increase in conversions. More importantly, the company realized the audience liked pink buttons more than black ones. Additionally, people had more favorable responses to messaging about Cosabella’s culture compared to its pricing.

Traditional A/B testing could have generated some of those findings, but it would have taken longer than seven weeks. AI can do some things better than humans, such as diagnosing illnesses and analyzing images. The improvements are possible because AI platforms sort through huge amounts of data and draw conclusions more quickly than people can.

Artificial intelligence can help measure responses in A/B split testing.

2. AI delivers personalized content based on user behaviors

When businesses engage in A/B split testing, the goal is often to see if small changes trigger increases in positive behaviors. AI takes a different approach: it observes how people behave first, then delivers content matched to those behaviors.

Chime, a digital bank, saw results with AI during a campaign to increase user signups. First, the team created various versions of the website that included differences in features and phrases. Then, it put the AI to work to find the best-performing websites for segments of the audience.

Finally, the AI technology sent people to specific websites based on their behavior or other characteristics. The analysis found some sites performed better with viewers in particular states or those on mobile devices. The time of day for a website visit was another variable.

Ultimately, AI helped Chime’s marketing team test 21 ideas and 216 website versions in only three months. Using A/B split testing without AI would have required nine years of work to get the same outcome.
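Conceptually, the routing step described above works like a segment-aware dispatcher: the system learns which variant converts best for visitors with a given combination of characteristics, then sends matching traffic there. A deliberately simplified Python sketch, with invented segments and variant IDs rather than Chime’s actual logic:

```python
# Map of visitor characteristics to the best-performing variant so far,
# learned from observed signups (values here are illustrative).
best_variant = {
    ("CA", "mobile", "evening"):  "v12",
    ("TX", "desktop", "morning"): "v07",
}

def route(visitor, default="v01"):
    """Send a visitor to the variant that has worked best for their segment."""
    key = (visitor["state"], visitor["device"], visitor["daypart"])
    return best_variant.get(key, default)

print(route({"state": "CA", "device": "mobile", "daypart": "evening"}))  # v12
```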

3. A/B tests still have value

The rise of AI-driven tests doesn’t mean A/B split tests are obsolete. In the Cosabella case, the marketing team wanted to test several variables. Because of the high level of variation, AI testing made sense.

However, A/B testing can work well for confirming a hypothesis. In a 2013 case study, Netflix ran A/B tests to see whether allowing nonsubscribers to browse the title selection would increase new signups. The idea came from user feedback, and Netflix’s design team thought it seemed like a sensible change.

The company ran five tests comparing a landing page that allowed content browsing against the original one that didn’t. The results surprised Netflix: the browsable version did not beat the original in any of the tests.

This example shows why A/B tests should not be cast aside. For smaller companies, AI technology may be too expensive an investment, especially for those that profit primarily from in-store traffic rather than website or app purchases. In those cases, traditional A/B tests are worth considering.

4. AI complements A/B split testing

Although AI can sift through larger quantities of data and reach conclusions more efficiently than A/B testing alone, it’s not always the superior choice. There are useful ways to employ both methods, as the sketch below illustrates.
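One plausible hybrid workflow (an assumption on my part, not a documented method) is to let a bandit-style algorithm explore many variants cheaply, then confirm the apparent winner against the control with a conventional two-arm A/B test. A minimal Python sketch:

```python
import random

# Hypothetical pool of eight page variants with running tallies.
variants = {f"v{i}": {"shows": 0, "conversions": 0} for i in range(8)}

def choose(epsilon=0.1):
    """Epsilon-greedy selection: explore occasionally, otherwise exploit."""
    if random.random() < epsilon:
        return random.choice(list(variants))
    return max(variants, key=lambda v:
               variants[v]["conversions"] / max(variants[v]["shows"], 1))

def record(variant, converted):
    variants[variant]["shows"] += 1
    variants[variant]["conversions"] += int(converted)

# After enough traffic, take the top variant into a two-arm A/B test
# (e.g., with the z-test sketched earlier) before rolling it out site-wide.
```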


DISCLAIMER: This article expresses my own ideas and opinions. Any information I have shared is from sources that I believe to be reliable and accurate. I did not receive any financial compensation for writing this post, nor do I own any shares in any company I’ve mentioned. I encourage any reader to do their own diligent research first before making any investment decisions.

Kayla Matthews is a technology blogger who regularly contributes to Inc.com, MakeUseOf and The Gadget Flow. Follow Kayla on Twitter or check out her technology blog, Productivity Bytes.