Should You Apply A/B Testing in Change Management?
Written by Ankur Shah
One of my favorite tactics for validating a change management approach is A/B testing. Whether it’s an appropriate tactic depends on the organization’s size and the specifics of the change, but applied effectively in the right setting, A/B testing can yield significant savings and a better change management approach.
A/B testing is a way to compare two options and analyze the results so you can choose the one that is most effective for your needs. Just as marketers or solution developers test variations of an ad campaign or software solution before implementing one, testing variations of our change management plans can help identify the optimal path to high levels of adoption.
A/B Testing Elements in Change Management
To execute this approach, a few critical elements should exist:
1. Clear methods to measure adoption, utilization and proficiency
If we are going to test one approach against another, we need quantitative measures we can use to compare approach A against approach B. This data-driven method enables the organization to identify the superior approach.
2. Desire from the sponsor and project team to test the change strategy and wait for the data
Change measures happen over time, and we need to let each approach play out over a certain period to assess results thoroughly. While preliminary data is helpful (e.g., we’re seeing an immediate improvement in Awareness and Desire in approach A), we don’t want to jump to conclusions before the final results are available.
3. Willingness from those receiving the different variations to offer timely feedback
In addition to hard data, we value qualitative feedback and want to set up mechanisms to receive it from participants in the trial. Although we are testing two solutions, we may ultimately land on a hybrid solution. Feedback informs the decision about which change management solution we want to roll out.
4. Similarly impacted, homogeneous groups
A/B testing is easier to apply in larger organizations where the impacted people and groups are similar and geographically dispersed. However, the approach can be used in organizations of all sizes. We often test approaches for groups that are impacted by change in a similar way. These groups tend to consist of front-line employee populations—such as call center agents, sales representatives, IT support desk teams, etc.—whose job roles are also consistent across the group.
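To make the first element concrete, here is a minimal sketch of comparing a quantitative adoption measure across two trial cohorts. The metric definition and all numbers are hypothetical illustrations, not figures from a real trial.

```python
# Minimal sketch: comparing an adoption metric for two trial cohorts.
# All names and numbers below are hypothetical illustrations.

def adoption_rate(adopted: int, total: int) -> float:
    """Share of impacted users who have adopted the new way of working."""
    return adopted / total

# Hypothetical measurements collected at the end of the trial period
cohort_a = {"adopted": 88, "total": 100}  # received Approach A
cohort_b = {"adopted": 76, "total": 100}  # received Approach B

rate_a = adoption_rate(cohort_a["adopted"], cohort_a["total"])
rate_b = adoption_rate(cohort_b["adopted"], cohort_b["total"])

print(f"Approach A adoption: {rate_a:.0%}")  # 88%
print(f"Approach B adoption: {rate_b:.0%}")  # 76%
```

The same pattern extends to utilization and proficiency: define each measure once, collect it identically for both cohorts, and compare.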
Rationale for A/B Testing
Often in large, complex organizations, there is strong pushback from leaders on a full change strategy, even for large transformational changes with high stakes. Why? Because it’s very expensive to execute these plans. Taking people “off the line” to participate in training, read communications, and sit in meetings reduces productivity. If an employee isn’t fulfilling their primary job role, customers are not being served, products are not being developed, and sales are not being made.
But you can demonstrate to leaders that A/B testing is a good investment. By running an experiment, we essentially test two different paths to adoption. Perhaps comparing a low-cost strategy to a high-cost strategy would be helpful. Or a test that compares providing employees with job aids versus in-person training. We can even measure employee satisfaction by approach as a data point for consideration. A/B testing could prove fruitful with different sponsor messages too. Will an e-mail campaign work, or do we need managers to communicate during weekly meetings?
The key is to understand and measure how quickly and effectively one method achieves adoption over the other. For example, does the low-cost strategy get us close enough to adoption goals that the savings outweigh the benefits of the high-cost strategy?
Trade-offs of A/B Testing
In addition to employee time, A/B testing takes team time to execute. You need to run the experiment with both variations and then commit to assessing the adoption rates. It also requires the solution to be nearly complete, and the team must be willing to hold off on deployment while you test the strategies. A/B testing also produces some duplicated work, some of which you know you will throw away in the end. This means you must evaluate thoroughly whether A/B testing makes sense before you get started.
Here’s a list of helpful questions to ask yourself:
- How large is the transformation effort?
- How many people/groups are impacted?
- Are the groups and impacts similar or different?
- How many locations are impacted?
- What are the costs and risks of poorly managed change?
- What is the level of resistance to change?
- How much time do you have before go-live?
- Are the sponsor and project team supportive of A/B testing?
A table can be used to evaluate approaches. If you are testing more than two approaches, you can add columns and metrics. Depending on the depth of your data, the results can be analyzed with statistical packages or visualized with tools such as Tableau.
| Measure | Approach A (High Cost) | Approach B (Low Cost) |
| --- | --- | --- |
| Speed to adoption | 11 days | 17 days |
| Cost per user adoption | $27 | $12 |
| Training time per user | 1 hour | No formal training (job aid provided) |
Although this simplified example shows that Approach A achieves adoption faster, Approach B is significantly less costly per user. Approach B’s results could realistically offer adequate adoption and usage for your goals. Evaluating these trade-offs with your sponsor and project team will help determine which approach to take, and may even point to a hybrid solution that works best.
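Using the per-user figures from the table above, a quick back-of-the-envelope comparison can frame the trade-off for the sponsor. The headcount of 500 impacted employees is a hypothetical assumption for illustration.

```python
# Sketch: totaling the trade-off between the two approaches.
# Per-user figures come from the comparison table; the headcount is hypothetical.

cost_per_user = {"A": 27, "B": 12}     # dollars, from the table
days_to_adoption = {"A": 11, "B": 17}  # from the table

users = 500  # hypothetical number of impacted employees

total_cost = {k: v * users for k, v in cost_per_user.items()}
extra_cost_a = total_cost["A"] - total_cost["B"]
days_saved_a = days_to_adoption["B"] - days_to_adoption["A"]

print(f"Approach A total cost: ${total_cost['A']:,}")  # $13,500
print(f"Approach B total cost: ${total_cost['B']:,}")  # $6,000
print(f"Approach A costs ${extra_cost_a:,} more and reaches adoption {days_saved_a} days sooner")
```

Framed this way, the question for the sponsor becomes whether six days of faster adoption is worth the additional spend, given the organization’s stakes and timeline.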
A/B Testing Offers Benefits With Change
Why would anyone do this? In larger organizations, time is one of the most expensive line items on their books. If we can shave a few hours off adoption time, we can save a large organization significant costs. If we take a better path to adoption, we’re also going to see a benefit to employees. And helping people through change is the best reason of all.