A/B Tests are a great way to accurately measure the impact of your adoption efforts with Chameleon. You can create and compare different versions of the same Experience and decide which is more impactful for your goals.
From the Dashboard, you can run a Control Group test (Chameleon vs. nothing) or an A/B Test (different variations of the same Tour). You can also test other Experiences manually. We recommend starting with a Control Group test and iterating until your Experience is effective before testing Experience variations.
Explore an interactive demo for A/B Testing 🧪
Availability & Usage
📩 Contact us to discuss your plan needs
When creating your Tour you can choose to run a Control Test for your selected audience and exclude a percentage of users who should not see your Tour. This will enable you to assess the effectiveness of your Tour and understand how it impacts your users' goals.
Build your Tour as usual, then go to the Testing panel in your Dashboard to turn on Control Group testing. Next, set the proportion of users that should be part of the control group and decide when your experiment ends.
👉 For Tours with multiple variants, you can also pick the variant that should be displayed to your users, from the same Testing panel.
Every user identified by Chameleon is automatically assigned a random number (Testing ID property) with a value between 0 and 100. This value is persistent for each user and can be leveraged for targeting and experimentation.
To select a control group, Chameleon randomly chooses a number between 0 and 100 as the start of the range for the group. The size of the group determines the end of the range (wrapping back to 0 after 100). Users whose Testing IDs fall within this range form the group.
E.g. if you choose a 20% control group for a Tour, the Testing ID range for the group could be 87.3 to 7.3 (wrapping past 100). If you do this for another Tour, the range may be 12.9 to 32.9.
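To make the range mechanics concrete, here is a minimal sketch of how such a membership check could work. The function name and parameters are illustrative assumptions, not Chameleon's actual implementation:

```javascript
// Hypothetical sketch of selecting a control group from persistent
// Testing IDs (0-100). The range start is chosen at random per Tour;
// isInControlGroup and its parameters are illustrative names only.
function isInControlGroup(testingId, rangeStart, groupSizePct) {
  const rangeEnd = (rangeStart + groupSizePct) % 100;
  if (rangeStart <= rangeEnd) {
    // Range does not wrap: e.g. 12.9 to 32.9
    return testingId >= rangeStart && testingId < rangeEnd;
  }
  // Range wraps past 100 back to 0: e.g. 87.3 to 7.3
  return testingId >= rangeStart || testingId < rangeEnd;
}

// A 20% control group starting at 87.3 covers 87.3-100 and 0-7.3:
console.log(isInControlGroup(95.0, 87.3, 20)); // true
console.log(isInControlGroup(3.0, 87.3, 20));  // true
console.log(isInControlGroup(50.0, 87.3, 20)); // false
```

Because each user's Testing ID is persistent, the same user always lands on the same side of the split for a given experiment.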
You can see the results of all Experiments on any Tours you test in your Dashboard. You can change your Tour Goal after you start an Experiment, and Chameleon will simply start a new Experiment to keep the results clear.
An event will be logged whenever a user is first identified as being within the Target Audience and eligible to see this Tour. This event -- "Chameleon Experiment entered" -- will be available within all your connected analytics integrations.
Within Mixpanel, the event name is "Experiment Started" to better match Mixpanel's experimentation analysis framework. 👉 See our integration guide to learn how to identify different Tour Variants in Mixpanel.
For this event, the following properties will also be logged:
The "Group" property values are either:
Control (Out) -- user is part of the control group and will not see the Tour
Test (In) -- user is part of the test group and can see the Tour
Other events (e.g. "Tour started") will also be logged as normal once a user starts interacting with your Tour. You will see all these within your analytics platform (e.g. Amplitude, Heap, Mixpanel, Google Analytics) and can use this to further analyze the conversion or relative impact of your Chameleon experiment.
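As a rough illustration of the downstream analysis, the sketch below tallies conversion by group from exported events. The "Chameleon Experiment entered" event name and the Group values match those above; the payload shape, the goal event name, and the helper function are assumptions for the example:

```javascript
// Hypothetical sketch: computing conversion per experiment group from
// exported analytics events. The payload shape and the goal event name
// ("Chameleon Goal met") are assumptions, not Chameleon's documented API.
function conversionByGroup(events) {
  const stats = {
    "Control (Out)": { entered: 0, converted: 0 },
    "Test (In)": { entered: 0, converted: 0 },
  };
  for (const e of events) {
    const group = stats[e.properties.Group];
    if (!group) continue; // ignore events without an experiment group
    if (e.name === "Chameleon Experiment entered") group.entered += 1;
    if (e.name === "Chameleon Goal met") group.converted += 1;
  }
  for (const g of Object.values(stats)) {
    g.rate = g.entered ? g.converted / g.entered : 0;
  }
  return stats;
}
```

Comparing the two rates gives the relative lift of showing the Tour versus holding it back.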
You can also perform multi-variate testing on your Tours and show different versions to users to determine which version performs better.
Build your Tour as usual, and from the Dashboard's Build Steps panel, click the 'Create variant' button. Chameleon will duplicate your existing Tour and you'll be able to adjust each variant further.
With each variant, you'll be able to edit the Steps and all configurations as usual, including reordering, or deleting Steps in the Dashboard. In the Builder, you'll be able to switch between each variant from the top bar to make adjustments or preview how each displays.
You can create variants with different numbers of Steps and configurations. Once you have the content and style set for each variant, go to the Test panel in the Dashboard to define your A/B test. You can:
set a percentage of users that should see each variant
pick how the experiment should end -- automatically (Chameleon stops the experiment) or manually (you stop the experiment)
Next, publish your Tour live. In the Analytics tab, you'll see a section dedicated to your experiments where you can review how each variant is performing.
Here too, you can switch between different Experiments to review the results.
Review your results in the Dashboard and send them to any connected analytics integrations to better understand which variant is more successful at driving users toward their goals.
With each Experiment you start, Chameleon will analyze how each variant is performing and rate it with a 'Confidence score'. This takes into account how many users engaged with your Tour during your Experiment.
The 'Confidence score' is a good way to understand what you need to work on to improve either the quality of your experiment (e.g. audience size) or the configuration of your Tours (e.g. if not enough users complete the Tour or meet your goal).
You can also manually test different Experience variations. You'll have to define your Test Segments and assign them to the appropriate Experience version when creating your variations. Here's how to do it 👇
To create a Test Group, simply add an extra "sampling filter" to your Segment that will select a random sample of users of the desired size. You can still target users based on other conditions, such as user properties, events, data sources, etc.
To add the sampling filter:
Select Default properties as the type of filter
Select Testing ID in the next dropdown
Use more than or less than to define the range of users
Set the boundary number for this Testing ID value.
For example, with a more than filter and a boundary of 50, users with a Testing ID value between 50 and 100 would be targeted. This constitutes 50% of the users within the group defined by the other segment filters.
To target 10% of users, you could use either:
Testing ID more than 90
Testing ID less than 10
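The filter logic reduces to a simple threshold comparison on the Testing ID. A minimal sketch (the function name is illustrative, not part of Chameleon's API):

```javascript
// Hypothetical sketch of evaluating a sampling filter against the
// persistent Testing ID property (0-100). Names are illustrative only.
function matchesSamplingFilter(testingId, operator, boundary) {
  return operator === "more than"
    ? testingId > boundary
    : testingId < boundary;
}

// Two equivalent ways to target 10% of users:
console.log(matchesSamplingFilter(95, "more than", 90)); // true  (top 10%)
console.log(matchesSamplingFilter(5, "less than", 10));  // true  (bottom 10%)
console.log(matchesSamplingFilter(50, "more than", 90)); // false
```

Either filter captures 10% of users; they simply sample from opposite ends of the Testing ID range.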
You can use the same filter configuration in another Segment (by re-creating the filter) to target the same user group. This enables you to run multiple A/B tests on the same user group.
To test two variations of the same Experience:
Create the control version of the Experience, including a Segment, using the sampling filter above.
Duplicate the control Experience, update the Experience name (using the variant name/label), and then re-create the Segment. This time use the opposite sampling filter so that you're targeting the alternative user group.
Set both Experiences live.
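The steps above amount to splitting the audience with opposite sampling filters, so every targeted user falls into exactly one Experience. A minimal sketch, assuming a 50/50 split at a boundary of 50 (names are illustrative, not Chameleon's API):

```javascript
// Hypothetical sketch of the manual two-variant setup: the control
// Segment uses "Testing ID less than 50" and the variant Segment uses
// the opposite filter, so the two groups never overlap.
function assignVariant(testingId) {
  return testingId < 50 ? "control" : "variant-b";
}

console.log(assignVariant(12.9)); // "control"
console.log(assignVariant(87.3)); // "variant-b"
```

Because Testing IDs are persistent, each user consistently sees the same Experience for the lifetime of the test.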