This article reviews the concept of variables and variants, strategies for planning them in a project, and implementing them in a digital experience.
Table of Contents
- Variable Groups
- How variables and variants are used in testing
- Best practices for planning variables and variants
- Testing alternate experiences using a redirect variant
- Implementing variants
Optimization ideas for your digital experience can be expressed in two parts: Variables and their Variants. Adding ideas to a project is the process of describing variables and the variants that should be tested in each.
A variable represents any aspect of the digital experience that can vary, or change.
When you’re trying to decide what kind of positive change you might make on a website, you could ask yourself: “Which aspects of this page might be able to affect user performance in a positive way?”
Examples of design aspects that can vary include:
- Hero Images
- Location of a button
- Button CTA text
- Page color theme
- A link destination
- Presence of a content widget
- Recommendation algorithm parameters
- Number of pages in the Checkout flow
- Payment options
Variables are like A/B tests
Functionally, variables are like individual A/B tests; they include a control variant used to compare the performance of one or more non-control variants.
A variant represents a distinct variation of the aspect described in a variable.
Each variable includes one control variant and any number of non-control variants.
- The control variant represents the current digital experience and is used to compare the performance of non-control variants in the same variable.
- A non-control variant represents an alternative design or functionality to the control experience.
For example, the variants used to test alternate Checkout button text might include:
- (Control) “Checkout”
- “Continue to Checkout”
- “Checkout Now”
- “Secure Checkout”
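The variable-and-variant structure above can be sketched as a simple data model. This is purely illustrative (the object shape and IDs are assumptions, not part of any Evolv AI API):

```javascript
// Hypothetical model of a variable: a named aspect of the experience
// with exactly one control variant and any number of alternatives.
const checkoutButtonText = {
  name: "Checkout button text",
  variants: [
    { id: "1.1", control: true,  value: "Checkout" },            // control
    { id: "1.2", control: false, value: "Continue to Checkout" },
    { id: "1.3", control: false, value: "Checkout Now" },
    { id: "1.4", control: false, value: "Secure Checkout" },
  ],
};

// The control variant is the baseline every other variant is compared to.
const control = checkoutButtonText.variants.find(v => v.control);
console.log(control.value); // "Checkout"
```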
Variable groups are simply a means of grouping related variables together.
Variable groups can be targeted to specific segments of your project audience so that all variants in them are shown only to visitors in that segment. For example, you might:
- Group by page they appear on (e.g. Homepage, PDP, Cart, etc)
- Group by personalization criteria (e.g. mobile, desktop, and location based variables)
In the Evolv AI Manager, variable groups can support multiple levels of hierarchy. This can be helpful in organizing variables not only by page, but by distinct areas of the customer experience, like Upper Funnel, Lower Funnel, Checkout, etc.
In the Evolv AI Web Editor, variable groups are used to group variables by the pages of the digital experience they will appear in. These groups are referred to as "(Page) Contexts".
When building variants, common code shared across variables can be added at the variable group level and applied to all variables inside it.
To learn more about targeting pages and states in your digital experience, read About Contexts.
To learn more about variable targeting, read Targeting audiences with context attributes.
How variables and variants are used in testing
In a project that uses Continuous Optimization, the system combines variants from each variable to create new experiences, or "combinations". The system uses the performance of each combination to calculate the expected performance of individual variants and decide which combinations of variants to deactivate and which new combinations should be generated.
In an A/B Test project, you can create one or more variables, each with its own variants. However, the system tests each variable's variants independently rather than combining them.
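The idea of combining variants from each variable into "combinations" can be illustrated with a Cartesian product. This is a sketch of the concept only, not Evolv AI's actual algorithm, which samples and scores combinations rather than testing every cell:

```javascript
// Sketch: enumerate all combinations of variants across variables.
// A continuous-optimization system explores a subset of this space,
// using each combination's performance to score individual variants.
function combinations(variables) {
  return variables.reduce(
    (combos, variants) =>
      combos.flatMap(combo => variants.map(v => [...combo, v])),
    [[]]
  );
}

// Two illustrative variables with two variants each:
const buttonColor = ["blue", "green"];
const buttonText  = ["Checkout", "Buy Now"];

console.log(combinations([buttonColor, buttonText]).length); // 4 combinations
```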
Best practices for planning variables and variants
Focus on individual aspects of the digital experience
Avoiding conflicts between variants
Focusing on individual (that is, independent and discrete) aspects of the customer experience allows Strategists to recognize potential conflicts between variants across multiple variables.
These aspects can be categorized in three ways:
- Presence – Testing the presence of interactive elements and content can tell us whether they have any impact at all, or whether users are skipping over them anyway. For example, testing the presence of a new button or widget, or hiding a banner to remove distractions.
- Location – Once we know that the presence of an action or content has a positive impact, we can test its location to ensure it is where users expect it, or that they encounter it at the right time. For example, testing the location of the Add to Cart button above or below the fold, or in various positions above the fold.
- Quality – Every aspect of the customer experience can be described by its qualities, such as size, shape, color, content, animation, etc. For example, the text of a headline or call to action, the style of an image, the contrast of a button, or the theme of a widget.
Examples of conflicting variants include:
- Presence Conflicts – Testing the presence of an action or content while also testing its location or quality. In the examples below, you want to avoid testing Variable 1 with Variable 2 or 3, since an experience combining variants 1.2 and 2.2 would not be able to apply variant 2.2.
- Location Conflicts – Relocating multiple elements into the same location can affect the order in which they appear, which in turn changes their positions. In the example below, if we tested a variable that relocated another element to the same place as the buttons in variant 3.2, then the location of the buttons relative to the headline would change whenever the other variant was present.
- Color Conflicts – Testing the text color of a button while also testing its background color can lead to combinations in which the text can’t be read. For example, white text on a white background.
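A color conflict like the one above can be caught with a simple pre-flight check before a combination is shown. The check below is an illustrative sketch, not a feature of Evolv AI:

```javascript
// Sketch: filter out combinations whose button text color matches
// the background color, which would make the label unreadable.
const combos = [
  { textColor: "white", backgroundColor: "white" }, // conflicting pair
  { textColor: "white", backgroundColor: "navy" },
];

const readable = combos.filter(c => c.textColor !== c.backgroundColor);
console.log(readable.length); // 1 readable combination remains
```

A real check would compare computed contrast ratios rather than exact color equality, but the principle is the same: screen out known-bad variant pairings before they reach users.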
Resolving conflicting variants
There are a couple of ways to handle cases where variants from multiple variables combine to break the design or negate each other's intended effects:
- Combine variables – While it's recommended to test one aspect of the design in each variable, combining multiple design changes into a single variant can help you manage which design aspects are changed at the same time. For example, testing button text and background color in an individual variant. This strategy can help you learn about their combined impact, but it won't tell you which of the changes had the real impact, if any.
- Test one variable before the other – By testing one variable before the other, you can use the insights from the first test to inform what you test later. It may turn out that the first variant achieves the desired impact, so that you don't have to test the other.
Understanding which changes actually make a difference
When a single variant changes multiple aspects of the customer experience, the variant’s performance can’t tell you which change made the difference, only that they had an impact together. This can impact the ability to apply these changes in other customer experiences with high confidence.
Clear and explicit descriptions of variables and variants
Describe variables and variants clearly and explicitly to ensure that people viewing performance reports understand what was tested.
This also includes describing the control variant for each variable. As you iterate through your experimentation, the control experience may change as you adopt variants. Having a record of the control experience for each iteration of your experimentation makes it clear what each variant is compared to.
In practice, a variable description generally includes a component and a characteristic of the component that will be tested.
Examples of generic and hypothesis-based variable descriptions:
| Generic Description | Hypothesis-based Description |
|---|---|
| Hero image | Hero image in which the subject is looking at the call to action will draw attention to the CTA and increase click-throughs |
| Add-to-Cart button location | An Add-to-Cart button that is located above the fold will be more obvious to users and increase conversions |
| Value prop text | Savings-focused value propositions that have a prominent appearance will stand out to prospective customers and increase sign-ups |
| Page URL redirect | A page design that is tailored to the visitor's source |
Use unique identifiers for referencing variants
Using unique identifiers for variables and variants makes it easy to communicate with team members throughout the project lifecycle, from ideation to development to performance analysis.
For example, consider these three variables and their variants:

| 1 - Button Color | 2 - Button Border Color | 3 - Button Text |
|---|---|---|
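One common convention (assumed here, not mandated by Evolv AI) is a `variable.variant` identifier, so "2.2" means the second variant of variable 2. A small lookup sketch, with illustrative variant values:

```javascript
// Sketch: resolve a "variable.variant" identifier to a readable
// description. Variable names follow the table above; the variant
// values are hypothetical placeholders.
const variables = {
  1: { name: "Button Color",        variants: ["Control", "Green", "Orange"] },
  2: { name: "Button Border Color", variants: ["Control", "Black", "White"] },
  3: { name: "Button Text",         variants: ["Checkout", "Checkout Now"] },
};

function resolve(id) {
  const [v, n] = id.split(".").map(Number);
  return `${variables[v].name}: ${variables[v].variants[n - 1]}`;
}

console.log(resolve("2.2")); // "Button Border Color: Black"
```

With identifiers like these, a strategist, developer, and analyst can all refer to "2.2" and know they mean the same change.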
Testing alternate experiences using a redirect variant
Web-based projects can leverage a special ‘redirect’ variant type to load alternate URLs for a given page in the digital experience.
Learn more: Creating a redirect variant (URL split test)
There are two ways to implement variants:
Evolv AI Web Editor
The Web Editor is for projects in environments that use the Evolv AI Web Client, or snippet, in their digital experience.
The Web Editor also enables easy QA of individual variants and combinations of variants in a production-like experience.
Variants implemented through the Web Editor are injected into the digital experience through the Evolv snippet, along with all the code they need to render.
To learn more, read Creating variants in the Web Editor.
Feature flags
Teams that need to implement variants through their system code can use variants as feature flags. In this case, variants are set up with distinct values that are used to enable flags in your codebase.
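For example, a variant's value might be consumed as a flag in application code. The sketch below is generic and hypothetical; the exact lookup depends on how your system receives variant assignments from Evolv AI:

```javascript
// Sketch: treat the active variant's value as a feature flag.
// `activeVariants` stands in for whatever mechanism delivers
// variant assignments to your codebase (names are illustrative).
const activeVariants = { "checkout.button": "secure-checkout" };

function buttonLabel() {
  switch (activeVariants["checkout.button"]) {
    case "secure-checkout": return "Secure Checkout";
    case "checkout-now":    return "Checkout Now";
    default:                return "Checkout"; // control experience
  }
}

console.log(buttonLabel()); // "Secure Checkout"
```

Because the control behavior is the `default` branch, the experience degrades safely to the current design if no variant value is delivered.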