This article reviews what projects are and the key stages of the project lifecycle.
What is a project?
Projects typically differ by the audience and/or metric they are designed to optimize.
Examples of project use cases include:
| Example Project Name | Audience | Optimization Metric | Scope |
| --- | --- | --- | --- |
| Mobile Ecommerce Funnel | Mobile Visitors | Order Value | Full funnel, multiple micro conversions targeting different audience segments |
| Desktop Ecommerce Funnel | Desktop Visitors | Order Value | |
| Product Detail Page | All Visitors | Add to Cart | Testing ideas that apply to any device, optimizing an intermediate conversion in a bigger funnel |
| Cart Page | All Visitors | Checkout | |
| Subscription Funnel | All Visitors | Subscription Value | Testing ideas on a general audience. May include personalized ideas targeted to unique segments |
How does a project compare to an A/B test?
Each project can be set up to run like an A/B test, or you can take advantage of Evolv AI's Continuous Optimization methodology: an MVT project that combines variants from multiple A/B tests, with the ability to add new ideas whenever you want without compromising statistical calculations.
More in the next section.
Setting up a project
Setting up a project is the process of choosing the audience and metric you want to optimize and deciding how your ideas should be tested. Adding the ideas themselves is the most important part of setting up a project.
In the Evolv AI Manager, the project Draft is where all settings and ideas are managed.
- Continuous Optimization
Continuous Optimization is Evolv AI’s answer to the inefficiency and risks associated with A/B testing. It uses a multivariate testing methodology to learn which ideas have the greatest impact and leverages AI to dynamically deliver the best ideas to more people while minimizing the exposure of poor ideas.
- A/B Testing
Not every optimization scenario requires the efficiency and flexibility of a long-running Continuous Optimization project. In these cases, A/B testing is an adequate methodology for answering simple questions.
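As a simple illustration of the A/B case, visitors are typically assigned to a control or variant experience deterministically, so the same visitor always sees the same version. The hashing approach below is a generic sketch of that pattern, not Evolv AI's internal allocation algorithm:

```javascript
// Hash a visitor ID to an unsigned 32-bit integer so the same
// visitor always lands in the same bucket on repeat visits.
function hashVisitorId(visitorId) {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep unsigned 32-bit
  }
  return hash;
}

// Assign 'variant' to roughly `variantSplit` of visitors, 'control' to the rest.
function assignBucket(visitorId, variantSplit = 0.5) {
  const normalized = hashVisitorId(visitorId) / 0xffffffff; // map to 0..1
  return normalized < variantSplit ? 'variant' : 'control';
}
```

Because assignment depends only on the visitor ID, no per-visitor state needs to be stored to keep the experience consistent.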
The Project Audience is the primary filter for including visitors in a project. In projects that use Continuous Optimization, the AI selects which ideas to test by evaluating the Project Audience's performance against the designated Optimization Metric.
Learn more: About the Project Audience
The Optimization Metric is the parameter used to measure the performance of a project's audience. It serves as the compass guiding the AI's decision-making. This metric can be a specific visitor behavior (event) or a scalar value, and the system relies on this metric to decide which ideas should be shown to a larger audience.
Learn more: About Optimization Metrics
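Putting the two together, a project's setup can be pictured as a small configuration object: an audience filter plus the metric the AI optimizes against. The shape below is purely illustrative; the field names are assumptions, not the Manager's actual schema:

```javascript
// Illustrative project definition: who is included, and what is optimized.
const project = {
  name: 'Mobile Ecommerce Funnel',
  audience: {
    // Include only visitors whose attributes match every rule.
    rules: [{ attribute: 'deviceType', operator: 'equals', value: 'mobile' }]
  },
  optimizationMetric: {
    event: 'purchase',   // the visitor behavior (event) being optimized
    value: 'orderValue'  // scalar value attached to that event
  }
};

// A minimal audience check against a visitor's attributes.
function matchesAudience(audience, visitor) {
  return audience.rules.every(
    (rule) => rule.operator === 'equals' && visitor[rule.attribute] === rule.value
  );
}
```

Here a mobile visitor would be included in the project, while a desktop visitor would never be served its ideas.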
Project variables and variants
Optimization ideas for your digital experience can be expressed in two parts.
- Variables – A variable represents some aspect of the digital experience that can vary or change. Functionally, variables are like A/B tests; they include a control variant used to compare the performance of one or more non-control variants.
- Variants – A variant represents a distinct version of an aspect of the digital experience you want to test. Variables are composed of variants.
Adding ideas to a project is the process of describing variables and the variants that should be tested for each.
In a project that uses Continuous Optimization, the system combines variants from each variable to create new experiences. The system uses the performance of each combination to calculate the expected performance of individual variants and to decide which combinations to deactivate and which new combinations to generate. In the process of generating new combinations, the system may also exclude variants it determines are not worth testing at that time. These variants are considered 'dormant' when they no longer appear in any active combinations. Dormant variants may be revived at a later time to explore new experiences with other variants.
In an A/B Test project, you can create one or more variables, each with its own variants. However, the system will test the variants independently.
Learn more: About Variables and Variants
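The combination step described above can be pictured as a cartesian product over each variable's active (non-dormant) variants. This is a generic sketch of the idea, not Evolv AI's actual combination algorithm:

```javascript
// Each variable contributes exactly one variant per combination.
// Dormant variants are excluded before combinations are generated.
function activeCombinations(variables) {
  const activeVariants = variables.map((v) =>
    v.variants.filter((variant) => !variant.dormant)
  );
  // Cartesian product: every way of picking one active variant per variable.
  return activeVariants.reduce(
    (combos, variants) =>
      combos.flatMap((combo) => variants.map((variant) => [...combo, variant.name])),
    [[]]
  );
}

const variables = [
  { name: 'headline', variants: [{ name: 'control' }, { name: 'urgent' }] },
  { name: 'ctaColor', variants: [{ name: 'control' }, { name: 'green', dormant: true }] }
];

// With 'green' dormant, only two combinations remain:
// [['control', 'control'], ['urgent', 'control']]
```

This also shows why dormancy matters: marking one variant dormant prunes every combination it would have appeared in, shrinking the space the system has to explore.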
Once you’ve described your ideas as variables and variants, they need to be implemented to be tested in your digital experience.
The Evolv AI Web Editor is a desktop application designed to make implementing variants possible for technical and non-technical users.
Features range from defining the pages or "contexts" your ideas will appear in to implementing design ideas and QAing how they render together in a production-like environment.
The Web Editor has several generative AI capabilities that make it easy for non-technical users to implement their ideas. These capabilities include:
- Code generation – The AI will attempt to implement your variant design based on the description alone
- Image generation – Generate alternate images to test in your digital experience
- Text generation – Generate alternate copy for headlines, buttons, and more
Feature flag implementation
The Evolv AI platform also supports several SDKs that allow you to run projects in any tech stack. These can be set up by adding variables and variants through the Manager.
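Outside the Web Editor, the variable/variant model maps naturally onto a feature-flag-style lookup in application code: the platform allocates variants for a visitor, and the application renders whichever version it was given. The snippet below is a generic illustration of that pattern, not the actual Evolv AI SDK API:

```javascript
// Hypothetical variant assignments the platform allocated for this visitor,
// keyed by variable. In a real integration these would come from an SDK call.
const assignments = {
  'checkout.buttonText': 'Buy now',
  'checkout.showTrustBadges': true
};

// Look up the allocated variant for a variable, falling back to the
// control experience when the visitor has no allocation (e.g. the
// project is not live or the visitor is outside the audience).
function getVariant(key, fallback) {
  return key in assignments ? assignments[key] : fallback;
}

const buttonText = getVariant('checkout.buttonText', 'Add to cart');
```

Keeping the control value as the fallback means the page degrades gracefully whenever no allocation is available.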
Publishing a project
Publishing is the process of deploying project changes to a specific environment. These changes may include activating or deactivating variables and variants, updating targeting criteria, or patching broken code in your project.
The publishing process assesses the latest changes to safeguard the integrity of statistical calculations in live projects. It may give you one or more options to proceed depending on the type of project you are running.
In the Manager, this means moving changes from Draft to Live by clicking the “Move to Live” button and confirming your decision to go live.
Learn more: Publishing a project
Running a project
Once a project is live, visitors matching the project audience will start seeing your published ideas. The system will begin collecting data on them and calculating their performance.
There are two aspects of a project to keep your eye on:
- Monitoring the ‘health’ of the project operations, such as traffic volume, events firing, and the rate at which the AI is exploring new experiences in the Optimization phase of the project
- Analyzing the performance of individual optimization ideas, whether they prove or disprove your hypotheses, and how those ideas perform in combination with one another
One caution: avoid running concurrent projects that share the same audience and optimization metric and affect the same areas of the digital experience. Each project's performance is measured in isolation and does not account for ideas visitors may see in other projects they belong to. Overlapping projects can therefore add noise to your performance data, as the experience may be inconsistent across the project audience.