Businesses often need to evaluate the performance of their Search Assistants under various index and search configurations before assigning one. A variant, in this context, is a unique combination of index and search configurations assigned to a Search Assistant. To deploy the most suitable variant, a business user must compare the available variants, i.e., the competing approaches.
SearchAssist enables businesses with a testing methodology for conducting randomized experiments across variants. It provides you the capability to test and evaluate the performance of a particular variant against the rest, so you can compare the relative strength of any two or more variants and pick the better one.
SearchAssist allows you to create experiments to continuously improve search relevance, testing competing approaches against each other across all variants for a given context or time frame.
Consider the following scenarios:
Scenario 1: You have configured an index and tuned search configurations to optimize search results. You have run it against test data in a controlled environment, but will these settings work with real-time data? (Refer to Analyzing Performance.)
Scenario 2: You have deployed your Search Assistant and analyzed its performance. You want to tweak the index and/or search configuration a little, so you have cloned the existing configuration and made the necessary changes. How can you verify whether these changes work?
Using Experiments, you can find out which index and search configuration combination is more effective than the others. Each experiment can hold up to four variants, A, B, C, and D, among which live traffic is split randomly for a set duration. It helps you:
- create up to four variants using unique combinations of previously created indices and search configurations
- run them live within the same Search Assistant by splitting the live traffic among the variants at a time
- see which variant performs better than the others
- measure the outcomes on metrics like clicks and click-through rates
Internally, every search is associated with a unique user identifier. This serves two purposes:
- ensures randomness. The Search Assistant creates sets of users, one for each variant. Whenever a new user arrives, they are randomly routed into one of the variants based on a hash of their unique user identifier
- maintains the same distribution. Once a user is assigned a variant, they continue with that same variant, ensuring that the experiment's conclusions are reliable
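The routing described above can be sketched as deterministic hash bucketing. This is a minimal illustration of the technique, not SearchAssist's internal implementation; the function and variant names are assumptions:

```python
import hashlib

def assign_variant(user_id: str, variants: list[str]) -> str:
    """Route a user to a variant based on a hash of their unique identifier.

    The hash makes the split effectively random across users, while the
    same user_id always maps to the same variant (sticky assignment).
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# A returning user keeps their original variant, so metrics stay consistent.
assert assign_variant("user-42", ["A", "B"]) == assign_variant("user-42", ["A", "B"])
```

Because the assignment is a pure function of the user identifier, no per-user state needs to be stored to keep users in the same variant across sessions.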
You can exercise further fine control on an experiment by:
- Specifying the percentage of traffic diverted to each variant, and/or
- Setting the duration of an experiment
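To see how a percentage-based split might work, here is a hypothetical sketch that extends hash-based routing to weighted traffic allocation. The function name and weighting scheme are illustrative assumptions, not SearchAssist's actual mechanism:

```python
import hashlib

def assign_weighted(user_id: str, weights: dict[str, float]) -> str:
    """Route a user to a variant according to traffic percentages.

    weights maps each variant to its share of traffic; the values should
    sum to 100. The hash keeps each user's assignment stable across searches.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    point = (int(digest, 16) % 10_000) / 100  # uniform point in [0, 100)
    cumulative = 0.0
    for variant, share in weights.items():
        cumulative += share
        if point < cumulative:
            return variant
    return variant  # guard against floating-point rounding at the upper boundary

# e.g. divert 75% of live traffic to variant A and 25% to variant B
variant = assign_weighted("user-42", {"A": 75.0, "B": 25.0})
```

Setting a variant's share to 0 effectively removes it from the experiment, while equal shares reproduce the uniform split shown earlier.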