From an analytics point of view, goal setting is the number one priority. Before we discuss variables, time frames, methodology or sample sizes, we must understand the big picture. A background discussion of program objectives with the account and creative teams helps set the stage. The usual scenario is to design an A/B or A/B/C test to measure which landing page drives a desired behavior, such as enrollment in a patient relationship marketing program.

Once the objectives are determined, the next step is figuring out what to test and how to test it. This sounds easy, but it can become increasingly complex when timing and budgets are factored into the equation. The best approach is to optimize the initial design concepts through qualitative analysis, since focus groups can be run with minimal expense.

Creative: getting started
Creative assets possess two kinds of qualities: those that can be measured and acted upon, and those that are purely subjective. While both can be tested, only the former can be acted on, so it is important to distinguish between the two. Examples of things that can be accurately evaluated include layout configuration, navigation, type treatment and form design. Creative treatments, by contrast, are much more difficult to evaluate. The problem is that web pages are a combination of the quantifiable and the subjective, which is why the test design and the sequencing of testing are so important.

From a creative perspective, you are better off establishing your look and feel first through qualitative evaluation, independent of the exact layout and execution. Once a look and feel is established, you can use quantitative testing to evaluate the relative effectiveness of various layout configurations.

Analytics: testing
We recommend testing three designs, combining eye-tracking tools with follow-up surveys of panelists to understand which creative elements resonate most strongly with patients. This feedback is invaluable for understanding what motivates potential patients to read the content and become actively engaged with the form.

The creative team should be consulted on the content of questionnaires, but input from the analytics team is vital to make sure the questions aren’t biased. The best questions ask for feedback on distinct elements and are phrased neutrally rather than in a way that leads the respondent.

Creative: interpreting test results
When testing web creative elements, it is important to recognize that many testing environments bear little resemblance to the actual circumstances in which people will interact with your site. Often subjects will be working in an unfamiliar environment on an unfamiliar computer. Perhaps most importantly, they may be with a moderator, a group or an interviewer. It is very difficult for someone to read, interact and react in a concentrated manner under these circumstances. That’s why in-market testing is so valuable. Simple A/B testing can accurately demonstrate higher-performing designs, especially on measures that are quantifiable, such as click-through rates, enrollment rates and form completion rates.
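
To make that concrete, here is a minimal sketch, with made-up numbers and a helper function of our own, of how a quantifiable in-market result such as click-through rate can be compared between two landing pages using a standard two-proportion z-test.

```python
from math import sqrt, erfc

def two_proportion_z_test(clicks_a, visits_a, clicks_b, visits_b):
    """Two-sided z-test for a difference in click-through rate between two pages."""
    ctr_a = clicks_a / visits_a
    ctr_b = clicks_b / visits_b
    pooled = (clicks_a + clicks_b) / (visits_a + visits_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (ctr_a - ctr_b) / std_err
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value under the normal approximation
    return ctr_a, ctr_b, z, p_value

# Hypothetical pilot numbers: page A, 420 clicks from 5,000 visits; page B, 360 from 5,000.
ctr_a, ctr_b, z, p = two_proportion_z_test(420, 5000, 360, 5000)
print(f"CTR A = {ctr_a:.1%}, CTR B = {ctr_b:.1%}, z = {z:.2f}, p = {p:.3f}")
```

A small p-value suggests the observed gap is unlikely to be noise; the same calculation applies to enrollment and form completion rates.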

Creative: variables
One of the risks of creative testing is that acting on the results of a test can actually invalidate those same results. Assume you put three webpage executions into testing and learn that subjects like the navigation from one, the imagery from the second and the features from the third. There’s a temptation to assume that if you combine the preferred elements you’ll have a winning design. But the result is probably going to be a Frankenstein’s monster: so different from any of the original designs put into testing that you cannot assume the success of the parts equals the success of the gestalt. Small changes in creative execution can affect the overall perception and success of a design out of all proportion to the size of the change.

Analytics: variables
The most common error people make at this stage is testing too many variables at once. The best case is to test three designs that differ in only two or three distinct ways. For example, Design A might combine educational elements with a one-step enrollment form. Design B might have similar educational elements, an interactive device and a large callout for the enrollment form. Design C might be similar to B, with a video instead of the interactive element.

The incorrect approach would be to test completely disparate designs with different value propositions, color schemes, link elements, videos and a mix of single- and multi-column layouts.

The goal is not only to have a distinct favorite design, but to understand why people prefer it. In the first example a distinct winner is likely and the proper course of action will be clear. In the second, a winner is also likely, but there is a high probability of “information overload” among panelists. Isolating the elements to change will be difficult, and establishing confidence in a revamped design that combines elements cherry-picked from each version will be almost impossible.

Once the qualitative results are in, it is important for the analytics group to have active discussions with the creative and account teams to properly interpret the results and set expectations about what elements of the new designs can be tested in the future.

Ideally, the timeline for the project should include a live pilot phase to actively test two designs in the field. This is the time for quantitative testing to confirm that a preferred layout actually performs as expected.
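
As a rough planning aid for such a pilot, the sketch below uses the common two-proportion sample-size approximation (at a 5% significance level and 80% power) to estimate how many visitors each design would need; the baseline and target completion rates are assumptions for illustration only.

```python
from math import ceil, sqrt

def visitors_per_arm(p_baseline, p_variant, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per design to detect the given lift.

    z_alpha = 1.96 corresponds to a 5% two-sided significance level;
    z_power = 0.84 corresponds to 80% power.
    """
    p_bar = (p_baseline + p_variant) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p_baseline * (1 - p_baseline)
                                  + p_variant * (1 - p_variant))) ** 2
    return ceil(numerator / (p_baseline - p_variant) ** 2)

# Assumed example: an 8% form completion rate today, and we want to detect a rise to 10%.
print(visitors_per_arm(0.08, 0.10))  # roughly 3,200 visitors per design
```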

Creative: “best design” conundrum
Of course, the dilemma for the creative team is that the winning design is not always the best design. It can be very surprising what works, and this is never truer than with landing pages and form design. Designers and agencies have their best practices: hard-won knowledge about what works. What’s so interesting about the testing process is how often those best practices are shown not to work as expected. When the results differ substantially from accepted best practices, you can expect pushback not only from the creative team but also from strategists.

Analytics: defining objectives
This is the phase where explicitly defining objectives is most important. Work with the account and creative teams to define the criteria for determining a winner. Don’t just throw in a bucketful of measurements for things that are “nice to know”; this will likely cause everyone to miss the forest for the trees. For example, if the goal is acquisition, the correct measurement is: “Which page drives a higher percentage of people to the enrollment form?” If the goal is to increase knowledge, the defining measures will be based on: “Which page has a higher click rate and/or a lower abandon rate?” It is important to set a numeric goal for success, such as: landing page “Alpha” must have a 5% higher click count on product information than landing page “Omega” (see chart above). The creative team should be on board with a plan of action if the goal is not achieved: at that point, either the designs must be modified further or everyone should be comfortable going with a directional result.
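
To keep that decision mechanical once the goal is agreed, a tiny sketch like the one below (with invented click counts) can record whether the pre-set lift was actually achieved or whether the team is looking at a directional result.

```python
def meets_goal(clicks_alpha, clicks_omega, required_lift=0.05):
    """Check whether Alpha beat Omega on product-information clicks by the agreed margin."""
    lift = (clicks_alpha - clicks_omega) / clicks_omega
    return lift, lift >= required_lift

# Hypothetical click counts from the in-market period.
lift, goal_met = meets_goal(clicks_alpha=1260, clicks_omega=1175)
print(f"Relative lift: {lift:.1%} -> {'goal met' if goal_met else 'directional result only'}")
```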

Analytics: final thoughts
Finally, keep in mind that testing should not be a one-time process. A comprehensive plan should include follow-up testing at defined intervals, such as every one or two years. As current events constantly remind us, market dynamics are always in flux, so we should continually strive to improve on past results and adjust to changing conditions. With proper internal communication and clear goal setting, programs will be optimized to their fullest potential.

Robert Egert is director of creative and user experience at DKI, and Louis Winokur is director of analytics at DKI.