Testing & Optimization Part 2

In the last newsletter, we presented an overview of testing and optimization practices. This month we'd like to focus on the experimental design and technology options available for implementing those techniques.

Experimental Design Overview
There are some basic principles to follow as you design an experiment. Experience will guide you going forward, but in the beginning the following tips may prove useful.

  • Be specific and document. Before beginning any experiment, document the null hypothesis, the alternative hypothesis, the methods used for data collection, and the models that will be used to analyze the data.
     
  • Start with something that is easy to structure and explain to others. Single-variable tests make both implementation and interpretation easier. Even for an experienced team, starting simple can be a great way to prove value and convince the organization to implement a full-scale testing program.
     
  • Make sure the tests are statistically significant. Calculating the required sample size is an important step in designing any experiment. Too small a sample leaves a much higher probability of false conclusions; too large a sample wastes data, which includes driving traffic through a sub-optimal site for longer than necessary. The actual calculation depends on many things, including the type of test. For example, in tests where conversion is measured as a yes/no outcome (a binomial distribution), the required sample size depends on the expected conversion rate, the desired confidence level, and the minimum difference you want to be able to detect. Given those inputs, statistical software will usually offer a means of estimating the needed sample size. Once you have collected the data and can replace the estimates with real figures (such as the observed conversion rate), go back and review the calculation to confirm you did indeed collect enough data.
     
  • Ensure you are collecting valid data. Visitors must be assigned to test groups at random, and each visitor must remain in the same group on return visits. This cannot be done with perfect accuracy (blocked and deleted cookies get in the way), but every attempt should be made to place repeat visitors in the same groups. A method of tracking results must be in place before the test begins, and ideally the time frame should be chosen to remove day-of-week, time-of-day, and time-of-month biases from the experiment.
     
  • Choose the correct model. Many statistical models are available, and selecting the appropriate one is a crucial part of designing the test. The right choice depends on many things, especially the type of data you have collected; make sure the model fits what you are trying to accomplish.
     
  • Publicize successes. Although this may seem obvious, it is an important step in securing resources for future tests, which in turn allow for greater results.
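As a sketch of the binomial case described above, the required sample size per group for comparing two conversion rates can be estimated with the standard normal-approximation formula. The function name and the example rates below are illustrative assumptions, not figures from this newsletter:

```python
import math
from statistics import NormalDist

def sample_size_per_group(base_rate, min_detectable_lift,
                          alpha=0.05, power=0.80):
    """Estimate visitors needed per group for a two-proportion test.

    Uses the normal approximation to the binomial; real statistical
    software applies further refinements, so treat this as a rough
    planning tool rather than a definitive answer.
    """
    p1 = base_rate
    p2 = base_rate + min_detectable_lift
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline conversion, detect a 1-point lift
# with 95% confidence and 80% power.
n = sample_size_per_group(0.05, 0.01)
```

Notice how sensitive the result is to the minimum detectable difference: halving it roughly quadruples the required sample, which is one reason under-powered tests are so common.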
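Once the data are in, choosing the correct model matters just as much. For the same yes/no conversion data, a two-proportion z-test is one common choice; the visitor and conversion counts below are made-up illustrations:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: control converts 400 of 8,000 visitors; the variant
# converts 500 of 8,000.
z, p = two_proportion_z_test(400, 8000, 500, 8000)
```

A small p-value here says the observed lift is unlikely to be chance alone; it does not by itself tell you the size of the lift, so report the confidence interval as well.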

Choosing the Variables to Test
Choosing which variables to test is not an easy step in the process. The possibilities are nearly endless, and there is no formula for ranking them. Every website has certain variables that will produce better results than others, but they can be difficult to identify in advance. Experience helps in quickly spotting the low-hanging fruit, but even experienced testers are occasionally surprised by an unexpected variable that produces great results.

The following variables are some of the fundamental places to start:

  • Images / creative
  • Offer
  • Call to action
  • Application functionality
  • Layout of page
  • Copy
  • Required / non-required fields on form pages
  • Speed / size of page

Size, order and color can also impact the effectiveness of each of the above items.

Implementation Technologies
There are four basic ways to implement a test:

  • Place code directly on the site
  • Use an ASP Service
  • Install software on the server
  • Install hardware within the network

 

Place code directly on the site-
The first option is to place code directly on the site; this code handles assigning visitors to the various test groups and tracking the results.
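A minimal sketch of such on-site assignment logic is shown below, hashing a visitor ID (for example, the value of a first-party cookie) so that repeat visitors land in the same group without any server-side lookup table. The function and experiment names are hypothetical, and a real implementation would still need the statistical-validity work discussed in this section:

```python
import hashlib

def assign_group(visitor_id, experiment="homepage-offer", n_groups=2):
    """Deterministically assign a visitor to a test group.

    Hashing the visitor ID means the same visitor always gets the
    same group on return visits, with no assignment table to maintain.
    """
    key = f"{experiment}:{visitor_id}".encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16)
    return bucket % n_groups  # 0 = control, 1 = variant, etc.

# The same visitor is always bucketed identically on return visits:
assert assign_group("visitor-123") == assign_group("visitor-123")
```

Because the hash keys on both the experiment name and the visitor ID, a second experiment re-shuffles visitors independently of the first, which helps keep tests from contaminating one another.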

Advantages:

  • Cost effective approach for simple tests.
  • Total flexibility to tailor solution to your exact needs.
     

Disadvantages:

  • Difficult and costly to implement advanced multivariable methodologies.
  • Significant expertise required to write the code in a way that ensures statistical validity.
  • Typically offers fewer options for efficiently implementing test pages.

 

Use an ASP Service-
Another option is to use an ASP (application service provider) service: a small piece of code placed on each test page calls more complex code hosted by the ASP provider, which administers the test.

Advantages:

  • Leverages code developed by the ASP for ensuring statistical validity.
  • No software, hardware or code to maintain.
  • Systems can handle automated modifications to test pages.

Disadvantages:

  • The fee structure is often based on the number of visitors who are tested or the total number of experiments run.
  • Not all vendors support using a first-party cookie for visitor tracking and group assignment.
  • Tested content is sometimes required to be hosted by the ASP.

 

Install software on the server-
A third method is to install a software agent on the web server that monitors all requests; it can also handle the assignment and tracking of visitors into the various test groups.

Advantages:

  • Leverages code developed by the vendor for ensuring statistical validity.
  • Systems can often handle automated modifications to test pages.

Disadvantages:

  • Must install and maintain software on server.
  • Requires up-front payment for software.

 

Install hardware within the network-
Lastly, a hardware device can be installed upstream of the web server to handle the assignment and tracking of visitors into the various test groups.

Advantages:

  • Leverages code developed by the vendor for ensuring statistical validity.
  • Systems can handle automated modifications to test pages.

Disadvantages:

  • Must insert hardware into network or reconfigure DNS.
  • Requires up-front payment for the system or ongoing monthly fees.

 

As you can see, all four options have distinct strengths and weaknesses. Evaluate your organization's needs and the resources available in order to choose the option that fits best.

By Bill Bruno
About the Author:

Bill Bruno is the CEO - North America, Ebiquity.
