Google AdWords – Experiments with Multivariate Testing

September 16th, 2010

Ask two different search marketers about the best AdWords strategy and you may end up with more options than you can possibly handle. Google AdWords can be quite challenging to break into, especially for those new to online ads and campaigns. The good news is that there is a growing set of tools designed to help you test and identify the top-performing variants in your search campaigns across keywords, bid levels, ad variations, and campaign settings.

One of Google’s less-utilized features (largely because of its perceived complexity) is the ability to run split and multivariate tests on different versions of your campaigns. Ironically, these tests can be extremely helpful for identifying the optimal settings for your individual campaigns:

Google AdWords Experiments

AdWords Campaign Experiments (ACE) provides a unique tagging element, the {aceid} parameter, that you can use to segment results across the arms of a test. By testing variations within your existing campaigns, you can determine (with statistical significance) the impact of specific changes to your settings. There are multiple ways to test differences within your ad groups and campaigns – you can directly evaluate “in line” edits by testing specific variants in bids or settings and measuring the resulting impact ceteris paribus (all else being equal).
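To make that segmentation concrete, here is a minimal Python sketch that tallies landing-page traffic by the {aceid} parameter. Only the {aceid} parameter name itself comes from AdWords; the URL shape, other query fields, and sample values are assumptions for illustration.

```python
# Hypothetical sketch: segmenting landing-page hits by the {aceid}
# ValueTrack parameter. The log format and extra fields here are
# illustrative assumptions, not a real AdWords export.
from collections import Counter
from urllib.parse import urlparse, parse_qs

hits = [
    "https://example.com/landing?aceid=12345&kw=widgets",
    "https://example.com/landing?aceid=67890&kw=widgets",
    "https://example.com/landing?aceid=12345&kw=gadgets",
]

visits_by_aceid = Counter()
for url in hits:
    params = parse_qs(urlparse(url).query)
    # Each arm of an ACE test carries a distinct aceid value.
    for aceid in params.get("aceid", ["(none)"]):
        visits_by_aceid[aceid] += 1

print(visits_by_aceid)  # e.g. Counter({'12345': 2, '67890': 1})
```

The same grouping applied to conversion events (rather than raw visits) lets you compare conversion rates between the control and experiment arms.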

Identifying these controlled shifts in bidding strategies can help you optimize within your existing structure, while Ad Group Experiments can help you to optimize the structure itself for further improvements.

Structural Optimization by Setting up Experiments

Evaluating the structural settings of your campaigns and ad groups through experiments lets you measure the results of changes relative to a control group. Your existing ad group structure serves as the control, and you compare it against experimental groups that differ in settings such as ads (text or image, with unique creative), destination pages (set at the keyword level, so you can test dedicated landing pages against regular on-site pages), and bid types (manual CPC or CPA-optimized).
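As a rough mental model (not the AdWords interface or API), the sketch below simulates how an ACE-style experiment randomly routes each auction to either the control arm or the experiment arm. The specific settings and the 50/50 split are illustrative assumptions.

```python
# A minimal sketch of an ACE-style traffic split: each auction is
# randomly assigned to the control or experiment arm. Settings and
# the split fraction are invented for illustration.
import random

CONTROL = {"max_cpc": 0.75, "landing_page": "/landing-a"}
EXPERIMENT = {"max_cpc": 0.90, "landing_page": "/landing-b"}
SPLIT = 0.5  # fraction of auctions routed to the experiment arm

def pick_arm(rng):
    """Assign one auction to an arm, as the experiment split does."""
    if rng.random() < SPLIT:
        return "experiment", EXPERIMENT
    return "control", CONTROL

rng = random.Random(42)
counts = {"control": 0, "experiment": 0}
for _ in range(10_000):
    arm, _settings = pick_arm(rng)
    counts[arm] += 1

print(counts)  # roughly a 5,000 / 5,000 split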

Maintaining a control group helps ensure you collect enough data to analyze the results with a high level of confidence. The full list of options you can test includes adding new keywords, ad groups, match types, and dynamic ads within the search network, along with additional targeting options and creative on the display network. Ideally, a test should be defined in terms of increasing conversions (as defined in your conversion tracking or imported from Google Analytics).
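To get a feel for what “enough data” means, here is a back-of-the-envelope Python sketch using the standard two-proportion sample-size approximation. The baseline conversion rate, the lift, and the 95%-confidence / 80%-power targets are all assumptions for illustration.

```python
# Rough sample-size sketch: clicks needed per arm to detect a given
# lift in conversion rate (normal approximation, two proportions).
from math import sqrt, ceil

def clicks_per_arm(p_control, p_test, z_alpha=1.96, z_beta=0.84):
    """z_alpha ~ 95% two-sided confidence, z_beta ~ 80% power."""
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    effect = p_test - p_control
    return ceil(((z_alpha + z_beta) ** 2 * variance) / effect ** 2)

# e.g. detecting a lift from a 2.0% to a 2.5% conversion rate
print(clicks_per_arm(0.020, 0.025))  # ~13,791 clicks in each arm
```

Small lifts on low baseline rates require a lot of traffic, which is exactly why a properly sized control group and a full test period matter.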

Interpreting the Results of AdWords Experiments

During an AdWords split experiment, you can evaluate the results for each individual element along with the statistical significance of the data. AdWords reports confidence based on the collected sample and displays it graphically with arrows: one arrow means the results show 95% confidence, two arrows indicate 99% confidence, and three arrows indicate 99.9% confidence.
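Under the hood this is standard proportion testing. The sketch below shows one plausible version: a two-proportion z-test on conversion rate, with the |z| statistic mapped to the 95% / 99% / 99.9% tiers the arrows represent. The click and conversion counts are invented, and the exact test AdWords runs internally may differ.

```python
# A minimal sketch of the kind of test behind the arrow display:
# two-proportion z-test on conversion rate, mapped to confidence tiers.
from math import sqrt

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def arrows(z):
    """Map |z| to the one/two/three-arrow confidence tiers."""
    tiers = [(3.291, "three arrows (99.9%)"),
             (2.576, "two arrows (99%)"),
             (1.960, "one arrow (95%)")]
    for cutoff, label in tiers:
        if abs(z) >= cutoff:
            return label
    return "no significant difference"

z = z_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2), arrows(z))  # 2.83 two arrows (99%)
```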

For each metric, AdWords reports a confidence interval: allowing your campaign to run for a full, comparable time period ensures you can evaluate a complete set of comparative metrics. Giving your experiment enough time to control for transient effects (such as the quality score of new ads or keywords settling) delivers a higher level of confidence in the results. After a test is complete, you can decide whether to apply the changes from the test or remove them and return to your control setup. Running these experiments continually helps you further refine your settings and optimize your campaigns over time.
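To see why run time matters, this short sketch computes a 95% confidence interval on the difference in conversion rate at several traffic volumes (all counts invented). Note how the interval only excludes zero, and therefore supports a confident call, once enough clicks have accumulated.

```python
# Sketch: 95% confidence interval on the difference in conversion
# rate, narrowing as more clicks accumulate. Counts are invented.
from math import sqrt

def diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    d = p_b - p_a
    return d - z * se, d + z * se

for n in (1_000, 10_000, 100_000):
    lo, hi = diff_ci(int(0.02 * n), n, int(0.026 * n), n)
    print(f"n={n:>7}: [{lo:+.4f}, {hi:+.4f}]")
# n=   1000: interval straddles zero -> keep the test running
# n=  10000 and up: interval excludes zero -> a confident result
```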
