In Analytics, where I spend much of my time, there’s a constant drive to test: layout, content, images, form fields, pretty much everything on a page. We do this to continually improve the experience. Common examples include refining how a conversion funnel functions, determining the best way to present an article, finding the ideal length for videos, or even the optimal number of slides in a slideshow before users get bored. And because Google is always looking to give users the best experience possible, conflicts can occasionally arise between the work of analytics and CRO teams and the needs of SEO.
Consider the following example from an SEO’s perspective:
Background: A key page on the site has been driving conversions exceptionally well. The obvious difference between this page and a lesser-performing page is that this page has a lot more content.
Action: Add more content to the lesser-performing page, bringing it to a similar length.
Result: Instead of increased conversions, nothing happens. The page isn’t improving. It’s not doing anything at all.
Why?: Unbeknownst to the SEO, the Analytics team had revamped the page to accommodate the new form layout they were testing, stripping out most of the new content in the process.
It’s typically the case that the Analytics and SEO teams are unaware of the impacts that changes in one area may be having on another.
Here’s an example Analytics teams may be familiar with:
Background: A Variation test page has gone live, and is doing well. It’s by far beating the (boring) Control. Everyone is happy about the Variation’s performance and decides to use that page in place of the Control page.
Action: The Control page URL (site.com/Control) is pulled down, and the links on the site are updated to go to site.com/Variation.
Result: The great new page lost link equity and dropped in rankings, while the retired Control URL started returning 404 errors.
Why?: If the original (Control) page could not be updated to reflect the Variation, a 301 redirect from the Control to the Variation should have been put into place. This would help preserve most of the existing link equity and existing ranking.
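The exact mechanics depend on the stack, but the idea can be sketched in a few lines. Here is a minimal illustration in Python, using the hypothetical /Control and /Variation paths from the example; a real site would configure this in its web server or CMS rather than in application code:

```python
# Minimal sketch of a permanent-redirect map, using the hypothetical
# /Control and /Variation paths from the example above.
REDIRECTS = {
    "/Control": "/Variation",  # retired Control page -> winning Variation
}

def resolve(path: str) -> tuple[int, str]:
    """Return (status, location): 301 for retired pages, 200 otherwise."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 200, path
```

With the 301 in place, visitors and link equity flowing to the old Control URL are carried over to the Variation instead of dead-ending in a 404.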
So should we accept our fate as two siloed fields that can never seem to line up our skills and knowledge? Not a chance. Here are some things everyone should be mindful of while testing.
SEO-Minded Tips for Analytics Testers
Maintain the spirit of the page across all testing variations
Avoid the feeling of a bait-and-switch by preserving the intent of a page – delivering an article, soliciting a form completion, eliciting a cart add, etc. Remember, even multivariate testing maintains the premise of the original page. If Google sees that the variation page(s) are vastly different from the original, it could be considered cloaking. For clarity, we are using Wikipedia’s definition of cloaking: “Cloaking is a search engine optimization (SEO) technique in which the content presented to the search engine spider is different from that presented to the user’s browser.” Typical, standard use of testing tools is not considered cloaking by Google, but serving multiple very different pages, particularly across different domains, even as part of a legitimate test, may give Google pause.
Use the rel-canonical tag
Help Google’s bots understand priorities by using a canonical tag to tell them which page is the original or main version. As with cloaking, typical use of testing tools is usually fine with Google. However, multiple similar pages spread across different URLs and/or domains, without a canonical tag pointing back to the original, could look like duplication or deception in Google’s eyes.
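The tag itself is a single line in the page’s head. As a rough sketch of how one might verify a variation page carries it, here is a check built on Python’s standard-library html.parser; the site.com/Control URL is a made-up example:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

# Hypothetical markup for a variation page pointing back to the original.
variation_html = '<head><link rel="canonical" href="https://site.com/Control"></head>'

finder = CanonicalFinder()
finder.feed(variation_html)
```

A QA step like this can run against every variation URL before a test launches, confirming each one declares the original page as canonical.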
Consider load time and page speed
Most testing platforms now run their code asynchronously. Where a slowdown can happen is in how the code is triggered – front-end versus server-side. While the slowdown is typically inconsequential, for a site teetering on the line of ‘acceptable’ in site speed audits, it can be enough to push it over into ‘unacceptable’.
Use 302 redirects, not 301s
Google needs to know that the test is temporary, and a 302 (temporary) redirect signals exactly that. Without it, Google may interpret the new page as the one that should be indexed, perhaps even dropping the ‘correct’ page from the index in the process.
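As a hedged sketch of the routing logic, again in Python: the /signup and /signup-variant paths are invented for illustration, and in practice the testing platform handles this bucketing and redirection for you.

```python
def route_request(path: str, test_running: bool, in_test_bucket: bool) -> tuple[int, str]:
    """Send bucketed users to the variant with a temporary 302 while the
    test runs; everyone else keeps getting the original page."""
    if path == "/signup" and test_running and in_test_bucket:
        return 302, "/signup-variant"  # 302, not 301: the test will end
    return 200, path
```

Because the redirect is a 302, search engines keep the original URL indexed; once the test ends, the flag flips and no redirect is served at all.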
Run what’s needed, but don’t run forever
Once a test has concluded, update the site with the winning component(s) and remove the test code. Letting tests run for an unnecessarily long time can be viewed as deceptive by Google and other search engines, and they may penalize the site for it. We know some tests end in a stalemate: no matter how long they run, a statistically significant winner will never emerge. Eventually, those just need to be called.
Analytics-Minded Tips for SEO Testers
Test with purpose
You’re capable of changing consumers’ experiences with your website. Make the test worth their while as well as yours. That’s not to say never test a button color, as such tests can be beneficial, but just be cognizant of the experience you’re creating and what you’re trying to learn.
Watch for redirect overwrites
Always QA the implementation both before the test launches and again once it is live. Conditions can change between development and production (ideally they don’t, but it happens). One example involves Google Analytics: there have been cases where the site’s own hostname was appended as the referring source, overwriting the original source, so all test data looked like it came from Direct sessions.
Goal completion is key, but don’t ignore the contextual
Each test should have a purpose and a key metric used to measure success. Common metrics include form completions or checkouts. However, also consider follow-ups from the form completers and their lifetime value. For checkouts, what was the average order value? It is really about looking deeper than the surface to understand whether an upfront ‘win’ is coming at the expense of diminished performance further down the funnel.
Patience is a virtue
Calling a test too early – declaring one variation the winner or another the loser before the results are statistically valid – can cause serious flaws in the data and its subsequent interpretation. Tests will not always produce a result, and that’s OK. It’s about being patient enough to wait for the conclusion to become valid.
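“Statistically valid” here usually means something like a two-proportion z-test on the conversion rates of the two arms. A self-contained sketch using only Python’s standard library; the sample numbers in the note below are illustrative:

```python
from math import erf, sqrt

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test on conversion counts.
    Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the two arms to estimate the standard error under H0.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For instance, with 1,000 sessions per arm, a lift from 10% to 13% conversion gives p ≈ 0.036, clearing the conventional 0.05 bar; a smaller lift or sample often will not, which is exactly when patience matters.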
Don’t run overlapping tests
When running tests, it is always advised that their audiences remain separate, or that a single test be reconfigured to account for multiple variations and experiences. Otherwise, the data can become muddied: users may be exposed to multiple tests at once, and one test may inadvertently influence the performance of another.
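One common way to keep audiences separate is deterministic bucketing on a stable user ID, so each visitor is eligible for exactly one test. A sketch of the idea, with hypothetical test names:

```python
import hashlib

# Hypothetical concurrent tests; the names are made up for illustration.
TESTS = ["form_layout", "headline_copy"]

def assign_test(user_id: str) -> str:
    """Deterministically place each visitor into exactly one test's
    audience, so overlapping experiments never share participants."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return TESTS[int(digest, 16) % len(TESTS)]
```

Because the hash is deterministic, a returning visitor always lands in the same test, and no visitor can contaminate the data of another experiment.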
No matter what our primary field is, when we’re testing elements of a page, we should be legitimate, smart, and mindful at every stage. These considerations should help you build a testing plan that works toward your site’s goals, creates a positive experience for your users, and keeps you in good standing with the rest of your team.