SEO: Click Through Rate and Bounce Rate

I’m going to take issue with Rand Fishkin of SEOmoz. I think his most recent Whiteboard Friday video is just plain wrong. Normally, I have a lot of respect for what SEOmoz does, but I think the advice and its implications are not just wrong, but dangerously wrong.

How Does Google Rank Results?

I don’t know all the details. Rand doesn’t know all the details. Some guys at Google know a lot of the factors. Matt Cutts, Google’s head of the search quality team, claims over 200 factors go into ranking.

What we do know is that backlinks – links that Google regards as credible and likely to be followed by a search user – are important. We know that anchor text is important. There are other factors that we know influence Google ranking.

What *else* do we know?

We (professional search engine optimisation people) know that on-page content is valuable. For low competition keywords – keywords where there aren’t many links or much anchor text, and hardly anyone searches – page content alone is enough. Look at the example in the graphic below. There’s precisely one page on the internet with that text – a phrase that, when I wrote it, couldn’t be found on Google. That was true at the time; if you search now, you’ll find that page. Well, until some spoiler copies it elsewhere…

However, try putting the phrase “bad credit loan” on a page on a new web site with some other relevant and unique content, valuable to a user, and see how high you rank for the term. You can wait. And wait. And wait. You’re not going to show up on the first page of results just by having a great page. It’s not just the content, it’s the backlinks that make the difference.

So we now know, as a result of this test, that while Google does pay attention to on-page factors, they also pay attention to backlinks. And in competitive spaces, *effective* backlinks count for more than the page content.

The important message to understand from this is that different factors apply under different conditions. Content alone won’t put you on page one. Backlinks alone won’t keep you there.

Click Through Rate and Bounce Rate

So, at some scale, do CTR (Click Through Rate) and Bounce Rate make any difference? I believe they do, and this blog is a testament to that. Look at this screenshot.

That’s a Google Analytics shot of the last 15 months’ activity for a specific page on the Merjis blog. It’s all about “gclid” – something you’ll probably care about if you do paid search and look in web server logfiles.
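If you want to see which landing pages your paid-search clicks arrive at, one option is to pull the gclid parameter straight out of your access logs. Here’s a minimal sketch of that idea – the log format, filename and field layout are my own assumptions for illustration, not anything specific to this blog:

```python
# Minimal sketch: count paid-search landings per page by looking for the gclid
# query parameter in a web server access log. Assumes a common/combined log
# format where the request appears as "GET /page?gclid=XYZ HTTP/1.1".
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

REQUEST_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+"')

def gclid_counts(logfile_path):
    """Return a Counter of paid-search hits keyed by landing-page path."""
    counts = Counter()
    with open(logfile_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = REQUEST_RE.search(line)
            if not match:
                continue
            parsed = urlparse(match.group("path"))
            if "gclid" in parse_qs(parsed.query):   # present only on paid-search clicks
                counts[parsed.path] += 1
    return counts

if __name__ == "__main__":
    for path, hits in gclid_counts("access.log").most_common(10):
        print(f"{hits:6d}  {path}")
```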

I’m using this blog as an example, because I’ve been using it for tests for years – I know how it works, and it isn’t confidential client data. I can reveal the usage, because I have my own reasons for running a blog, and few of them have anything directly to do with making money.

Most other pages on this site get a profile like this other example:

This is pretty typical for a “newsy” blog article. Usage on the day that it is written, and a dribble thereafter. It then usually dries up after a few weeks, because the rank has decayed with time.

So why, with a higher bounce rate, does the older article do better than the newer article in rankings? If Bounce Rate is important, then surely the lower bounce rate in a newer article must mean that Google should drop the older article?

I suspect that Google doesn’t have a rigid number. They look at how well you do relative to other sites. And especially, they look to see whether search users search again for the same or very similar searches. Read that article on SideWiki, and it’s lightweight. No real information. No real recommendations. The long lived article on gclid has a much higher bounce rate and longer reading time. It’s the reading time that’s the clue. When you’ve read my article on gclid, you probably don’t want to read another article about gclid. It’s reasonably definitive.
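For what it’s worth, here is roughly how those two proxy measurements can be derived – this is my own illustrative calculation from hypothetical per-session pageview records, not how Google or Google Analytics actually computes them:

```python
# Illustrative only: bounce rate and an approximation of reading time for one page,
# from hypothetical (session_id, unix_timestamp, path) pageview records.
from collections import defaultdict

def page_metrics(pageviews, page):
    """Bounce rate = single-pageview sessions landing on `page` / all such landings.
    Reading time is approximated by the gap to the next pageview in the session."""
    sessions = defaultdict(list)
    for session_id, ts, path in pageviews:
        sessions[session_id].append((ts, path))

    landings = bounces = 0
    read_times = []
    for views in sessions.values():
        views.sort()                         # order the session by time
        if views[0][1] != page:              # only sessions that landed on this page
            continue
        landings += 1
        if len(views) == 1:
            bounces += 1                     # a bounce: no second pageview to measure from
        else:
            read_times.append(views[1][0] - views[0][0])

    bounce_rate = bounces / landings if landings else 0.0
    avg_read_time = sum(read_times) / len(read_times) if read_times else None
    return bounce_rate, avg_read_time
```

Note the limitation that falls out of this: a bounced visit leaves no second pageview to measure reading time from, which is part of why bounce rate on its own is such a blunt signal.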

Google sustains that old article in search results, despite its great age, and despite a high bounce rate, because those users who do read it value it. It’s there because it helps Google to deliver a page of search results that users value more than they would *without* that article present.

Uh – You Didn’t Mention CTR

Again, I don’t think it is actually CTR that Google is looking for. It is user satisfaction. So a high CTR, caused by a misleading piece of copy, won’t help. You have to deliver what you offer. Again, I don’t think that Google is measuring conversion, either. But a high CTR message with a high conversion rate, meaning that users are highly satisfied – that’s what Google wants you to make.

You won’t be directly rewarded for high CTR – but you can measure it (especially if you also run PPC and can get the impression rate). You won’t be rewarded directly by Google for high conversion rates. But Google does appear to prefer sites that answer the question posed by the search query. And the proxy that can be used by Webmasters, who don’t have access to Google’s richer data, is their own performance, as CTR and Conversion Rate. Increase those, and you are more likely to increase position.
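The arithmetic for those two proxies is trivial; the hard part is getting a trustworthy impression count, which is why running PPC alongside helps. A quick sketch, with made-up numbers:

```python
# CTR and conversion rate from figures a webmaster can see. The impression count
# is assumed to come from a matching PPC campaign; all numbers are made up.
def ctr(clicks, impressions):
    return clicks / impressions if impressions else 0.0

def conversion_rate(conversions, clicks):
    return conversions / clicks if clicks else 0.0

impressions, clicks, conversions = 12_000, 480, 36      # hypothetical figures
print(f"CTR: {ctr(clicks, impressions):.1%}")                           # 4.0%
print(f"Conversion rate: {conversion_rate(conversions, clicks):.1%}")   # 7.5%
```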

Interaction of Factors

If you have a good site, with highly relevant content, you tend to get more links. So disentangling backlinks from the immeasurable relative user satisfaction is difficult. Pretty much the only way I know it can be done is when you have web sites with accidental misbehaviours that create the right conditions for a test. The technical problems that create those conditions are rare – and recreating them in a real website is likely to decrease its performance. It’s unlikely that anyone will give you the opportunity to mess up their site just to prove what works.

However, if you want to go about it… Here’s what I think you’ll need:

  • A visibly horrible page, with a low conversion – as your starting point
  • Weak Title and Meta Description as a starting point
  • A lot of visitors per day – it takes a long time to demonstrate, otherwise
  • The ability to make sitewide link changes to the page under test
  • Good backlinks – you’ll want to know that you *could* rank well on page one

Here’s the procedure:

  • Change the URL for your horrible page, sitewide. Wait for Google to find it and rank it again, and note the position. Watch the position fall over a period of a week or two (depending on visitor volume).
  • Improve the page and switch the URL again. Wait for Google to find and rank it, then watch the rankings change and note which way they go.
  • Revert the page and switch URLs again, this time changing the Title and Meta Description. Watch the ranking changes.
  • Fix up the page again, switch the URL once more, and watch.
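To keep the results honest, you’ll want a record of position over time, tagged with the phase of the experiment. Here’s a sketch of that bookkeeping – the phase names, the CSV layout and the seven-day settling period are my own assumptions, and how you obtain each day’s rank (rank tracker, manual check) is up to you:

```python
# Sketch of the experiment bookkeeping: average the rank per phase, ignoring the
# first few days of each phase while Google re-finds and re-ranks the new URL.
# rank_log.csv rows (hypothetical): date,phase,position
# Assumes rows are listed in date order within each phase.
import csv
from collections import defaultdict
from statistics import mean

def settled_position(observations_csv, settle_days=7):
    by_phase = defaultdict(list)
    with open(observations_csv, newline="") as f:
        for row in csv.DictReader(f):
            by_phase[row["phase"]].append(float(row["position"]))
    return {
        phase: mean(positions[settle_days:])      # drop the initial churn
        for phase, positions in by_phase.items()
        if len(positions) > settle_days
    }

if __name__ == "__main__":
    for phase, position in settled_position("rank_log.csv").items():
        print(f"{phase:32s} average position {position:5.1f}")
```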

You should, IME, find that you achieve a higher long term position when you have a better title and description, and a higher converting page with a lower bounce rate. If you can explain why you *shouldn’t* get a higher position with a site that is better for users, I’d love to know the reasons. But don’t make your explanation involve “gaming” the system.

And, FWIW, I don’t believe that the Title and Description are important as direct factors for SEO. You can rank perfectly well for keyword-free, pointless titles, and descriptions without keywords that are positively turgid and rambling. However, show the user that you are focused on solving their problem, and your CTR increases; and if you are focused on the user, you’ll probably have a reasonable landing page, which will engage and convert better. Google’s not going to reward you for a better snippet directly, but for a better user experience. Your only measures, though, will be what you can observe – CTR, Bounces, Conversions. If I could tell you to look at the “re-query rate”, I’d tell you to do so – instead, you’ll have to use the information you can get.

Implications For SEO

If a blog article can decay to little traffic in a few weeks, or sustain rankings for years, on the same blog, with the same blogging software, then the difference must be backlinks? Well, not substantially. Over the years, I’ve had more backlinks to newsy stories, but still this “gclid” article keeps ranking. And all the time, the other lighter weight articles just keep falling out of the listings.

There are a few other similar articles on this blog that rank, and stay high for years and years and years. Non-competitive searches, but of long lasting traffic value. And the other sites that I’m competing with, for attention, are large forums. High weight. Much more frequently updated content. I’m *deliberately* not trying to place links for articles. Just letting what happens, happen – so I can understand why it happens. So there are no contamination effects here from deliberate link placements.

What are the articles? They all tend to be like that gclid article. Something that is detailed, informative, and means that you can go away and do something. Useful articles, in other words. Harder to write than “straight news” articles, as you need unique content, written to address the audience. That’s part of my reason for writing – attempting to develop clearer communication.

The clear implication is, I think, that useful content matters. And how do we know it is useful? It’ll show up in search engine rankings, usability data and other disturbingly hidden and arcane resources. Google will reward useful content with a better sustained rank – but won’t put you on page one just because you have a great article, unless you have some backlinks to create credibility.

But How?

Rand makes the point that usage data can be gamed. But so can backlinks. That’s what undeclared paid backlinks, small-world link building, and other “black hat” techniques are for. We know that Google sees through most black hat techniques, given time.

We also know, or can find out about, Google’s interest in invalid impressions and invalid clicks. For example, invalid impressions are generated when search engine ranking tools are run – they reduce the effective CTR. Invalid clicks are generated when users double click, or are paid to click. Just as with paid search, these two types of invalid activity are measurable by Google. In fact, Google can measure a lot more than a webmaster can see.
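Even a webmaster can apply a crude version of this sort of filtering to their own click data. This is an illustration only – Google’s real invalid-click detection is far richer – and the click record format and the 30-second window are assumptions made for the example:

```python
# Crude illustration of invalid-click filtering: collapse repeat clicks from the
# same visitor on the same target within a short window, as a double click would be.
def deduplicate_clicks(clicks, window_seconds=30):
    """clicks: iterable of (visitor_id, target_url, unix_timestamp) tuples."""
    last_seen = {}
    valid = []
    for visitor_id, url, ts in sorted(clicks, key=lambda c: c[2]):   # process in time order
        key = (visitor_id, url)
        if key in last_seen and ts - last_seen[key] < window_seconds:
            continue                # likely a double click or rapid repeat: don't count it
        last_seen[key] = ts
        valid.append((visitor_id, url, ts))
    return valid
```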

We webmasters only get to see bounce rates and conversions. Google gets to look at whether users search again. Much more valuable. If you want to build the world’s best search engine, then you want to feature the results that tell you that you’ve got a winning page – pages where users don’t need to search any more. Results that have users positively selecting that site again, when they see it in listings. Webmasters just don’t have that detail, directly. We just don’t know if the other guy answers better – unless we expend effort to learn our customers’ minds and make sure we have the best answer.

User Experience

Google’s Ten Things lists, first, “Focus on the user”. The results from this blog, and from other client activities that I’m not going to reveal in any detail, are fairly clear. Content that Google can measure as being liked by users ranks better and longer than content that is spammy, tedious and weak. The factors that lead to better rankings will include appropriate Titles and Descriptions and engaging content. It has to be that way, or rule 1 is broken.

We know that Google has experience of measuring impression and click data to look for invalid activity. We know that Google is pretty good at it – or there’d be more click fraud problems with AdWords. So, if it can be done, and it is an important indication of quality, why wouldn’t Google use searchers’ behaviour to modify results, not just personally, but across the index?

Why can’t you improve the results when you click on your own listings? Because it is identical behaviour to the banned AdSense practice of clicking on adverts on your own site. Detectable. Invalid. Not counted. And for reasons that I don’t want to go into, I believe the same will be true of botnets and eLance and Mechanical Turk attacks. There will be a signature associated with them that doesn’t match normal user behaviour. The signatures can be spotted and countered by marking the activity as invalid – just as happens in AdWords. Since AdWords continues to run without being infested with click fraud to unusable levels, we have a working system, on a global scale, that shows that real user behaviour can be extracted from noisy fraudulent behaviour.

It isn’t perfect, true, but it separates AdWords from being a system that solely acts to transfer advertising funds to thieves, into a system that, more often than not, delivers prospective buyers to an advertiser’s site. It isn’t perfect, but it works well enough. AdWords only works because it identifies and categorises user behaviour.

User behaviour categorisation works in one system that Google has, worldwide, on a service with measurable economic value. Why wouldn’t it be usable in organic search results?

Conclusions

Failing to identify and understand user interests is an SEO mistake. These are reflected by (but are not completely explained by) CTR and Bounce Rates – because that’s about the best that Webmasters can get. Google doesn’t have to use those – they have better numbers that are more meaningful to user experience. But saying that “Google doesn’t use bounce rates” is not the same as saying “Google doesn’t take account of user behaviour”.

Unlike Rand, I believe that Google cares very deeply about the user experience, and that Google has very sophisticated technology, probably shared with the Google AdWords guys, to identify unusual search behaviours and exclude them from consideration.

Given enough data, probably gained from multivariate testing on all the different data centres, Google can identify whether users are more, or less, satisfied by different ordering in search results than a pure backlinks-plus-content model would give.

Small scale tests probably won’t show anything about user interaction – because the activity doesn’t have statistical significance or because the signature of strange search activity is too obvious. So, don’t try faking it – if you’ve read this far, you probably aren’t smart enough to outwit Google’s teams of click-fraud defence guys. They are really pretty good, as anyone with a rational assessment of AdWords click fraud levels will tell you. Not perfect, but good enough to make the effort of using AdWords worthwhile, rather than primarily a way of siphoning your advertising funds to fraudsters. 🙂

Why do I say “if you’ve read this far”? Because if you really knew how to hide click streams, you’d be doing it with AdSense. And you’d have stopped reading at that point – because you own the game already. If you can’t own that game, you can’t own the game of spoofing user behaviour in organic search – it is (not identical to, but close enough to) the same game. At the moment I don’t understand why you’d bother with SEO behavioural spoofing, if you’d gamed AdSense, because the revenue is a lot more direct… Maybe that’s why Rand hasn’t spoken with any black hatters that have cracked it?

And if Google can detect unusual impression and click data, then they can fulfil their primary mission, with respect to modifying organic rank based on real user data about preferences and satisfaction.
