How To Check To See If Blocked Pages Are Indexed


You put a robots.txt file on your site expecting it to keep Google out of certain pages. But you worry – did you do it correctly? Is Google following it? Is the index as tight as it could be?

Here’s a question for you. If you have a page blocked by robots.txt, will Google put it in the index? If you answered no, you’re incorrect. Google will indeed index a page blocked by robots.txt if it’s linked from one of your own pages (one without a rel=”nofollow”), or if it’s linked from another website. It doesn’t usually rank well, because Google can’t see what’s on the page, but PageRank still flows through it. What a waste! Google will probably give you a snippet like this:

[Screenshot: a search result whose snippet reads “A description for this result is not available because of this site’s robots.txt”]
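The root cause is that robots.txt controls crawling, not indexing. Here’s a generic sketch of the directive in question (a made-up example, not any real site’s file):

```text
# robots.txt: keeps Googlebot from *crawling* anything under /private/,
# but a /private/ URL that earns links can still end up *indexed*
User-agent: *
Disallow: /private/
```

If you want a page out of the index entirely, it has to be crawlable and carry a meta noindex tag (<meta name="robots" content="noindex">); Google can’t obey a noindex it’s blocked from fetching.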

2 Ways To Check For Indexed Pages You Thought Were Blocked

Don’t worry – I have a couple of relatively painless ways to check your own indexation.

Using Google.com (More Manual)

Visit your robots.txt file and look at the blocked directories. Let’s use Toys R Us as an example:

  1. Check http://www.toysrus.com/robots.txt
  2. Take the first blocked directory (as of 3/22/2015, it’s /search/)
  3. Query Google.com using site:toysrus.com inurl:/search/ (this will attempt to find any URL that has /search/ on the toysrus.com site)
  4. Take note of any listings stating “A description for this result is not available because of this site’s robots.txt”
  5. Repeat with all the other blocked directories
  6. Find all linking pages and determine your best course of action (e.g., a nofollow attribute, a meta noindex, or the “Remove URLs” tool in Google Webmaster Tools)
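If you’d rather script steps 1 through 3, here’s a minimal sketch in Python. The robots.txt body below is a made-up stand-in (in practice you’d fetch your own site’s /robots.txt):

```python
# Turn a robots.txt body into one Google "site:" query per blocked directory.
# SAMPLE_ROBOTS_TXT is a stand-in for illustration, not any site's real file.
SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /search/
Disallow: /checkout/
"""

def blocked_directories(robots_txt):
    """Collect the path from every non-empty Disallow line."""
    paths = []
    for line in robots_txt.splitlines():
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:  # a bare "Disallow:" blocks nothing
                paths.append(path)
    return paths

def site_queries(domain, robots_txt):
    """Build the step-3 query for each blocked directory."""
    return [f"site:{domain} inurl:{path}"
            for path in blocked_directories(robots_txt)]

print(site_queries("example.com", SAMPLE_ROBOTS_TXT))
# → ['site:example.com inurl:/search/', 'site:example.com inurl:/checkout/']
```

From there, you’d paste each query into Google.com and eyeball the results, exactly as in steps 4 through 6.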

In some cases, this trick returns noisy results. If you tried the example above, you probably didn’t find blocked URLs until around page 5, where Google notes, “In order to show you the most relevant results, we have omitted some entries very similar to the 47 already displayed. If you like, you can repeat the search with the omitted results included.” Since Toys R Us also uses “search” as a URL parameter, Google surfaces those pages too. That is not the “search” we’re looking for.
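One way to cut through that noise is to test each result URL against the robots.txt rules themselves. Python’s standard urllib.robotparser does the matching for you (a sketch with a hypothetical rule; note the stdlib parser follows the original robots.txt spec and doesn’t understand Google’s wildcard extensions):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly, without fetching anything over the network.
# The single Disallow rule here is hypothetical, for illustration.
parser = RobotFileParser()
parser.parse("User-agent: *\nDisallow: /search/".splitlines())

# A URL under the /search/ path is genuinely blocked...
print(parser.can_fetch("*", "http://example.com/search/red-wagon"))  # → False

# ...but "search" as a mere query parameter is not blocked — that's the noise.
print(parser.can_fetch("*", "http://example.com/wagons?search=red"))  # → True
```

Any URL where can_fetch returns True is just parameter noise; the ones returning False are the blocked-but-indexed pages worth chasing down.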

Summary

SERPitude is a great tool for this purpose as well, and it’s solid for understanding what the SERPs are showing your searchers. Now that you’ve identified your blocked pages, the real fun is tracking them down, deindexing them, and plugging the links with a rel=”nofollow” (or a meta noindex on the target page). Go to it, Sherlock!


Bill Sebald

Managing Partner

I've been doing SEO since 1996. Blogger, speaker, and occasional instructor at Drexel and Philadelphia University. I started Greenlane in 2005 to help clients leverage search marketing to hit business goals. I love this stuff.
