Written by: Kali Wyrosdic

Have you ever tasked your team with auditing a website, only to have them run it through a crawler and find that the site consists of a single page? That's exactly what we ran into recently with one of our client websites. The client’s site was robust, to say the least (at least to the end user viewing it in a browser), so when good ol’ Screaming Frog returned just 5 URLs, we knew something was up. It turned out that our client’s site was built in Ajax. Now, if (more like when) this happens to you in the future and your tech team says a site is uncrawlable, you can direct them to this post! You’ll never have to consider an Ajax website uncrawlable again.

Why Does This Matter?

Because up until recently, Google was unable to read and render JavaScript and Ajax applications. Historically, SEOs would recommend rebuilding these pages with less dynamic structures, or webmasters were required to provide “progressive enhancement” via onsite Hijax links to make their Ajax applications crawlable. That all changed in October 2015, when Google officially began to read and crawl escaped fragments – Google’s definition. Yet, as of the end of 2015, there is still a gap between what Google says it does and what it actually does. These pages may be rendered, but they’re not being treated the same as static HTML pages yet.

More On Google’s Recently Deprecated Recommendation…

(Skip this section if you’re already caught up.) The ca. 2009 recommendations have since been deprecated by Google, but if your team followed them, don’t worry – the approach is still backwards compatible. Otherwise, they should follow Google’s most recent recommendation and simply unblock Googlebot from crawling any JavaScript and CSS files.


It’s all well and good that Google can read and crawl escaped fragments, but many popular crawlers, including Screaming Frog and SEMrush, still cannot. Screaming Frog did add a feature to crawl Ajax, but it only works if a site is configured to follow Google’s (now deprecated) recommendations. So if your team is already using something like Prerender.io on your site or your clients’ sites, there shouldn’t be any problem using Screaming Frog to crawl an Ajax application. The way these tools work is by creating cached HTML versions of Ajax applications and serving them to Googlebot when it triggers an escaped fragment.
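If you’re curious what “triggering an escaped fragment” actually looks like, here’s a minimal sketch (in Python, using a placeholder domain) of how a hash-bang URL maps to the ?_escaped_fragment_= URL that Googlebot or a prerender service would request under the old scheme:

```python
# A minimal illustration of the (now-deprecated) AJAX crawling scheme:
# a hash-bang (#!) URL is mapped to its "escaped fragment" equivalent,
# which is the URL a crawler or prerender service would fetch to get an
# HTML snapshot. The domain below is just a placeholder.

def to_escaped_fragment(url: str) -> str:
    """Convert a #! (hash-bang) URL to its ?_escaped_fragment_= form."""
    if "#!" not in url:
        return url
    base, fragment = url.split("#!", 1)
    separator = "&" if "?" in base else "?"
    return f"{base}{separator}_escaped_fragment_={fragment}"

print(to_escaped_fragment("https://www.example.com/#!products/widgets"))
# -> https://www.example.com/?_escaped_fragment_=products/widgets
```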

 

Using something like Prerender isn’t terrible; it just serves Google cached pages, which can mean outdated versions of rendered pages get served if the cache isn’t kept fresh. Not to mention it costs money and has to be installed and implemented. That’s a lot of work just to make something crawlable, especially if it involves training. So when we discovered our client’s site was built entirely in Ajax, we knew we had to come up with a solution to crawl and audit it. Here are the steps we took.

Use Google Analytics, XML Sitemaps & Search Console to Audit Ajax Sites

  1. Since Screaming Frog can’t crawl the site, you’ll need to pull URLs from the XML Sitemap(s), which are referenced in the site’s robots.txt. (If you’d rather script these steps, there’s a rough sketch after this list.)
  2. Copy and paste the XML Sitemap from the robots.txt into your browser.
  3. Right-click and Save As, or choose File > Save As, and save the file as .html. You can just change the extension from .xml to .html and click Save.
  4. Then, open the saved HTML file in Excel as a read-only workbook. You should now have a column of URLs.
  5. Depending on how many XML Sitemaps exist, you’ll need to repeat the process for each and combine them into one list when finished.*
  6. If you did not have to pull URLs from Google Analytics or Search Console, take your list of URLs, run them through Screaming Frog in List Mode, and audit away!

* If you notice anything funky with the XML Sitemaps, like a Sitemap is missing URLs that you absolutely know exist onsite, you’ll need to gather a list of URLs from Google Analytics and Search Console in addition to the Sitemaps before continuing. 
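For teams who prefer a script over clicking through Excel, here’s a rough sketch of steps 1–5. It assumes the requests library is installed, that the sitemaps are plain <urlset> files (not nested sitemap indexes), and uses a placeholder domain:

```python
# Rough sketch: pull the Sitemap lines out of robots.txt, collect every
# <loc> URL, de-dupe, and write one URL per line for Screaming Frog's
# List Mode. "example.com" is a placeholder for your own domain.
import requests
import xml.etree.ElementTree as ET

DOMAIN = "https://www.example.com"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls_from_robots(domain):
    robots = requests.get(f"{domain}/robots.txt").text
    return [line.split(":", 1)[1].strip()
            for line in robots.splitlines()
            if line.lower().startswith("sitemap:")]

def urls_from_sitemap(sitemap_url):
    root = ET.fromstring(requests.get(sitemap_url).content)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

all_urls = set()
for sitemap in sitemap_urls_from_robots(DOMAIN):
    all_urls.update(urls_from_sitemap(sitemap))

with open("sitemap-urls.txt", "w") as f:
    f.write("\n".join(sorted(all_urls)))
```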

To Pull URLs from Google Analytics:

  1. Log in to Google Analytics–>Behavior–>All Pages–>segment by Organic Traffic.
  2. Make sure your dashboard is set to capture data from the longest timeframe possible.
  3. Export the URLs. You can only export 5000 at a time manually, or you can use the API to export all.
  4. If the export only gives you the URIs, use Concatenate to add the domain to each so you have a list of full URLs again (or use the short script after this list).
  5. Now add these URLs to your list of URLs from the XML Sitemaps and de-dupe.
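If you’d rather skip Excel’s CONCATENATE, step 4 is a few lines of script. This sketch assumes your export is a plain text file of URI paths (e.g. “/about/”) and uses placeholder filenames and a placeholder domain:

```python
# Sketch of step 4: prepend the domain to exported URI paths so you end
# up with full URLs again. Filenames and domain are placeholders.
DOMAIN = "https://www.example.com"

with open("ga-page-paths.txt") as f:
    paths = [line.strip() for line in f if line.strip()]

full_urls = [DOMAIN + path if path.startswith("/") else path for path in paths]

with open("ga-urls.txt", "w") as f:
    f.write("\n".join(full_urls))
```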

To Pull URLs from Search Console: 

  1. Log in to Search Console and select Search Traffic–>Search Analytics. Set the date range to capture the largest sample possible and select “Pages.”
  2. Scroll down and select “Show 500 rows.” Search Console will only export what’s visible. Click Download.
  3. If you’re only given the URIs, use Concatenate to add the domain back to each so you have a list of full URLs again.
  4. Now add these URLs to your list of URLs from XML Sitemaps and Google Analytics.
  5. De-dupe your entire list.
  6. Now that you’ve compiled a list of URLs from Google Analytics, Search Console, and any XML Sitemaps, you can run it through Screaming Frog in List Mode (a scripted version of the merge and de-dupe is sketched below).
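And here’s a small sketch of steps 4–6: merge the three exports, de-dupe them, and write a single file you can hand straight to Screaming Frog’s List Mode. The filenames are placeholders for whatever you exported above:

```python
# Sketch: combine the Sitemap, Google Analytics, and Search Console URL
# lists, de-dupe, and write one file for Screaming Frog's List Mode.
combined = set()
for filename in ("sitemap-urls.txt", "ga-urls.txt", "gsc-urls.txt"):
    with open(filename) as f:
        combined.update(line.strip() for line in f if line.strip())

with open("all-urls-deduped.txt", "w") as f:
    f.write("\n".join(sorted(combined)))

print(f"{len(combined)} unique URLs ready for Screaming Frog List Mode")
```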

The Results

Using this method, we discovered that a whopping 26% of the client’s top 500 pages (1,300 pages total) were faulty 301 redirects requiring immediate attention, among other findings. If you have a client in the same scenario, this is a process worth running through.

Coming Soon: A Custom Crawler From Greenlane Labs

Our dev team is building a custom crawler and site audit tool for Ajax-based websites to use for our enterprise-level clients, and we’re aiming to release it as a free tool in the near future! Our custom crawler will analyze the post-rendered Ajax pages, spidering links and auditing the rendered HTML after the entire page is loaded. Watch this space!
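To give you a flavor of the idea (this isn’t the Greenlane Labs tool itself), a post-render crawl boils down to loading a page in a headless browser, letting the JavaScript execute, and then reading links out of the rendered DOM. Here’s a minimal sketch, assuming Selenium and Chrome are installed and using a placeholder URL:

```python
# Minimal sketch of post-render link extraction: load the page in a
# headless browser so the JavaScript runs, then collect links from the
# rendered DOM rather than the raw HTML the server sent. Assumes the
# selenium package plus a Chrome/chromedriver install; the URL is a
# placeholder.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless")

driver = webdriver.Chrome(options=options)
driver.get("https://www.example.com/")

# driver.page_source now reflects the post-rendered DOM, so links that
# Ajax injected into the page are visible to us.
links = {a.get_attribute("href")
         for a in driver.find_elements(By.TAG_NAME, "a")
         if a.get_attribute("href")}

driver.quit()
print(f"Found {len(links)} rendered links on the page")
```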


Moral of the Story

Sweet, sweet freedom! Your team’s hands aren’t tied anymore when it comes to crawling and auditing an Ajax application, and you never have to settle for the excuse of “can’t crawl it, built in Ajax,” ever again. Now you have a solid process to find, export, and crawl the URLs of a site built in Ajax. Don’t forget to check back often (or sign up for our newsletter by clicking “Sign Up” at the bottom of this page) for news about the release of our *free* Custom Crawler and Site Audit Tool for Ajax-based Websites.

Happy auditing!

Do you have your own method of crawling or auditing sites that have been built in Ajax? Share it with us in the comments.

  • Madeline

    hi,
    is your tool coming soon ?
    I’m very interested in testing it.

    • Hi Madeline, thanks for your interest! We’re still in the internal testing phases, but will keep you posted as to when it’s released. If you want to be kept in the loop, you should sign up for our newsletter!

  • Will Bligh

    Hi Greenlane Labs, where are you up to with your crawler that can execute SPA js? Would love to see it soon!
