7 Simple But Overlooked SEO Audit Tips


SEO audits. I love them and I hate them. There is nothing more gratifying than uncovering the one issue that's keeping a client's site from performing well. The downside is that audits can be very time-consuming. Everyone has their own methods and checklist for doing an SEO audit.

Sometimes we get into a rhythm and never think beyond our own checklist. Adding to your checklist is the key to an effective audit, especially as new technologies and advancements in search become commonplace. Three years ago, you had to verify that Google Authorship was set up correctly; today, not so much. If you work with a client that is having issues with stories showing up in Google News, your checklist will be different from that of a standard SEO audit.

In some cases, something very simple can cause a very large problem. I've compiled a list of the seven most common issues I've helped businesses with after their original audit was inconclusive.

What’s Your Status?

Status codes are easy to overlook. Are your 404 pages actually returning a 404 status? Are your redirects 301s? Are there multiple redirects chained together? You can use a Chrome plugin to easily see the status code and the path of redirects. I'm surprised by the number of sites I've come across that have 404 pages not returning a 404 status.
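If you want to go beyond spot-checking in the browser, a few lines of script can batch-check status codes and redirect chains. Here is a minimal sketch using Python with the requests library; the example.com URLs are placeholders for pages from your own audit list.

```python
import requests

# Placeholder URLs - swap in pages from your own crawl or audit list.
URLS = [
    "https://example.com/this-should-return-404",
    "https://example.com/old-page-that-redirects",
]

for url in URLS:
    # Follow redirects; each hop is recorded in response.history.
    response = requests.get(url, allow_redirects=True, timeout=10)
    hops = [f"{r.status_code} {r.url}" for r in response.history]
    print(f"{url} -> final {response.status_code} at {response.url}")
    if hops:
        print("  redirect chain:", " -> ".join(hops))
    if len(hops) > 1:
        print("  WARNING: multiple chained redirects")
    for r in response.history:
        if r.status_code != 301:
            print(f"  WARNING: non-301 redirect ({r.status_code}) at {r.url}")
```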

Duplicate Pages

Do you have two home pages? Having the home page accessible at two URLs is both a duplicate content and a PageRank issue, since it splits your link equity between the versions. You'll find this mostly on sites that have a "home" navigation link pointing to a separate URL. This is still a common issue, so be sure to check for it.
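A quick way to catch a split home page is to request the common home-page URL variants and see whether more than one returns its own 200 instead of redirecting to the canonical root. This is a rough sketch in Python with the requests library; the variant paths are assumptions based on common CMS setups.

```python
import requests

# Common home-page URL variants (assumptions - adjust for your CMS).
HOME_VARIANTS = [
    "https://example.com/",
    "https://example.com/index.html",
    "https://example.com/index.php",
    "https://example.com/home",
]

live_versions = []
for url in HOME_VARIANTS:
    # Don't follow redirects: a proper setup should 301 variants to the root.
    response = requests.get(url, allow_redirects=False, timeout=10)
    print(url, "->", response.status_code, response.headers.get("Location", ""))
    if response.status_code == 200:
        live_versions.append(url)

if len(live_versions) > 1:
    print("WARNING: multiple home page URLs return 200:", live_versions)
```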

Many SEO audits look for duplicate content across the web. However, are you checking for duplicate content across the site itself? Certain SEO crawlers look at duplicate title and meta description tags, but not the content itself. Yes, duplicate tags can be a sign of duplicate content, but not always.
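To check for duplicate content across the site itself rather than just duplicate tags, one rough approach is to fingerprint the visible text of each page and compare hashes. The sketch below assumes the third-party requests and beautifulsoup4 packages and a placeholder URL list; a real audit would strip navigation and footer boilerplate before hashing.

```python
import hashlib
from collections import defaultdict

import requests
from bs4 import BeautifulSoup  # third-party: beautifulsoup4

# Placeholder page list - in practice, feed in URLs from your crawl.
PAGES = [
    "https://example.com/page-a",
    "https://example.com/page-b",
]

fingerprints = defaultdict(list)
for url in PAGES:
    html = requests.get(url, timeout=10).text
    # Hash the visible text so identical body copy is flagged even when tags differ.
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
    fingerprints[digest].append(url)

for digest, urls in fingerprints.items():
    if len(urls) > 1:
        print("Possible duplicate content:", urls)
```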

What’s Your Preferred Domain?

www vs. non-www. Are you checking for those? Then I assume you are also checking the https:// and https://www. versions. I've come across a few of these issues, especially since Google's announcement of the "benefits" of going SSL. I even came across one site that had all four of these versions live, quadrupling the pages indexed in the search results. If you run across this issue, be sure to use the Moz toolbar to determine which version of the site has the strongest link signals. Redirecting the version with greater PageRank to the version with lower PageRank could cause some temporary ranking drops, so consolidate to the stronger version.
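Checking protocol and subdomain variants is easy to script: request all four combinations and confirm they resolve to a single preferred URL. A minimal sketch with Python's requests library, using example.com as a placeholder domain:

```python
import requests

# All four protocol/host combinations for a placeholder domain.
VARIANTS = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]

final_destinations = set()
for url in VARIANTS:
    response = requests.get(url, allow_redirects=True, timeout=10)
    final_destinations.add(response.url)
    print(url, "->", response.status_code, response.url)

if len(final_destinations) > 1:
    print("WARNING: variants resolve to different URLs; the site may be indexed more than once")
```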

Conversion Pages in Search Results

Check the search results for "odd" pages. It's not uncommon to find old legacy pages floating around. Conversion pages in search results are still common, especially on sites using WordPress. Check to ensure these pages are not indexed, so users can't stumble across them and throw off your goal tracking, or worse yet, access downloadable content for free.
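One way to verify this is to fetch your conversion and thank-you pages and confirm each one carries a noindex signal, either in a meta robots tag or an X-Robots-Tag response header. The sketch below uses requests and beautifulsoup4, and the URLs are placeholders for your own conversion pages.

```python
import requests
from bs4 import BeautifulSoup  # third-party: beautifulsoup4

# Placeholder conversion/thank-you page URLs.
CONVERSION_PAGES = [
    "https://example.com/thank-you",
    "https://example.com/download-complete",
]

for url in CONVERSION_PAGES:
    response = requests.get(url, timeout=10)
    header = response.headers.get("X-Robots-Tag", "")
    meta = BeautifulSoup(response.text, "html.parser").find(
        "meta", attrs={"name": "robots"}
    )
    meta_content = meta.get("content", "") if meta else ""
    if "noindex" in header.lower() or "noindex" in meta_content.lower():
        print(url, "-> noindex found")
    else:
        print(url, "-> WARNING: no noindex signal, page may be indexable")
```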

Orphaned Pages

Keep an eye out for orphaned pages as well. These are pages that aren't linked to from anywhere on the site, or that were linked to at some point and since forgotten. In some cases, the site was redesigned and those pages were left behind. I've seen case studies created but never linked to from the site, which is a lot of wasted effort. Sometimes these pages can only be found in the sitemap.
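A simple way to surface candidates is to compare the URLs listed in the XML sitemap against the URLs actually linked from the pages you crawl; anything in the sitemap that is never linked internally is worth a look. This sketch assumes a standard sitemap.xml and a placeholder list of crawled pages, and uses requests plus beautifulsoup4.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup  # third-party: beautifulsoup4

SITEMAP_URL = "https://example.com/sitemap.xml"     # placeholder
CRAWLED_PAGES = ["https://example.com/", "https://example.com/blog/"]  # placeholder crawl seeds

# URLs the sitemap says exist.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
sitemap_urls = {loc.text.strip() for loc in root.findall(".//sm:loc", ns)}

# URLs actually linked from the pages we crawled.
linked_urls = set()
for page in CRAWLED_PAGES:
    soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")
    for a in soup.find_all("a", href=True):
        linked_urls.add(urljoin(page, a["href"]))

for url in sorted(sitemap_urls - linked_urls):
    print("Possible orphan (in sitemap, never linked):", url)
```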

Check Your Sitemap

Are you finding pages in the search results that can't be found on the site? Check the sitemap. Outdated sitemaps can cause issues. If your sitemap contains redirects, 404 pages, or non-canonical URLs (only canonical URLs should be in the sitemap), you will run into issues: Google will not index them. Check your sitemap report in Search Console to see how many pages Google is crawling from the sitemap versus how many it is indexing.
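It's also worth batch-checking every sitemap entry directly: each URL should return a clean 200 with no redirect. A minimal sketch with requests and the standard-library XML parser, using a placeholder sitemap location:

```python
import xml.etree.ElementTree as ET

import requests

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)

for loc in root.findall(".//sm:loc", ns):
    url = loc.text.strip()
    # HEAD request without following redirects: anything other than 200 is a problem entry.
    response = requests.head(url, allow_redirects=False, timeout=10)
    if response.status_code != 200:
        print(f"Sitemap entry {url} returns {response.status_code} - fix or remove it")
```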

Also, be sure to check that internal site search results are blocked from crawling. This is usually overlooked. Site search can generate URLs that you don't want Google to index, and Google doesn't want to index site search results either: they can provide a poor user experience, plus they can generate tons of 404 pages.
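You can sanity-check this with the standard-library robots.txt parser: feed it a typical internal search URL and confirm crawlers are disallowed. The WordPress-style ?s= query below is an assumption; use your own site's search URL pattern.

```python
from urllib.robotparser import RobotFileParser

# Placeholder robots.txt location and internal search URL (WordPress-style ?s= query).
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

search_url = "https://example.com/?s=blue+widgets"
if rp.can_fetch("Googlebot", search_url):
    print("WARNING: internal search results are crawlable:", search_url)
else:
    print("Internal search results are blocked in robots.txt")
```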

Blocking Pages From Search

Robots.txt is the most common way to block pages from search. However, a blocked page or site can still show up in organic search. If Google feels the page is relevant to a user's search query, it may still show that page even if it's blocked via the robots.txt file. The best way to remove an already indexed page or site from the SERPs is to noindex it using the meta robots noindex tag or the X-Robots-Tag header.
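The interaction matters: if the page you want deindexed is also blocked in robots.txt, crawlers can never fetch it to see the noindex signal. Here is a rough check for that situation, with a placeholder URL; it only looks at the X-Robots-Tag header, while a fuller version would also parse the meta robots tag.

```python
from urllib.robotparser import RobotFileParser

import requests

# Placeholder page you want removed from the index.
URL = "https://example.com/page-to-remove"

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()
crawlable = rp.can_fetch("Googlebot", URL)

headers = requests.get(URL, timeout=10).headers
noindexed = "noindex" in headers.get("X-Robots-Tag", "").lower()

if not crawlable:
    print("Blocked in robots.txt: crawlers can never see the noindex signal")
if not noindexed:
    print("No X-Robots-Tag noindex header found (a meta robots noindex tag would also work)")
```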

Verify that your developers have best practices in place. I would recommend that all developers at least have a checklist to work against.

What do you think is commonly overlooked in audits? Let me know in the comments!

Source – SearchEngineJournal.com