Free SEO Audit


These are the general metrics used in the website audit:

  • Pages Reviewed - The number of pages that were crawled during a given audit
  • Site Health Score - Based on the number of errors and warnings found on your site and their uniqueness, expressed as a percentage (see the sketch after this list)
  • Total Issues - The total number of issues detected on the website during an audit, including errors, warnings and notices
  • Top Ranking Keywords - A list of the ten most common search terms for the website
  • Total Errors - The highest-severity issues detected on the website during an audit, covered by 38 error tests
  • Total Warnings - Medium-severity issues detected on the website during an audit, covered by 34 warning tests
  • Total Notices - Elements detected by 20 notice tests that are not considered issues, but carry a recommendation status and should be minimised
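
The audit's exact scoring formula is not published here; the following minimal sketch simply assumes the Site Health Score is the share of checks that passed, expressed as a percentage. This is an illustrative assumption, not the tool's actual calculation.

    # Sketch: derive a percentage-style health score from check counts.
    # Assumes the score is simply the share of checks passed (illustrative only).
    def health_score(checks_passed: int, checks_failed: int) -> float:
        total = checks_passed + checks_failed
        if total == 0:
            return 100.0
        return round(100.0 * checks_passed / total, 1)

    print(health_score(checks_passed=180, checks_failed=20))  # 90.0
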


Specific Website Audit Tests

SEO Error Tests

5xx Errors 5xx errors refer to problems with a server being unable to fulfil a request from a user or a crawler. They prevent users and search engine robots from accessing your web pages, and can negatively affect user experience and your site's crawlability. This will in turn lead to a drop in traffic driven to your website.
4xx Errors A 4xx error means that a webpage cannot be accessed. This is usually the result of broken links. These errors prevent users and search engine robots from accessing your web pages, and can negatively affect both user experience and your site's crawlability. This will in turn lead to a drop in traffic driven to your website. Please be aware that the crawler may detect a working link as broken if your website blocks our crawler from accessing it. This usually happens due to one of the following reasons: a DDoS protection system, or an overloaded or misconfigured server.
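
As an illustration of the two status-code checks above, here is a minimal sketch of how 4xx and 5xx responses could be flagged. It uses the third-party requests library, and the URL list is a placeholder.

    # Sketch: flag URLs that return 4xx or 5xx status codes.
    # The URL list is a placeholder; a real audit would use the crawler's queue.
    import requests

    urls = ["https://example.com/", "https://example.com/missing-page"]

    for url in urls:
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException as exc:
            print(f"{url}: request failed ({exc})")
            continue
        if 400 <= response.status_code < 500:
            print(f"{url}: 4xx client error ({response.status_code})")
        elif response.status_code >= 500:
            print(f"{url}: 5xx server error ({response.status_code})")
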
Missing Page Title Tags A <title> tag is a key on-page SEO element. It appears in browsers and search results and helps both search engines and users understand what your page is about. Keep titles under 70 characters and keyword rich. If a page title is missing or the <title> tag is empty, Google may consider the page low quality, and you will miss chances to rank high and gain a higher click-through rate when promoting this page in search results.
Duplicate Title Tags Our crawler reports pages that have duplicate title tags only if they are exact matches. Duplicate <title> tags make it difficult for search engines to determine which of a website's pages is relevant for a specific search query, and which one should be prioritised in search results. Pages with duplicate titles have a lower chance of ranking well and are at risk of being banned. Moreover, identical <title> tags confuse users as to which webpage they should follow.
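
To illustrate the title checks above (missing, overlong and exactly duplicated <title> tags), here is a minimal sketch using the requests and BeautifulSoup libraries; the page list is a placeholder.

    # Sketch: check <title> tags for absence, excessive length and exact duplicates.
    from collections import defaultdict
    import requests
    from bs4 import BeautifulSoup

    pages = ["https://example.com/", "https://example.com/about"]  # placeholder list
    titles = defaultdict(list)

    for url in pages:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        tag = soup.find("title")
        title = tag.get_text(strip=True) if tag else ""
        if not title:
            print(f"{url}: missing or empty <title>")
        elif len(title) > 70:
            print(f"{url}: title longer than 70 characters ({len(title)})")
        titles[title].append(url)

    for title, found_on in titles.items():
        if title and len(found_on) > 1:
            print(f"Duplicate title {title!r} on: {', '.join(found_on)}")
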
Duplicate Content Webpages are considered duplicates if their content is 85% identical. Having duplicate content may significantly affect your SEO performance. First of all, Google will typically show only one duplicate page, filtering other instances out of its index and search results, and this page may not be the one you want to rank. In some cases, search engines may consider duplicate pages as an attempt to manipulate search engine rankings and, as a result, your website may be downgraded or even banned from search results. Moreover, duplicate pages may dilute your link profile.
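
As a rough illustration of the 85% threshold, the sketch below compares extracted page text with difflib. The similarity ratio is only a stand-in for the audit's own duplicate-detection method, and the sample texts are placeholders.

    # Sketch: flag page pairs whose visible text is at least 85% similar.
    # difflib's ratio is a stand-in for the audit's own similarity measure.
    from difflib import SequenceMatcher
    from itertools import combinations

    page_texts = {
        "https://example.com/a": "Example body text for page A ...",
        "https://example.com/b": "Example body text for page A ...",
    }

    for (url_a, text_a), (url_b, text_b) in combinations(page_texts.items(), 2):
        similarity = SequenceMatcher(None, text_a, text_b).ratio()
        if similarity >= 0.85:
            print(f"Possible duplicates ({similarity:.0%}): {url_a} and {url_b}")
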
Broken Internal Links Broken internal links lead users from one page of your website to another and bring them to non-existent web pages. Multiple broken links negatively affect user experience and may worsen your search engine rankings because crawlers may think that your website is poorly maintained or coded. Please note that our crawler may detect a working link as broken. Generally, this happens if the server blocks our crawler from accessing the linked page.
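
A minimal sketch of an internal link check along these lines, assuming the requests and BeautifulSoup libraries and a placeholder start URL. Note that some servers reject HEAD requests, so a real checker may need a GET fallback.

    # Sketch: collect internal links from one page and report those that return an error.
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin, urlparse

    start_url = "https://example.com/"  # placeholder
    html = requests.get(start_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    site_host = urlparse(start_url).netloc

    for anchor in soup.find_all("a", href=True):
        link = urljoin(start_url, anchor["href"])
        if urlparse(link).netloc != site_host:
            continue  # skip external links for this check
        # HEAD keeps the check lightweight; some servers require GET instead.
        status = requests.head(link, allow_redirects=True, timeout=10).status_code
        if status >= 400:
            print(f"Broken internal link: {link} ({status})")
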
Pages Not Crawled This issue indicates that our crawler couldn't access the webpage. There are two possible reasons: your site's server response time is more than 5 seconds, or your server refused access to your webpages.
DNS Resolution Issues A DNS resolution error is reported when our crawler can't resolve the hostname when trying to access your webpage.
Page URL Check the page URL's consistency with the page title, page headings, page description and page content. The page URL should match the page topic or target keyword.
Broken internal images An internal broken image is an image that can't be displayed because it no longer exists, its URL is misspelt, or because the file path is not valid. Broken images may jeopardise your search rankings because they provide a poor user experience and signal to search engines that your page is low quality.
Duplicate meta descriptions Our crawler reports pages that have duplicate meta descriptions only if they are exact matches. A <meta description> tag is a short summary of a webpage's content that helps search engines understand what the page is about and can be shown to users in search results. Duplicate meta descriptions on different pages mean a lost opportunity to use more relevant keywords. Also, duplicate meta descriptions make it difficult for search engines and users to differentiate between different web pages. It is better to have no meta description at all than to have a duplicate one.
Invalid robots.txt format If your robots.txt file is poorly configured, it can cause you a lot of problems. Web pages that you want to be promoted in search results may not be indexed by search engines, while some of your private content may be exposed to users. So, one configuration mistake can damage your search rankings, ruining all your search engine optimization efforts.
Invalid sitemap.xml format If your sitemap.xml file has any errors, search engines will not be able to process the data it contains, and they will ignore it.
Incorrect pages found in sitemap.xml A sitemap.xml file makes it easier for crawlers to discover the pages on your website. Only good pages intended for your visitors should be included in your sitemap.xml file. This error is triggered if your sitemap.xml contains URLs that lead to web pages with the same content, redirect to a different webpage, or return a non-200 status code. Populating your file with such URLs will confuse search engines, cause unnecessary crawling or may even result in your sitemap being rejected.
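
A minimal sketch of such a sitemap check, with the sitemap URL as a placeholder: it parses sitemap.xml and flags entries that redirect or return a non-200 status.

    # Sketch: verify that every URL listed in sitemap.xml resolves directly to a 200.
    import xml.etree.ElementTree as ET
    import requests

    sitemap_url = "https://example.com/sitemap.xml"  # placeholder
    namespace = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
    root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)

    for loc in root.iter(f"{namespace}loc"):
        url = loc.text.strip()
        response = requests.get(url, allow_redirects=False, timeout=10)
        if response.status_code != 200:
            print(f"{url}: returned {response.status_code} (redirect or error)")
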
www resolve issues Normally, a webpage can be accessed with or without adding www to its domain name. If you haven’t specified which version should be prioritised, search engines will crawl both versions, and the link juice will be split between them. Therefore, none of your page versions will get high positions in search results.
Viewport not configured The viewport meta tag is an HTML tag that allows you to control a page's viewport size and scale on mobile devices. This tag is indispensable if you want to make your website accessible and optimised for mobile devices. For more information about the viewport meta tag, please see the Responsive Web Design Basics article.
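
A minimal sketch of a viewport check, using BeautifulSoup and a placeholder URL:

    # Sketch: report pages that have no viewport meta tag.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/"  # placeholder
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    viewport = soup.find("meta", attrs={"name": "viewport"})

    if viewport is None:
        print(f"{url}: viewport meta tag not configured")
    else:
        print(f"{url}: viewport = {viewport.get('content', '')}")
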
Large HTML page size A web page’s HTML size is the size of all HTML code contained on it. A page size that is too large (i.e., exceeding 2 MB) leads to a slower page load time, resulting in a poor user experience and a lower search engine ranking.
Missing canonical tags in AMP pages This issue is triggered if your AMP page has no canonical tag. When creating AMP pages, several requirements should be met: if you have both an AMP and a non-AMP version of the same page, you should place canonical tags on both versions to prevent duplicate content issues; if you have only an AMP version of your webpage, it must have a self-referential canonical tag. For more information, please see these articles: AMP on Google Search guidelines and ABC of Fixing AMP Validation Errors With Semrush
Issues with hreflang values This issue is triggered if your country code is not in the ISO 3166-1 alpha-2 format or your language code is not in the ISO 639-1 format. A hreflang (rel="alternate" hreflang="x") attribute helps search engines understand which page should be shown to visitors based on their location. Utilising this attribute is necessary if you're running a multilingual website and would like to help users from other countries find your content in the language that is most appropriate to them. It is very important to properly implement hreflang attributes, otherwise search engines will not be able to show the correct language version of your page to the relevant audience. For more information, please see these articles: Tell Google about localised versions of your page, How to Do International SEO with Semrush and Hreflang Attribute 101
Hreflang conflicts within page source code If you're running a multilingual website, it is necessary to help users from other countries find your content in the language that is most appropriate for them. This is where the hreflang (rel="alternate" hreflang="x") attribute comes in handy. This attribute helps search engines understand which page should be shown to visitors based on their location. It is very important to properly synchronise your hreflang attributes within your page's source code, otherwise, you may experience unexpected search engine behaviour. For more information, see this article
Issues with incorrect hreflang links A hreflang (rel="alternate" hreflang="x") attribute helps search engines understand which page should be shown to visitors based on their location. Utilising this attribute is necessary if you're running a multilingual website and would like to help users from other countries find your content in the language that is most appropriate for them. It is very important to make sure your hreflang links always refer to absolute URLs with HTTP 200 status codes, otherwise search engines will not be able to interpret them correctly and, as a result, will not show the correct language version of your pages to the relevant audience. For more information, please see these articles: Tell Google about localised versions of your page,
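
The three hreflang issues above can be approximated with a simplified check like the sketch below. It covers the common language and language-country code forms plus x-default, uses placeholder URLs, and deliberately ignores rarer script subtags.

    # Sketch: validate hreflang annotations on a page (simplified).
    import re
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/"  # placeholder page
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    # Accepts "ll" (ISO 639-1), "ll-CC" (ISO 639-1 plus ISO 3166-1 alpha-2) and x-default.
    valid_code = re.compile(r"^(x-default|[a-z]{2}(-[A-Z]{2})?)$")

    for link in soup.find_all("link", hreflang=True):
        if "alternate" not in (link.get("rel") or []):
            continue
        code = link["hreflang"]
        href = link.get("href", "")
        if not valid_code.match(code):
            print(f"Invalid hreflang value: {code!r}")
        if not href.startswith(("http://", "https://")):
            print(f"hreflang link is not an absolute URL: {href!r}")
        elif requests.head(href, allow_redirects=False, timeout=10).status_code != 200:
            print(f"hreflang link does not return HTTP 200: {href}")
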
Non-secure pages This issue is triggered if our crawler detects an HTTP page with an <input type="password"> field. Using an <input type="password"> field on your HTTP page is harmful to user security, as there is a high risk that user login credentials can be stolen. To protect users' sensitive information from being compromised, Google Chrome started informing users about the dangers of submitting their passwords on HTTP pages by labelling such pages as "non-secure" in January 2017. This could have a negative impact on your bounce rate, as users will most likely feel uncomfortable and leave your page as quickly as possible.
Certificate Expiration This issue is triggered if your certificate has expired or will expire soon. If you allow your certificate to expire, users accessing your website will be presented with a warning message, which usually stops them from going further and may lead to a drop in your organic search traffic.
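
A minimal sketch of a certificate expiry check using Python's standard ssl and socket modules; the hostname is a placeholder.

    # Sketch: report how many days remain before a site's TLS certificate expires.
    import socket
    import ssl
    from datetime import datetime, timezone

    hostname = "example.com"  # placeholder hostname
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"{hostname}: certificate expires in {days_left} days")
    if days_left < 30:
        print("Renew soon to avoid browser warnings")
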
Old security protocol version Running SSL or an old version of the TLS protocol (version 1.0) is a security risk, which is why it is strongly recommended that you implement the newest protocol versions.
Certificate registered to incorrect name If the domain or subdomain name to which your SSL certificate is registered doesn't match the name displayed in the address bar, web browsers will block users from visiting your website by showing them a name mismatch error, and this will in turn negatively affect your organic search traffic.
Issues with mixed content If your website contains any elements that are not secured with HTTPS, this may lead to security issues. Moreover, browsers will warn users about loading insecure content, and this may negatively affect user experience and reduce their confidence in your website.
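
A minimal sketch that scans an HTTPS page for http:// subresources, using BeautifulSoup and a placeholder URL:

    # Sketch: flag insecure (http://) resources embedded in an HTTPS page.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/"  # placeholder
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    for tag in soup.find_all(["img", "script", "link", "iframe", "source"]):
        resource = tag.get("src") or tag.get("href") or ""
        if resource.startswith("http://"):
            print(f"Mixed content: <{tag.name}> loads {resource}")
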
Neither canonical URL nor 301 redirects from HTTP homepage If you're running both HTTP and HTTPS versions of your homepage, it is very important to make sure that their coexistence doesn't impede your SEO. Without a canonical tag or a 301 redirect, search engines are not able to figure out which page to index and which one to prioritise in search results. As a result, you may experience a lot of problems, including pages competing with each other, traffic loss and poor placement in search results. To avoid these issues, you must instruct search engines to only index the HTTPS version.
Redirect chains and loops Redirecting one URL to another is appropriate in many situations. However, if redirects are done incorrectly, it can lead to disastrous results. Two common examples of improper redirect usage are redirect chains and loops. Long redirect chains and infinite loops lead to a number of problems that can damage your SEO efforts. They make it difficult for search engines to crawl your site, which affects your crawl budget usage and how well your web pages are indexed, slows down your site's load speed, and, as a result, may have a negative impact on your rankings and user experience. Please note that if you can’t spot a redirect chain with your browser, but it is reported in your Site Audit report, your website probably responds to crawlers’ and browsers’ requests differently, and you still need to fix the issue.
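
A minimal sketch of a redirect-chain check built on the requests library's redirect history; the URL is a placeholder, and a loop surfaces as a TooManyRedirects exception once the library's redirect limit is hit.

    # Sketch: report redirect chains longer than one hop; loops raise TooManyRedirects.
    import requests

    url = "http://example.com/old-page"  # placeholder
    try:
        response = requests.get(url, allow_redirects=True, timeout=10)
    except requests.TooManyRedirects:
        print(f"{url}: redirect loop detected")
    else:
        hops = [r.url for r in response.history]
        if len(hops) > 1:
            chain = " -> ".join(hops + [response.url])
            print(f"{url}: redirect chain with {len(hops)} hops: {chain}")
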
Broken canonical URLs By setting a rel="canonical" element on your page, you can inform search engines of which version of a page you want to show up in search results. When using canonical tags, it is important to make sure that the URL you include in your rel="canonical" element leads to a page that actually exists. Canonical links that lead to non-existent web pages complicate the process of crawling and indexing your content and, as a result, decrease crawling efficiency and lead to unnecessary crawl budget waste.

SEO Warning Tests

Broken external links Broken external links lead users from one website to another and bring them to non-existent web pages. Multiple broken links negatively affect user experience and may worsen your search engine rankings because crawlers may think that your website is poorly maintained or coded. Please note that our crawler may detect a working link as broken. Generally, this happens if the server hosting the website you're referring to blocks our crawler from accessing this website.
Links lead to HTTP pages for HTTPS site If any link on the website points to the old HTTP version of the website, search engines can become confused as to which version of the page they should rank.
Short title element Generally, using short titles on web pages is a recommended practice. However, keep in mind that titles containing 10 characters or less do not provide enough information about what your webpage is about and limit your page's potential to show up in search results for different keywords. For more information, please see this Google article.
Long title element Most search engines truncate titles containing more than 70 characters. Incomplete and shortened titles look unappealing to users and won't entice them to click on your page. For more information, please see this Google article
Missing h1 While less important than <title> tags, h1 headings still help define your page's topic for search engines and users. If an <h1> tag is empty or missing, search engines may place your page lower than they would otherwise. Besides, a missing <h1> tag breaks your page's heading hierarchy, which is not SEO friendly. Your heading structure should follow a logical hierarchy from <h1> to <h5>, with all headings keyword rich.
Duplicate content in h1 and title Keep your <h1> consistent with your target keywords, but do not duplicate your title tag content in your first-level header. If your page's <title> and <h1> tags match, the latter may appear over-optimised to search engines. Also, using the same content in titles and headers means a lost opportunity to incorporate other relevant keywords for your page. For more information, please see this Google article
Missing meta description Though meta descriptions don't have a direct influence on rankings, they are used by search engines to display your page's description in search results. A page's meta description should be between 70 and 160 characters and written for user engagement rather than purely for SEO. A good description helps users know what your page is about and encourages them to click on it. If your page's meta description tag is missing, search engines will usually display its first sentence, which may be irrelevant and unappealing to users. For more information, please see these articles: Create good titles and snippets in Search Results
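
A minimal sketch of a meta description check against the 70-160 character guideline above, using BeautifulSoup and a placeholder URL:

    # Sketch: check that a page has a meta description of a reasonable length.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/"  # placeholder
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta.get("content", "").strip() if meta else ""

    if not description:
        print(f"{url}: missing meta description")
    elif not 70 <= len(description) <= 160:
        print(f"{url}: meta description is {len(description)} characters (aim for 70-160)")
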
Too many on-page links This issue is triggered if a webpage contains more than 3,000 links. As a rule, search engines process as many on-page links as they consider necessary for a particular website. However, placing more than 3,000 links on a webpage can make your page look low-quality and even spammy to search engines, which may cause your page to drop in rankings or not to show up in search results at all. Having too many on-page links is also bad for the user experience.
Temporary redirects Temporary redirects (i.e., a 302 and a 307 redirect) mean that a page has been temporarily moved to a new location. Search engines will continue to index the redirected page, and no link juice or traffic is passed to the new page, which is why temporary redirects can damage your search rankings if used by mistake.
Missing ALT attributes Alt attributes within <img> tags are used by search engines to understand the contents of your images. If you neglect alt attributes, you may miss the chance to get better placement in search results because alt attributes allow you to rank in image search results. Not using alt attributes also negatively affects the experience of visually impaired users and those who have disabled images in their browsers. For more information, please see these articles: Using ALT attributes smartly and Google Image Publishing Guidelines
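
A minimal sketch that lists images with missing or empty alt attributes, using BeautifulSoup and a placeholder URL:

    # Sketch: list <img> tags that have no alt attribute or an empty one.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/"  # placeholder
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    for img in soup.find_all("img"):
        if not img.get("alt", "").strip():
            print(f"Missing ALT attribute: {img.get('src', '(no src)')}")
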
Low text to HTML ratio Your text to HTML ratio indicates the amount of actual text you have on your webpage compared to the amount of code. This issue is triggered when your text to HTML ratio is 10% or less. Search engines have begun focusing on pages that contain more content. That's why a higher text to HTML ratio means your page has a better chance of getting a good position in search results. Less code increases your page's load speed and also helps your rankings. It also helps search engine robots crawl your website faster.
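
A minimal sketch that approximates the text to HTML ratio by comparing extracted visible text with the raw markup; the URL is a placeholder and the extraction method is only an approximation of the audit's own measurement.

    # Sketch: approximate the text to HTML ratio of a page.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/"  # placeholder
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    ratio = len(text) / len(html) if html else 0.0

    print(f"{url}: text to HTML ratio is {ratio:.1%}")
    if ratio <= 0.10:
        print("Ratio is 10% or less; consider adding content or trimming markup")
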
Too many URL parameters Using too many URL parameters is not an SEO-friendly approach. Multiple parameters make URLs less enticing for users to click and may cause search engines to fail to index some of your most important pages.
Missing hreflang and lang attributes This issue is reported if your page has neither a lang nor a hreflang attribute. When running a multilingual website, you should make sure that you’re doing it correctly. First, you should use a hreflang attribute to indicate to Google which pages should be shown to visitors based on their location. That way, you can rest assured that your users will always land on the correct language version of your website. You should also declare a language for your webpage’s content (i.e., a lang attribute). Otherwise, your web text might not be recognised by search engines, may not appear in search results, or may be displayed incorrectly.
Encoding not declared Providing a character encoding tells web browsers which set of characters must be used to display a webpage’s content. If a character encoding is not specified, browsers may not render the page content properly, which may result in a negative user experience. Moreover, search engines may consider pages without a character encoding to be of little help to users and, therefore, place them lower in search results than those with a specified encoding.
Doctype not declared A webpage’s doctype instructs web browsers which version of HTML or XHTML is being used. Declaring a doctype is extremely important in order for a page’s content to load properly. If no doctype is specified, this may lead to various problems, such as messed up page content or slow page load speed, and, as a result, negatively affect user experience.
Low word count This issue is triggered if the number of words on your webpage is less than 200. The amount of text placed on your webpage is a quality signal to search engines. Search engines prefer to provide as much information to users as possible, so pages with longer content tend to be placed higher in search results, as opposed to those with lower word counts. For more information, please view this video
Incompatible plugins used This issue is triggered if your page has content based on Flash, JavaApplet, or Silverlight plugins. These types of plugins do not work properly on mobile devices, which frustrates users. Moreover, they cannot be crawled and indexed properly, negatively impacting your website’s mobile rankings.
Frames used <frame> tags are considered to be one of the most significant search engine optimization issues. Not only is it difficult for search engines to index and crawl content within <frame> tags, which may, in turn, lead to your page being excluded from search results, but using these tags also negatively affects user experience.
Underscores in URL When it comes to URL structure, using underscores as word separators is not recommended because search engines may not interpret them correctly and may consider them to be a part of a word. Using hyphens instead of underscores makes it easier for search engines to understand what your page is about. Although using underscores doesn't have a huge impact on web page visibility, it decreases your page's chances of appearing in search results, as opposed to when hyphens are used. For more information, please see this Google article
Nofollow attributes in outgoing internal links A nofollow attribute is an element in an <a> tag that tells crawlers not to follow the link. "Nofollow" links don’t pass any link juice or anchor texts to referred webpages. The unintentional use of nofollow attributes may have a negative impact on the crawling process and your rankings.
Sitemap.xml not specified in robots.txt If you have both a sitemap.xml and a robots.txt file on your website, it is a good practice to place a link to your sitemap.xml in your robots.txt, which will allow search engines to better understand what content they should crawl.
Sitemap.xml not found A sitemap.xml file is used to list all URLs available for crawling. It can also include additional data about each URL. Using a sitemap.xml file is quite beneficial. Not only does it provide easier navigation and better visibility to search engines, it also quickly informs search engines about any new or updated content on your website. Therefore, your website will be crawled faster and more intelligently.
HTTPS encryption not used Google considers a website's security as a ranking factor. Websites that do not support HTTPS connections may be less prominent in Google's search results, while HTTPS-protected sites rank higher with its search algorithms. For more information, see this Google article
No SNI support One of the common issues you may face when using HTTPS is when your web server doesn't support Server Name Indication (SNI). Using SNI allows you to support multiple servers and host multiple certificates at the same IP address, which may improve security and trust.
HTTP URLs in sitemap.xml for HTTPS site Your sitemap.xml should include the links that you want search engines to find and index. Using different URL versions in your sitemap could be misleading to search engines and may result in an incomplete crawling of your website.
Uncompressed pages This issue is triggered if the Content-Encoding entity is not present in the response header. Page compression is essential to the process of optimising your website. Using uncompressed pages leads to a slower page load time, resulting in a poor user experience and a lower search engine ranking.
Blocked internal resources in robots.txt Blocked resources are resources (e.g., CSS, JavaScript, image files, etc.) that are blocked from crawling by a "Disallow" directive in your robots.txt file. By disallowing these files, you're preventing search engines from accessing them and, as a result, properly rendering and indexing your web pages. This, in return, may lead to lower rankings. For more information, please see this article.
Uncompressed JavaScript and CSS files This issue is triggered if compression is not enabled in the HTTP response. Compressing JavaScript and CSS files significantly reduces their size as well as the overall size of your webpage, thus improving your page load time. Uncompressed JavaScript and CSS files make your page load slower, which negatively affects user experience and may worsen your search engine rankings. If your webpage uses uncompressed CSS and JS files that are hosted on an external site, you should make sure they do not affect your page's load time. For more information, please see this Google article.
Uncached JavaScript and CSS files This issue is triggered if browser caching is not specified in the response header. Enabling browser caching for JavaScript and CSS files allows browsers to store and reuse these resources without having to download them again when requesting your page. That way the browser will download less data, which will decrease your page load time. And the less time it takes to load your page, the happier your visitors are. For more information, please see this Google article.
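
The last few checks come down to response headers. The sketch below inspects JavaScript and CSS resources for a Content-Encoding header (compression) and for Cache-Control or Expires headers (browser caching); the resource URLs are placeholders.

    # Sketch: inspect JS/CSS response headers for compression and browser caching.
    import requests

    resources = [
        "https://example.com/static/app.js",   # placeholder URLs
        "https://example.com/static/site.css",
    ]

    for url in resources:
        headers = requests.get(url, timeout=10).headers
        if "Content-Encoding" not in headers:
            print(f"{url}: served uncompressed (no Content-Encoding header)")
        if "Cache-Control" not in headers and "Expires" not in headers:
            print(f"{url}: no browser caching headers (Cache-Control/Expires)")
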

SEO Notices

Multiple h1 tags Using more than one <h1> tag on a page is not recommended. Multiple first-level headings blur your page's heading hierarchy and make it harder for search engines and users to identify the page's main topic. Keep a single, descriptive <h1> and use lower-level headings for subsections. For more information, please see this Google article
Blocked from crawling If a page cannot be accessed by search engines, it will never appear in search results. A page can be blocked from crawling either by a robots.txt file or a noindex meta tag.
URLs longer than 200 characters According to Google, long URLs are not SEO friendly. Excessive URL length intimidates users and discourages them from clicking or sharing it, thus hurting your page's click-through rate and usability.
Nofollow attributes in outgoing external links A nofollow attribute is an element in an <a> tag that tells crawlers not to follow the link. "Nofollow" links don’t pass any link juice or anchor texts to referred webpages. The unintentional use of nofollow attributes may have a negative impact on the crawling process and your rankings.
Robots.txt not found A robots.txt file has an important impact on your website's overall SEO performance. This file helps search engines determine what content on your website they should crawl. Utilising a robots.txt file can cut the time search engine robots spend crawling and indexing your website. For more information, please see this Google article
Hreflang language mismatch issues This issue is triggered if a language value specified in a hreflang attribute doesn't match your page's language, which is determined based on semantic analysis. Any mistakes in hreflang attributes may confuse search engines, and your hreflang attributes will most likely be interpreted incorrectly. So it's worth taking the time to make sure you don't have any issues with hreflang attributes. For more information, see these articles: Tell Google about localised versions of your page,
No HSTS support HTTP Strict Transport Security (HSTS) informs web browsers that they can communicate with servers only through HTTPS connections. So, to ensure that you don't serve unsecured content to your audience, we recommend that you implement HSTS support.
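
A minimal sketch of an HSTS check that simply looks for the Strict-Transport-Security response header; the URL is a placeholder.

    # Sketch: check whether a site sends the Strict-Transport-Security header.
    import requests

    url = "https://example.com/"  # placeholder
    headers = requests.get(url, timeout=10).headers

    if "Strict-Transport-Security" in headers:
        print(f"HSTS enabled: {headers['Strict-Transport-Security']}")
    else:
        print("No HSTS support: Strict-Transport-Security header is missing")
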
Orphaned pages (Google Analytics) A webpage that is not linked to internally is called an orphaned page. It is very important to check your website for such pages. If a page has valuable content but is not linked to by another page on your website, it can miss out on the opportunity to receive enough link juice. Orphaned pages that no longer serve their purpose confuse your users and, as a result, negatively affect their experience. We identify orphaned pages on your website by comparing the number of pages we crawled to the number of pages in your Google Analytics account. That's why to check your website for any orphaned pages, you need to connect your Google Analytics account.
Orphaned sitemap pages An orphaned page is a web page that is not linked internally. Including orphaned pages in your sitemap.xml files is considered to be a bad practice, as these pages will be crawled by search engines. Crawling outdated orphaned pages will waste your crawl budget. If an orphaned page in your sitemap.xml file has valuable content, we recommend that you link to it internally.
Slow avg. document interactive time We all know that slow page-load speed negatively affects user experience. However, if a user can start interacting with your webpage within 1 second, they are much less likely to click away from this page. That's why it is important to keep a close eye on the time it takes your most important webpages to become usable, known as the Average Document Interactive Time. For more information, please see Why Performance Matters. To evaluate your site performance, use the Site Performance report.
Blocked by X-Robots-Tag: noindex HTTP header The x-robots-tag is an HTTP header that can be used to instruct search engines whether or not they can index or crawl a web page. This tag supports the same directives as a regular meta robots tag and is typically used to control the crawling of non-HTML files. If a page is blocked from crawling with x-robots-tag, it will never appear in search results.
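
A minimal sketch that detects a noindex directive in the X-Robots-Tag header; the URL is a placeholder.

    # Sketch: detect pages blocked from indexing by an X-Robots-Tag header.
    import requests

    url = "https://example.com/private-report"  # placeholder
    x_robots = requests.get(url, timeout=10).headers.get("X-Robots-Tag", "")

    if "noindex" in x_robots.lower():
        print(f"{url}: blocked by X-Robots-Tag: {x_robots}")
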
Blocked external resources in robots.txt Blocked external resources are resources (e.g., CSS, JavaScript, image files, etc.) that are hosted on an external website and blocked from crawling by a "Disallow" directive in an external robots.txt file. Disallowing these files may prevent search engines from accessing them and, as a result, properly rendering and indexing your web pages. This, in return, may lead to lower rankings. For more information, please see this article.
Broken external JavaScript and CSS files If your website uses JavaScript or CSS files that are hosted on an external site, you should be sure that they work properly. Any script that has stopped running on your website may jeopardise your rankings since search engines will not be able to properly render and index your web pages. Moreover, broken JavaScript and CSS files may cause website errors, and this will certainly spoil your user experience.
Page Crawl Depth more than 3 clicks A page's crawl depth is the number of clicks required for users and search engine crawlers to reach it via its corresponding homepage. From an SEO perspective, an excessive crawl depth may pose a great threat to your optimization efforts, as both crawlers and users are less likely to reach deep pages. For this reason, pages that contain important content should be no more than 3 clicks away from your homepage.
Pages with only one internal link Having very few incoming internal links means very few visits, or even none, and fewer chances of placing in search results. It is a good practice to add more incoming internal links to pages with useful content. That way, you can rest assured that users and search engines will never miss them
Permanent redirects Although using permanent redirects (a 301 or 308 redirect) is appropriate in many situations (for example, when you move a website to a new domain, redirect users from a deleted page to a new one, or handle duplicate content issues), we recommend that you keep them to a reasonable minimum. Every time you redirect one of your website's pages, it decreases your crawl budget, which may run out before search engines can crawl the page you want to be indexed. Moreover, too many permanent redirects can be confusing to users.
Resources formatted as page links We detected that some links to resources are formatted with an <a href> HTML element. An <a> tag with an href attribute is used to link to other webpages and must only contain a page URL. Search engines will crawl your site from page to page by following these HTML page links. When following a page link that contains a resource, for example, an image, the returned page will not contain anything except an image. This may confuse search engines and will indicate that your site has poor architecture.
Links with no anchor text This issue is triggered if a link (either external or internal) on your website has an empty or naked anchor (i.e., an anchor that uses a raw URL), or its anchor text contains only symbols. Although a missing anchor doesn't prevent users and crawlers from following a link, it makes it difficult to understand what the page you're linking to is about. Also, Google considers anchor text when indexing a page. So, a missing anchor represents a lost opportunity to optimise the performance of the linked-to page in search results.
Links with non-descriptive anchor text This issue is triggered if a non-descriptive anchor text is used for a link (either internal or external). An anchor is considered to be non-descriptive if it doesn’t give any idea of what the linked-to page is about, for example, “click here”, “right here”, etc. This type of anchor provides little value to users and search engines as it doesn't provide any information about the target page. Also, such anchors will offer little in terms of the target page’s ability to be indexed by search engines, and as a result, rank for relevant search requests.
External pages or resources with 403 HTTP status code This issue is triggered if a crawler gets a 403 code when trying to access an external webpage or resource via a link on your site. A 403 HTTP status code is returned if a user is not allowed to access the resource for some reason. In the case of crawlers, this usually means that a crawler is being blocked from accessing content at the server level.