Is your website blocking search engine robots?

Enter a website address and find out whether search engine robots are blocked by meta tags, a robots.txt file, or HTTP headers.

Blocking Search Engine Robots

As a website owner, have you ever wondered how search engines find out everything about your webpages even if you haven’t specifically “promoted” them on social media or other websites? Search Engine Robots are how!

Search Engine Robots are small programs that visit websites and follow the links on them to collect information about each site. That information is then added to the search engine's database. If you're wondering why search engines use such robots, well, that's how they learn which websites can provide the information related to the keywords that users search for.

What are Search Engine Robots?

Also known as wanderers, crawlers, bots, and spiders, Search Engine Robots are tools used by popular search engines like Google, Microsoft Bing, and Yahoo to build their databases. These robots automatically visit webpages, navigate through them, and parse their content to work out what each page is about.

A search engine is considered high-performing if it delivers relevant results to user queries quickly. To do that, it needs an extensive database covering as much of what's posted across the Internet as possible. Search Engine Robots help collect that information: these crawlers gather details like page headings, meta tags, metadata, and body text and feed them into the search engine's database so it can answer queries faster than competing engines.

What are some common search engine robots?

Some of the common search engine robots include:

  • Googlebot (Google)
  • Slurp (Yahoo)
  • Bingbot (Microsoft Bing)
  • Baiduspider (Baidu)
  • DuckDuckBot (DuckDuckGo)
  • Exabot (Exalead)
  • Sogou Spider (Sogou)
  • YandexBot (Yandex)
  • Alexa Crawler (ia_archiver)

Why would a website owner block a Search Engine Robot?

Although having search engine robots crawl your website generally helps you rank better in your niche, some website owners host sensitive details on their webpages and want those pages to stay private. This is where security becomes an issue, since search engine bots can't distinguish between public and private web content on their own.

Another reason a website owner might want such bots to stay away is to prevent the site or its content from being duplicated elsewhere, which can negatively impact its SEO.

For these reasons, website owners often restrict access to their webpages by blocking crawlers, especially while the website is in staging mode. A staging site lets you configure and preview the website before it goes live, and blocking bots during this phase is generally recommended so that an unfinished version of the site doesn't get indexed.

How to block search engine robots?

There are three ways in which bots can be blocked:

1. Meta tags

Meta tags are short snippets of text that describe a page's content, and they appear only in the page's source code. A robots meta tag lets you pass instructions to crawlers, for example telling them not to index the page or follow its links.

You can block crawlers during website staging by adding the following meta tag to the <head> section of your source code:
<meta name="robots" content="noindex,nofollow">

Note: Once the website goes live, you must remove this tag, as it will keep your webpages invisible to ALL search engines. Meta tags like this work best only during the process of setting up a website, to keep unfinished content from being indexed or copied.
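
For illustration, here is a minimal sketch of where the tag sits in a page's markup (the title and the rest of the document are placeholders):

<!DOCTYPE html>
<html>
<head>
  <title>Staging - My Website</title>
  <!-- tells crawlers not to index this page or follow its links -->
  <meta name="robots" content="noindex,nofollow">
</head>
<body>
  <!-- page content -->
</body>
</html>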

2. Robots.txt file

Robots.txt files are plain ASCII text files that limit crawlers' access to certain sections of a website, such as specific files and folders. This method is the right choice when you only want to keep crawlers away from particular parts of your site. Keep in mind that robots.txt is a convention: well-behaved crawlers honor it, but it doesn't technically enforce anything.

To use a robots.txt file, open a new file in Notepad (or any other plain text editor) and add these two lines to block bots from the entire site:

User-agent: *
Disallow: /

Now save the file as robots.txt (all in lowercase) in your site's root directory, so that it's reachable at the path /robots.txt.
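
If you only want to keep crawlers out of certain areas instead of the whole site, you can list individual paths, and you can also target a specific bot by its user-agent name. A quick sketch (the folder names here are just examples):

User-agent: *
Disallow: /private/
Disallow: /staging/

User-agent: Googlebot
Disallow: /drafts/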

3. HTTP headers

The X-Robots-Tag HTTP header works like the robots meta tag, but it's sent with the server's response instead of being added to each page's HTML. That makes it easier to set one rule for the entire site at once, and it also works for non-HTML files such as PDFs and images.

On an Apache server (with the mod_headers module enabled), this is the line you can add to your .htaccess file or server configuration:
Header set X-Robots-Tag "noindex, nofollow"

Note: Failure to remove this directive from your server configuration after your site goes live will keep your pages hidden from ALL search engines.
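
Because the header is set by the server, it can also be scoped to particular file types. As a sketch, assuming the same Apache setup, this .htaccess snippet would block indexing of PDF files only:

<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>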

How to check if Search Engine Robots are blocked on a specific URL?

Just like there are three ways to block search engine robots, there are three ways to check whether they're blocked for a website (a small script covering all three follows the list):

  • View the HTML source code of the page and look for a robots meta tag
  • Check the contents of the website's robots.txt file
  • Inspect the HTTP response headers for an X-Robots-Tag
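
As a rough sketch of how these checks could be scripted, here's a short Python example using the third-party requests library (the URL is a placeholder, and the meta tag check is a simple substring match rather than a real HTML parse):

import requests

url = "https://example.com/"  # placeholder: the site you want to check

# 1. Inspect the HTTP response headers for an X-Robots-Tag
response = requests.get(url)
print("X-Robots-Tag header:", response.headers.get("X-Robots-Tag", "not set"))

# 2. Look for a robots meta tag in the HTML source
if 'name="robots"' in response.text.lower():
    print("A robots meta tag is present; check its content value in the source.")
else:
    print("No robots meta tag found.")

# 3. Fetch the robots.txt file from the site root
robots = requests.get(url.rstrip("/") + "/robots.txt")
if robots.status_code == 200:
    print("robots.txt contents:")
    print(robots.text)
else:
    print("No robots.txt file found.")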

If you'd rather not run these checks by hand, the free tool at the top of this page performs all three for you.
