
When To Use NOINDEX Or Robots.txt?

A common question from our users is how the NOINDEX robots meta tag differs from robots.txt. This post covers when you should use robots.txt and when you should use NOINDEX.

The NOINDEX Robots meta tag

The NOINDEX tag prevents content from appearing in any search results. It is a meta tag placed in the source code of the page it applies to.

Put simply, it tells search engines to remove your content from their search results, or never to include it in the first place. If the tag is present, it appears like this when you open the source code of your page:

 <meta name="robots" content="noindex" />

The Robots.txt file

The robots.txt file tells search engines where their crawlers can and cannot go on your website. It has two directives, "Allow" and "Disallow", which tell search engines which files or directories to crawl and which to skip.

Do not rely on the robots.txt file to keep your content out of search results. It does not prevent your content from appearing there.

Search engines can still index a blocked file or directory if it is linked from a page on your website or elsewhere. For example, you might use robots.txt to tell search engines not to crawl the "/cgi-bin/" directory.
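A minimal rule blocking that directory for all crawlers would look like this:

User-agent: *
Disallow: /cgi-bin/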

Such a directory may exist on your server without containing anything useful for search engines. Here is how the default robots.txt for WordPress appears:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Differences Between NOINDEX and Robots.txt

The major difference is this: if you do not want search engines to show your content in search results, use the NOINDEX tag. In that case, you must still allow search engines to crawl the content.

If search engines cannot crawl the content, they never see the NOINDEX meta tag, and therefore cannot remove your content from the search results.
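For example, suppose a page at /private-page/ carries the NOINDEX meta tag, but your robots.txt contains the rule below (the path is a hypothetical illustration). Crawlers will never fetch the page, never see the tag, and the URL can linger in the index:

User-agent: *
Disallow: /private-page/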

So if you do not want your content to appear in search results, always prefer NOINDEX. If you want search engines not to crawl a directory on your server, use the "Disallow" directive in the robots.txt file.

People typically do this when a directory contains nothing useful for visitors or crawlers. Keeping crawlers focused on the useful parts of your site protects its reputation and makes crawling more productive.

How Robots.txt NOINDEX Worked

For roughly ten years, including NOINDEX directives in the robots.txt file worked as an unofficial supported feature. Google never documented it officially, but it honored the directive in practice.

Unlike pages that are merely disallowed, NOINDEXED pages did not remain in the index at all, so they never showed in search results. To optimize crawl efficiency, site owners combined Disallow and NOINDEX in robots.txt.

The NOINDEX directive kept the page out of the search results, while the Disallow directive stopped it from being crawled. Here is an example:

Disallow: /example-page-1/
Disallow: /example-page-2/
Noindex: /example-page-1/
Noindex: /example-page-2/

Update For Unsupported Rules

On July 1st, 2019, Google announced that it was working to make the Robots Exclusion Protocol (REP) an official Internet standard, and it open-sourced its robots.txt parser.

On July 2nd, Google followed up with an official note on unsupported rules in robots.txt files. The announcement revealed that Google would stop supporting the NOINDEX directive in robots.txt, effective September 1st, 2019.

Google's Gary Illyes explained that, while analyzing robots.txt usage, Google found several websites that were hurting themselves with NOINDEX in their robots.txt files.

He added that the update benefits the ecosystem, and that those who used the directive correctly will find equally good ways to accomplish the same tasks on their sites.

Alternatives To The NOINDEX Directive

Several alternatives are available for people who have been using the NOINDEX directive in their robots.txt file, so there is no need to worry.

Google published an official blog post listing the alternative options to the NOINDEX directive:

  • NOINDEX Robots Meta Tags

If you want to exclude URLs from the index while still allowing crawling, this is the best option to consider. The NOINDEX directive is supported both as a meta tag in HTML and as an HTTP response header, so you add it either to the page markup or to the server response.
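For example, the directive can be placed in the page's HTML head, or sent as an X-Robots-Tag HTTP response header; either form keeps the page out of the index while leaving it crawlable:

<meta name="robots" content="noindex" />

X-Robots-Tag: noindex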

  • 404 And 410 HTTP Status Codes

Both status codes tell search engines that the page does not exist, which drops such URLs from the index once they have been crawled and processed.
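For instance, the response for a permanently removed page would begin with a status line like this:

HTTP/1.1 410 Gone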

  • Password Protection

Hiding a page behind a login generally removes it from the index, because Google can no longer access the content.
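At the HTTP level, one simple way to do this is basic authentication: the server answers unauthenticated requests with a 401 and a challenge header, so crawlers never see the content (the realm name here is a hypothetical example):

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="members-only"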

  • Disallow In Robots.txt 

Search engines can only index pages that they know about, so blocking a page from being crawled usually means its content will not be indexed. A search engine may still index a URL based on links pointing to it from other pages, but without seeing the content itself, Google aims to make such pages less visible in search results.

  • Search Console Remove URL Tool

If you need to remove a URL from Google's search results temporarily, the Remove URLs tool in Google Search Console can do it for you.

Monitoring And Identifying NOINDEXED Robots.txt Pages

Start with a list of your NOINDEXED pages; it tells you which pages are NOINDEXED and how.

Using this list, work out where each page is NOINDEXED. If some of those pages should actually be indexed, the list makes them easy to spot.

Once you have identified such a page, removing the NOINDEX from it is easy. You can then use the Search Console testing tool to check how the NOINDEX directive is working.

All in One SEO And Robots.txt Tool

Many users want to customize robots.txt for their website, so we will also cover how you can do that for your site. By default, WordPress itself creates the robots.txt.

All in One SEO includes a robots.txt module. This module lets you control your robots.txt, and its best feature is that it gives you full control over the instructions you provide to web crawlers about your site.

Robots.txt In WordPress

Firstly, you should know that WordPress dynamically generates a virtual robots.txt for every site it runs. This default robots.txt contains the standard rules for any site running on WordPress.

Secondly, because WordPress generates the robots.txt dynamically, there is no static file on your server. The content of your robots.txt is stored in your WordPress database.

The content is simply rendered in the web browser on request. This is completely normal and works better than a physical file on your server.

One important thing to know is that All in One SEO provides a way of adding custom rules to the default robots.txt that WordPress creates.

All in One SEO does not create rules for you; it only lets you add your own custom rules to the default robots.txt that WordPress generates.
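For illustration, a generated robots.txt with one custom rule added through All in One SEO might look like this (the /private-directory/ path is a hypothetical example):

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /private-directory/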

Be careful about generating a large robots.txt, for two main reasons:

  • A large robots.txt is difficult to manage because it accumulates complex rules, and keeping track of all of them can be hard.
  • Google enforces a maximum robots.txt file size of 512 KB, both to reduce strain on servers and to avoid holding connections open too long; rules beyond that limit may be ignored. A quick way to check your file's size is sketched below.
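If you are unsure how large your robots.txt currently is, a short sketch like the following can check it (Python; the example.com URL is a placeholder for your own site):

import urllib.request

# Fetch the live robots.txt and report its size in bytes and kilobytes.
with urllib.request.urlopen("https://example.com/robots.txt") as resp:
    body = resp.read()

print(f"robots.txt is {len(body)} bytes ({len(body) / 1024:.1f} KB)")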

Robots.txt Editor In All in One SEO 

Get started by clicking Tools in the All in One SEO menu. The Robots.txt Editor will appear, with an Enable Custom Robots.txt toggle; click the toggle to enable it.

Only enable Custom Robots.txt when you have a valid reason to add a custom rule. The default robots.txt created by WordPress is perfectly good for 99% of all sites.

The Custom Robots.txt feature is there for users who do need custom rules. At the bottom of the screen you will find a preview of your Custom Robots.txt, showing the default rules that WordPress has added.

Conclusion

If you do not want your content to appear in any search results, use the NOINDEX tag. We suggest this option whenever you want content removed from the searches.

The robots.txt file is the right tool for telling search engines where their crawlers can and cannot go: which files and directories to crawl, and which to skip.
