How to set up and fix your Meta Robots Tags & Robots.txt file


Learning how to set up meta robots tags and robots.txt files is crucial for progressing in technical SEO.

This article explains how to fix your robots.txt file and meta robots tags. Let us get started.


Meta Robots Tags vs. Robots.txt

Before we dig into the basics of what meta robots tags and robots.txt files are, it is essential to know that neither one is inherently better than the other for SEO.

Robots.txt files instruct crawlers about the entire website. Meta robots tags get into the nitty-gritty of a particular webpage.

I simply prefer to use meta robots tags for many elements that other SEO specialists may handle through the robots.txt file.

There is no right or wrong answer here; it is a matter of preference based on your experience.

What Is Robots.txt?

A robots.txt file directs crawlers on what should/can be crawled.

It’s part of the robots exclusion protocol (REP).

Googlebot is one example of a crawler.

Google deploys Googlebot to crawl sites and record data about them, learning how to rank each site in Google’s search results.

You can find any website’s robots.txt file by adding /robots.txt after the domain, like this: https://www.example.com/robots.txt


Here is what a fresh, default robots.txt file looks like:
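A minimal sketch of such a file, matching the note that follows (a user-agent line for all bots, and a Disallow line with a slash):

```
User-agent: *
Disallow: /
```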


Note from LinkSignal: The user-agent line tells crawlers that this robots.txt file applies to all bots that visit the site.

The slash (/) after “Disallow” tells robots not to visit any pages on the site.

Here is an excellent example of Moz’s robots.txt file.


You can clearly see they are telling the crawlers what pages to crawl using user-agents and directives.

What Are Meta Robots Tags?

Meta robots tags are HTML code fragments that tell search engine crawlers how to crawl and index pages on your site.

Meta robots tags are added to the <head> section of a web page.

Here is a great example:

<meta name="robots" content="noindex" />

The meta robots tag is made of two parts.

The first part of the tag is name="".

This is where you specify the user-agent.

The second part of the tag is content="". This is where you tell the bots exactly what you want them to do.
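Putting the two parts together, here is a sketch of a tag aimed at Google’s crawler specifically (the directive values are standard, but this particular combination is just an illustration):

```html
<!-- name selects the user-agent; content lists the directives -->
<meta name="googlebot" content="noindex, nofollow" />
```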

Why Is Robots.txt Important?

The question that we hear most often is, “Why isn’t my website ranking after months of hard work?”


Perhaps your robots.txt file looks like this:

User-agent: *
Disallow: /

This will block all web crawlers that visit your website.

Another reason robots.txt is essential is that Google has a crawl budget.

So, if you have a big website with low-quality web pages that you do not want Google to crawl, you can tell Google to “Disallow” the pages in your robots.txt file.

This frees up your crawl budget to crawl only the high-quality webpages you want Google to rank you for.
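As a sketch, a robots.txt that spends crawl budget only on valuable sections might look like this (the paths here are hypothetical):

```
User-agent: Googlebot
# Low-quality or duplicate sections we do not want crawled
Disallow: /tag/
Disallow: /search/
```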

How to Use Meta Robots Tags?

If you run a WordPress website, there are several plugins you can use to tailor your meta robots tags.

No matter how your website is built, here are three tips to use meta robots tags effectively:

  • Mind the case. Search engines recognize attributes, values, and parameters in both lowercase and uppercase, but I suggest sticking to lowercase to improve code readability. Plus, if you are an SEO marketer, it is best to get into the habit of using lowercase.
  • Avoid multiple <meta> tags. Multiple meta tags can create conflicts in your code. Use several values in a single <meta> tag instead.
  • Do not use conflicting meta tags, to avoid indexing mistakes. For example, if several code lines carry conflicting meta tags such as “follow” and “nofollow”, only “nofollow” will be honored, because robots put restrictive values first.
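The second and third tips above can be illustrated like this (a hypothetical snippet, not taken from any real site):

```html
<!-- Avoid: two separate tags that conflict; the restrictive
     "nofollow" value wins -->
<meta name="robots" content="follow" />
<meta name="robots" content="nofollow" />

<!-- Prefer: one lowercase tag carrying several values -->
<meta name="robots" content="noindex, nofollow" />
```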

Robots.txt Basics

How to Use Robots.txt?

Using robots.txt is essential for SEO success.

It can give you a headache if you do not understand how it works.

Search engines will crawl and index your webpages based on what you tell them in the robots.txt file, using directives.

Below are typical robots.txt directives that you should know:

User-agent: * — This comes first in your robots.txt file and tells crawlers that the rules below apply to all of them.

User-agent: Googlebot — tells only what you want Google’s spider to crawl.

Disallow: / — tells all crawlers not to crawl your entire site.

Disallow: — tells all crawlers to crawl your entire site.

Disallow: /staging/ — tells all crawlers to ignore your staging site.

Disallow: /ebooks/*.pdf — tells crawlers to ignore all your PDF files, which might otherwise cause duplicate content issues.

User-agent: Googlebot

Disallow: /images/ — tells only the Googlebot crawler to ignore all photos on your site.

* — a wildcard that matches any sequence of characters.

$ — is used to match the end of the URL.
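You can sanity-check directives like these with Python’s built-in robots.txt parser; in this sketch, example.com and the paths are placeholders:

```python
from urllib import robotparser

# A small robots.txt using the directives described above.
rules = """\
User-agent: *
Disallow: /staging/

User-agent: Googlebot
Disallow: /images/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# All crawlers must skip the staging site...
print(rp.can_fetch("*", "https://example.com/staging/draft.html"))    # False
# ...but the rest of the site stays crawlable.
print(rp.can_fetch("*", "https://example.com/blog/post.html"))        # True
# Only Googlebot is told to ignore the images folder.
print(rp.can_fetch("Googlebot", "https://example.com/images/a.png"))  # False
```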

To create a robots.txt file, I use Yoast for WordPress, as it already integrates with the other SEO features on my websites.


Here are a few basic things that you need to remember:

  • Format your robots.txt file carefully.
  • Make sure every URL you want to disallow is placed on a separate line, as Best Buy does, and be consistent with spacing.
  • Use lowercase to name your robots.txt file, as WebCEO does.
  • Do not use any special characters except * and $.
  • Create separate robots.txt files for different subdomains.
  • Use # to leave comments in your robots.txt file.
  • If a page is disallowed in robots.txt, link equity will not pass through it.
  • Never use robots.txt to block or protect sensitive data.
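A sketch pulling several of these rules together — one rule per line and # comments (the paths are hypothetical):

```
# robots.txt for www.example.com (main domain only;
# a subdomain like blog.example.com needs its own file)
User-agent: *
Disallow: /checkout/
Disallow: /cart/
```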

Common Robots.txt Mistakes

After working with robots.txt files, here are a few of the common mistakes that I have run into:

Mistake #1: The File Name Contains Upper Case

The only valid file name is robots.txt.

Stick to lowercase when it comes to SEO.

Mistake #2: Not Placing the Robots.txt File in the Main Directory

If you want your robots.txt file to be found, you must place it in your site’s root directory.

Mistake #3: Incorrectly Formatted User-Agent


Correct:

User-agent: Googlebot

Disallow: /

Incorrect:

Disallow: Googlebot

Mistake #4: Listing All the Files Within the Directory


Correct (block the whole directory):

User-agent: *

Disallow: /shirts/

Incorrect (listing every file):

User-agent: *

Disallow: /pajamas/flannel.html

Disallow: /pajamas/cashmere.html

Mistake #5: No Disallow Instructions

Every user-agent section needs a Disallow instruction, even an empty one; this helps search engine bots understand your intent.


Incorrect:

User-agent: Googlebot

Host: www.mysite.com

Correct:

User-agent: Googlebot

Disallow:

Host: www.mysite.com

Mistake #6: Blocking Your Entire Site


Correct:

User-agent: Googlebot

Disallow:

Incorrect:

User-agent: Googlebot

Disallow: /

Mistake #7: Using Different Directives in the * Section


Correct:

User-agent: *

Disallow: /css/

Incorrect:

User-agent: *

Disallow: /css/

Host: www.example.com

The Host directive is only recognized by some crawlers (historically Yandex), so it does not belong in the general * section.

Robots.txt & Meta Robots Tags Work Together

One of the most significant errors that I have seen while working is when the robots.txt file does not match what you’ve stated in the meta robots tags.

In my experience, Google gives priority to whatever is prohibited by the robots.txt file.

You can avoid conflicts between robots.txt and meta robots tags by clearly showing search engines which pages should be indexed and which should not.

Welcome to the Future

If you still remember the days of renting a movie from a Blockbuster in a strip mall, then the idea of using meta tags or robots.txt may seem unusual to you.

But, if you have watched “Stranger Things,” then welcome to the future, my friend.

Hopefully, this guide provided you with more insight into the basics of meta tags and robots.txt.

If you were hoping for time-traveling machines or robots flying in on jet packs after reading this post, then I am sorry. 

But if you still have some questions, do not hesitate to ask us in the comments section below.
