Where is robots.txt in WordPress?

Search engine optimization (SEO) plays a crucial role in driving organic traffic to your website. To ensure search engines can efficiently crawl and index your site, it’s essential to understand the robots.txt file.

In this blog post, we’ll explore the purpose, location, configuration, best practices, and common mistakes related to the robots.txt file in WordPress.

What is the Robots.txt File?

The robots.txt file is a plain text file that provides instructions to search engine crawlers. It serves as a guide for search engines to determine which parts of your website they should or should not crawl.

By specifying the access permissions for different web crawlers, you can control how your site appears in search engine results.

2 Ways to Find the robots.txt File in WordPress

There are two ways to find the robots.txt file in WordPress.

Note: By default, WordPress doesn’t create a physical robots.txt file; it serves a virtual one that is generated dynamically. If you want to edit the file, you will first need to create a physical one. You can do this in two ways: by installing an SEO plugin such as “Rank Math”, or by manually creating the file in your file manager.

1. Find robots.txt via cPanel File Manager

Follow the steps below to access robots.txt via the cPanel File Manager:

  1. Log in to your cPanel.
  2. Open File Manager.
  3. Navigate to the root directory of your WordPress installation (usually public_html).
  4. Look for a file named “robots.txt”.
  5. If the robots.txt file doesn’t exist, create a new plain text file and name it “robots.txt”.
  6. Open the robots.txt file in a text editor.
  7. Add the desired rules and directives.
  8. Save the robots.txt file.
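For step 7, a minimal starting configuration like the one below is common for WordPress sites. The sitemap URL is a placeholder, so replace it with your own, and adjust the rules to suit your site:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://example.com/sitemap.xml
```

The Allow line keeps admin-ajax.php reachable for crawlers even though the rest of /wp-admin/ is blocked, since some themes and plugins rely on it for front-end functionality.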

2. Access robots.txt via WordPress Admin Dashboard

You can also access robots.txt from the WordPress admin dashboard.

Follow these steps:

  1. Go to Plugins and install a file manager plugin such as “File Manager”.
  2. After installing and activating the plugin, open it.
  3. In the root directory, you will find a file named “robots.txt”.

How Does robots.txt Work?

The robots.txt file consists of directives that provide instructions to search engine crawlers. The two most commonly used directives are “User-agent” and “Disallow”.

“User-agent” specifies the search engine crawler to which the directive applies, while “Disallow” indicates the URLs or directories that the crawler should not access.

Here’s an example of a basic robots.txt file:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/

In this example, the “User-agent” directive is set to “*” to apply to all search engine crawlers. The subsequent “Disallow” directives disallow access to the “/wp-admin/” and “/wp-includes/” directories, which are typically restricted from crawling.
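You can check how a crawler would interpret these rules with Python’s standard-library urllib.robotparser module. Here is a small sketch that parses the example file above (the example.com URLs are placeholders):

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt rules, supplied as a list of lines.
rules = [
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
]

rp = RobotFileParser()
rp.parse(rules)

# /wp-admin/ is disallowed for every crawler...
print(rp.can_fetch("*", "https://example.com/wp-admin/"))    # False

# ...while ordinary content remains crawlable.
print(rp.can_fetch("*", "https://example.com/blog/hello/"))  # True
```

The same parser can also load a live file via rp.set_url(...) followed by rp.read(), which is a quick way to sanity-check your own site’s rules.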

Best Practices for Robots.txt File Management

When managing your robots.txt file, consider the following best practices:

  1. Familiarize yourself with the robots.txt syntax and directives.
  2. Double-check your robots.txt file to avoid unintended blocks or permissions.
  3. Test your robots.txt file with a tool such as Google Search Console to ensure it functions as intended.
  4. Regularly update the robots.txt file to reflect site changes and new directives.
  5. Avoid blocking essential pages, such as your homepage or important content.

Advanced Robots.txt Configuration

Besides basic directives, you can utilize advanced configurations in your robots.txt file:

  1. “Allow” directive: Use this directive to allow access to specific URLs or directories that would otherwise be disallowed.
  2. “Crawl-delay” directive: Specify the time delay (in seconds) between successive crawler requests to manage server load. Note that not all crawlers honor this directive; Googlebot, for example, ignores it.
  3. Specifying sitemaps: Include the location of your XML sitemap in the robots.txt file to aid search engine crawling and indexing.
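Putting these together, an extended robots.txt might look like the following. The sitemap URL and the delay value are illustrative, and as noted above, Crawl-delay is ignored by some crawlers:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Crawl-delay: 10

Sitemap: https://example.com/sitemap_index.xml
```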

Common Robots.txt Mistakes to Avoid

While the robots.txt file is essential for SEO, mistakes can inadvertently harm your site’s visibility. Avoid these common errors:

  1. Blocking important pages or sections: Double-check your “Disallow” directives to prevent blocking crucial content from search engines.
  2. Syntax errors: Ensure the correct syntax is used for directives, and avoid typos or missing characters that could render the file ineffective.
  3. Using wildcards incorrectly: Be cautious with the wildcard (*) in “User-agent” directives. A “User-agent: *” group applies to every crawler, so an overly broad “Disallow” rule under it can inadvertently block all search engines from your site.


Conclusion

The robots.txt file is a vital tool for managing search engine crawlers and optimizing your WordPress site for search visibility.

By understanding its purpose, properly configuring it, and following best practices, you can ensure search engines efficiently crawl and index your website.

Regularly review and update your robots.txt file to adapt to changes in your site’s structure and content. Remember, a well-optimized robots.txt file is a crucial component of your SEO strategy.