Here is a simple but helpful tip to improve your site's search engine optimization (SEO) and its ranking in Google search: add a robots.txt file. This small file is immensely helpful when it comes to getting your site higher in the rankings on your favorite search engine. For most of us, that is Google, but we don't judge.
What is a Robots.txt File?
This file is very simple; it only takes two lines to work. A robots.txt file exists for the benefit of tools used by search engines called web crawlers. These bots quite literally crawl the web, visiting websites so the search algorithms can rank what users are looking for when they search. When your site has a robots.txt file in its root directory, it gives those crawlers instructions on how they should crawl the site. In other words, the file is built by you, but it is read only by robots.
What is in the File?
This file only needs to be two lines. Here is all you need to make your robots.txt file:
User-agent: [user-agent name]
Disallow: [URL string not to be crawled]
In a typical file, the Disallow line is left empty because you want the crawler to see your whole site. The user agent is almost always the asterisk character (*), which applies the rule to every bot, unless of course you want to give separate rules to a certain bot like Googlebot or Bingbot. Your entire site will then be crawled according to the instructions in this file, almost like handing the crawler a map of what to visit. Be warned: do not use a robots.txt file and the Disallow directive to hide your web pages from Google search results. A disallowed page can still end up indexed if other sites link to it; if you need a page kept out of the results, use something like a noindex directive instead.
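To make that concrete, here is a sketch of what a small robots.txt might look like, assuming a hypothetical /private/ directory you would rather one particular bot skipped:

User-agent: *
Disallow:

User-agent: Googlebot
Disallow: /private/

The first group allows every crawler everywhere; the second gives Googlebot its own rule, and a bot follows the most specific group that matches its name. The /private/ path is only a placeholder for illustration.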
I hope this has assisted you as a webmaster and convinced you why it is important to include this file in your website's root directory. One final tip: the filename is case-sensitive, so don't name it Robots.txt or it won't be found. Additionally, if you have subdomains, you must create a separate robots.txt file for each one if you want them to get the same kind of benefit as your root domain.
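If you want to sanity-check that crawlers can actually find and read your file, here is a minimal sketch using Python's built-in urllib.robotparser; the example.com addresses are placeholders for your own domain and pages:

from urllib.robotparser import RobotFileParser

# Point the parser at your site's robots.txt (placeholder domain).
rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the file

# Ask whether a given user agent is allowed to crawl a given URL.
print(rp.can_fetch("Googlebot", "https://www.example.com/private/page.html"))
print(rp.can_fetch("*", "https://www.example.com/"))

If can_fetch returns the answers you expect, the file is named correctly, sitting at the root, and saying what you meant it to say.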