How It Works
1. Choose a Preset or Build: Start with a preset or add user-agent sections manually.
2. Add Rules: Set allow/disallow paths, crawl-delay, and sitemap URL.
3. Copy or Download: Copy the generated robots.txt or download the file.
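A file produced by these steps might look like the sketch below (the paths and sitemap URL are placeholder examples, not defaults of this tool):

```
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://example.com/sitemap.xml
```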
Frequently Asked Questions
What is a robots.txt file?
A robots.txt file tells search engine crawlers which pages or files they can or cannot request from your site. It is placed in the root directory of your website (e.g., example.com/robots.txt).
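For instance, the smallest useful robots.txt, served from the site root, simply allows all crawlers everywhere:

```
User-agent: *
Allow: /
```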
What does "User-agent: *" mean?
The asterisk (*) is a wildcard that matches all web crawlers. Rules under "User-agent: *" apply to every bot unless a more specific user-agent section overrides them.
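As an illustration, the file below blocks /private/ for all bots, but the more specific Googlebot section overrides the wildcard, so Googlebot may crawl everything (an empty Disallow means "nothing is disallowed"):

```
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow:
```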
Does robots.txt block pages from appearing in search results?
Not necessarily. While Disallow prevents crawling, a page can still appear in search results if other sites link to it. To truly prevent indexing, use a "noindex" meta tag or X-Robots-Tag HTTP header.
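The two noindex mechanisms look like this: a meta tag in the page's HTML head, or an HTTP response header (useful for non-HTML files such as PDFs):

```
<meta name="robots" content="noindex">
```

```
X-Robots-Tag: noindex
```

Note that for either to work, the page must remain crawlable; if robots.txt blocks the URL, the crawler never sees the noindex directive.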
What is Crawl-delay?
Crawl-delay tells compliant bots to wait a specified number of seconds between requests. This can reduce server load from aggressive crawlers. Note that Googlebot ignores Crawl-delay; use Google Search Console instead.
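A section using Crawl-delay might look like this (Bingbot is shown as an example of a bot that honors the directive; 10 is an arbitrary illustrative value):

```
User-agent: Bingbot
Crawl-delay: 10
```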