ClaudeBot is a web crawler that downloads training data for LLMs (Large Language Models). The bot is operated by Anthropic, the company behind Claude.ai.
ClaudeBot/1.0; +claudebot@anthropic.com
Should you block ClaudeBot or limit its access, and how can you do that? Find out more in this article.
Fast Scraping
I have seen this bot crawling at excessive speeds, in some cases causing availability and performance problems. There are also posts on Reddit reporting the same behavior.

iFixit
A post from Kyle Wiens (CEO of iFixit) went somewhat viral after iFixit was heavily scraped. Their terms of service specifically prohibit such actions. However, their robots.txt file was only updated during the massive scraping incident.

As an operator, it is nearly impossible to keep track of all these changes in the bot landscape and address each one individually in the robots.txt file. Therefore, I believe that iFixit’s response is exactly how this issue should be handled from a technical standpoint.
Blocking?
You can block access to your website by user agent. Before doing so, consider whether you want your website’s information to appear in large language model (LLM) answers in the future. If you prefer not to have your data included in LLM answers, then blocking is a legitimate option. There is also a risk that users will rely less and less on traditional search engines. Having your data in an LLM could therefore prove beneficial, neutral, or detrimental in the future, and for now, no one knows which way it will go.
Block AI Scrapers and Crawlers on Cloudflare
Cloudflare is pushing ahead in this space, allowing you to block AI scrapers and crawlers with a single click.

If you’d like to enable this, go to the Security > Bots section and select ‘Block bots from scraping your content for AI applications like model training’. Additional reading is available in the Cloudflare blog post.
Robots.txt
You can signal with the robots.txt file that you do not want ClaudeBot to index or crawl your website.
User-agent: ClaudeBot
Disallow: /
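You can sanity-check that this rule actually disallows ClaudeBot using Python’s standard urllib.robotparser module. The sketch below parses the two-line rule directly; the example.com URL is only a placeholder:

```python
from urllib.robotparser import RobotFileParser

# The same rule as in the robots.txt snippet above.
rules = """\
User-agent: ClaudeBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# ClaudeBot's full user-agent string is matched by its first token ("ClaudeBot").
print(rp.can_fetch("ClaudeBot/1.0; +claudebot@anthropic.com", "https://example.com/page"))  # False
# Other agents are unaffected, since no rule applies to them.
print(rp.can_fetch("Mozilla/5.0", "https://example.com/page"))  # True
```

In production you would point `rp.set_url()` at your live robots.txt instead of parsing an inline string.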
iFixit
The press reported on the bot ignoring websites’ terms of service after a post from Kyle Wiens (CEO of iFixit):
Anthropic’s crawler is ignoring websites’ anti-AI scraping policies

However, we should also consider that the entry in the robots.txt file was only added after they started to encounter issues, not before. While this does not excuse the situation, there is currently no evidence that ClaudeBot is ignoring the robots.txt file.

If you are concerned about this issue or experiencing the same problem, add ClaudeBot to your robots.txt file as well.
User Agent based block
One of the simplest ways to take action is to block based on the User-Agent header.
ClaudeBot/1.0; +claudebot@anthropic.com
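As a sketch of what this could look like on a web server, here is an nginx fragment (to be placed inside a `server` block) that returns 403 to any request whose User-Agent header contains “ClaudeBot”. This is one possible approach; adapt it to your own server and configuration layout:

```nginx
# Inside a server { ... } block: reject requests whose User-Agent
# matches "ClaudeBot" (case-insensitive).
if ($http_user_agent ~* "claudebot") {
    return 403;
}
```

Keep in mind that the User-Agent header is trivially spoofed, so this only stops well-behaved clients that identify themselves honestly.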
IP based block
At this time, Anthropic publishes no public list of the IP addresses used for crawling, and there is no official PTR record to verify against. Therefore, blocking based on IP addresses is not a reliable option. However, if your site is being overwhelmed, processing your access logs and blocking the involved IP addresses may be necessary.
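To identify the involved addresses, you can extract per-IP request counts for ClaudeBot from your access logs. A minimal Python sketch, assuming the common “combined” log format (the sample lines are made up for illustration):

```python
import re
from collections import Counter

# Fabricated sample lines in nginx/Apache "combined" log format.
sample_log = """\
203.0.113.7 - - [10/May/2024:12:00:01 +0000] "GET /a HTTP/1.1" 200 512 "-" "ClaudeBot/1.0; +claudebot@anthropic.com"
203.0.113.7 - - [10/May/2024:12:00:02 +0000] "GET /b HTTP/1.1" 200 512 "-" "ClaudeBot/1.0; +claudebot@anthropic.com"
198.51.100.9 - - [10/May/2024:12:00:03 +0000] "GET /c HTTP/1.1" 200 512 "-" "Mozilla/5.0"
"""

def claudebot_ips(lines):
    """Count requests per client IP where the User-Agent mentions ClaudeBot."""
    hits = Counter()
    for line in lines:
        ip = line.split(" ", 1)[0]          # first field is the client IP
        quoted = re.findall(r'"([^"]*)"', line)
        if quoted and "ClaudeBot" in quoted[-1]:  # last quoted field is the UA
            hits[ip] += 1
    return hits

print(claudebot_ips(sample_log.splitlines()))  # Counter({'203.0.113.7': 2})
```

In practice you would feed it your real log file (e.g. `claudebot_ips(open("/var/log/nginx/access.log"))`) and feed the resulting IPs into your firewall of choice.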
See also DDoS from Anthropic AI