Crawl Budget
The number of pages Google will crawl on your site within a given timeframe. Matters most for large sites. If Googlebot wastes its budget on junk URLs, your important pages get crawled less often.
Why It Matters
For most small to medium sites, crawl budget is irrelevant - Google can crawl everything without breaking a sweat. But for sites with tens of thousands of pages or more, it becomes critical.
If Googlebot spends its time crawling filtered product pages, parameter-heavy URLs, old pagination, or duplicate content, your new and updated pages get crawled less frequently. That means slower indexing, slower ranking updates, and missed opportunities.
In Practice
Audit your crawl stats in Google Search Console under Settings > Crawl Stats. Look at what Googlebot is spending its time on. If most of its crawls are hitting low-value pages, you have a crawl budget problem.
Fix it by: blocking junk URLs with robots.txt, cleaning up parameter-based URLs with canonical tags, removing or noindexing thin content pages, and ensuring your XML sitemap only includes pages you want indexed.
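As an illustrative sketch of the first fix, a robots.txt that blocks parameter-heavy and filtered URLs might look like the following. The /filter/ path and the sort parameter are hypothetical examples, not defaults - match the patterns to the junk URLs your own crawl stats show:

```
User-agent: *
# Block faceted/filtered listing pages (hypothetical path)
Disallow: /filter/
# Block sorted variants of category pages (hypothetical parameter)
Disallow: /*?sort=

Sitemap: https://example.com/sitemap.xml
```

And for parameter URLs you still want crawlable but consolidated, a canonical tag in the page's head points Google at the preferred version (example URL is hypothetical):

```html
<link rel="canonical" href="https://example.com/products/widgets" />
```

Note that robots.txt blocks crawling outright, while a canonical tag lets Google crawl the variant but consolidates signals to one URL - pick the tool that matches whether the variant has any value at all.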
Fix crawl errors too. If Googlebot keeps hitting 404s or redirect chains, that's wasted crawl budget.
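To show what collapsing a redirect chain looks like in practice, here is a hedged sketch in nginx config (the URLs are hypothetical). Instead of /old-page redirecting to /interim-page and then to /new-page, every legacy URL points straight at the final destination with a single 301:

```
# Before: /old-page -> /interim-page -> /new-page (two hops).
# After: each legacy URL answers with one 301, so Googlebot
# spends one request per URL instead of following a chain.
location = /old-page {
    return 301 https://example.com/new-page;
}
location = /interim-page {
    return 301 https://example.com/new-page;
}
```

The same idea applies in Apache .htaccess or at the CDN layer: one hop to the final URL, never a chain.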
Related Terms
Crawling: How search engine bots discover and download your pages - the first step to ranking.

Robots.txt: A file telling search engine crawlers which parts of your site they can access.

Sitemap (XML): An XML file listing all pages you want search engines to discover and index.

Googlebot: Google's web crawler that discovers, downloads, and indexes your pages.

Indexing: Adding a crawled page to Google's database so it can appear in search results.