Crawling
The process of search engine bots discovering and downloading pages from your website. If a page can't be crawled, it can't be indexed. If it can't be indexed, it can't rank.
Why It Matters
Crawling is step one in the entire search process. Before Google can evaluate, index, or rank your content, it needs to find it and download it. If your pages can't be crawled - because of broken links, blocked resources, server errors, or architectural problems - they simply don't exist in Google's world.
Understanding how crawling works is the foundation of technical SEO. Everything else builds on it.
In Practice
Make sure Google can reach your important pages. Check robots.txt to confirm you're not accidentally blocking critical content. Use internal links to connect your pages - orphan pages with no links pointing to them are hard for crawlers to discover.
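To sanity-check robots.txt rules before relying on them, you can use Python's built-in `urllib.robotparser`. This is a minimal sketch: the rules and URLs below are hypothetical examples, and in practice you'd point the parser at your live file instead of inline rules.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules. Against a live site you would instead do:
#   rp.set_url("https://example.com/robots.txt"); rp.read()
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)  # parse() accepts the file's lines directly

# Check whether Googlebot may fetch specific URLs under these rules
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))    # True
print(rp.can_fetch("Googlebot", "https://example.com/private/page")) # False
```

Note that Python's parser applies rules in file order rather than Google's longest-match precedence, so treat it as a quick check, not a perfect replica of Googlebot's behaviour.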
Submit an XML sitemap through Google Search Console as a roadmap for Googlebot. Monitor the Index Coverage report for crawl errors - 404s, server errors, redirect loops.
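For illustration, a minimal XML sitemap looks like this (the URLs and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/post</loc>
  </url>
</urlset>
```

You can submit the file in Search Console's Sitemaps report, or reference it from robots.txt with a `Sitemap:` line.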
Use the URL Inspection tool to see exactly how Google crawls and renders any page on your site. If the rendered version is missing content, you have a JavaScript rendering issue.
Related Terms

Googlebot
Google's web crawler, which discovers and downloads your pages so they can be indexed.
Indexing
Adding a crawled page to Google's database so it can appear in search results.
Crawl Budget
How many pages Google will crawl on your site in a given timeframe.
Robots.txt
A file telling search engine crawlers which parts of your site they can access.
Technical SEO
Optimising your site's infrastructure so search engines can crawl, render, and index it.