6 things that happen when Googlebot can’t crawl your website

Ever wondered what would happen if you prevented Google from crawling your website for a few weeks? Technical SEO expert Kristina Azarenko has published the results of such an experiment.

Six surprising things that happened. Here’s what Azarenko observed while Googlebot couldn’t crawl her site from Oct. 5 to Nov. 7:

Favicon was removed from Google Search results.

Video search results took a big hit and still haven’t recovered post-experiment.

Positions remained relatively stable, though they were slightly more volatile in Canada.

Traffic saw only a slight decrease.

An increase in reported indexed pages in Google Search Console. Why? Pages with noindex meta robots tags ended up being indexed because Google couldn’t crawl the site to see those tags (see the sketch after this list).

Multiple alerts in Google Search Console (e.g., “Indexed, though blocked by robots.txt” and “Blocked by robots.txt”).
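
About that increase in indexed pages: robots.txt controls crawling, not indexing. When Googlebot is blocked from fetching a page, it can’t read a noindex meta robots tag in that page’s HTML, so the URL can still end up indexed from links alone. A minimal sketch of the two pieces involved (Azarenko’s exact configuration isn’t detailed here; the directives below are illustrative only):

    # robots.txt – blocks Googlebot from crawling the entire site
    User-agent: Googlebot
    Disallow: /

    <!-- on-page tag Googlebot can no longer read while crawling is blocked -->
    <meta name="robots" content="noindex">

While the disallow rule is in place, the meta tag is effectively invisible to Google, which is why “noindexed” pages can show up as indexed in Search Console.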

Why we care. Testing is a crucial element of SEO. Any change, intentional or unintentional, can impact your rankings, traffic and bottom line, so it’s good to understand how Google might react. Also, most companies can’t attempt this sort of experiment themselves, so this is useful information to have.

The experiment. You can read all about it in Unexpected Results of My Google Crawling Experiment.

Another similar experiment. Patrick Stox of Ahrefs has also shared results of blocking two high-ranking pages with robots.txt for five months. The impact on ranking was minimal, but the pages lost all their featured snippets.
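
For reference, blocking individual pages rather than a whole site only requires narrower Disallow rules. A hedged sketch (the paths are hypothetical, not the pages Stox actually blocked):

    # robots.txt – blocks crawling of two specific pages only
    User-agent: *
    Disallow: /blog/page-one/
    Disallow: /blog/page-two/

That setup would be consistent with the Ahrefs result: Google can keep ranking the URLs on existing signals, but without crawling the content it can no longer pull featured snippets from those pages.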
