
Nothing To Worry About The Crawl Error Increase – They Are Just Hungry

Over the past few days, webmasters have been reporting an increase in crawl errors. Google has refreshed its crawl error data, and the reports are mainly collected in a WebmasterWorld thread.

Crawl errors are the issues Googlebot encounters while crawling websites. The increase is suspected to have nothing to do with an algorithm update; Google has stated more than once that these crawl changes in Google Search Console are unrelated to the current algorithm updates. Still, to ensure your website stays free of such errors, consider hiring a digital marketing company in India.

The reported crawl errors are mostly:

Not-followed errors: mostly URLs that returned a 301 or 302 but had a redirect issue that prevented Googlebot from crawling them.

Access-denied errors: these cover the 401, 403, and 407 response codes. Some of these showing up as “other” were caused by a bug that has since been fixed.

Things that changed:

URL vs. Site Errors:

The crawl errors have been organized into two main categories: URL errors and Site errors. Site errors are site-wide, as opposed to URL errors, which affect specific URLs.

URL errors are categorized as:

1. Not followed: URLs with redirects that Googlebot had trouble crawling (for example, because of a redirect loop). The UI lists whether the URL initially returned a 301 or 302, but gives no details about the redirect error.

2. Not found: Normally, these are URLs that return a 404 or 410.

3. Access denied: These URLs returned a 401, 403, or 407 response code. That usually means the URLs prompt a login, so it is unlikely to be an actual error. You might want to block such URLs to improve crawl efficiency (see the sketch after this list).

4. Soft 404: These URLs are detected as returning an error page even though a 404 response code isn't returned.
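To make the mapping between these categories and raw HTTP responses concrete, here is a minimal Python sketch. It assumes the third-party requests library; the example URL and the soft-404 phrase check are hypothetical stand-ins, not Google's actual detection logic. It follows a URL's redirect chain and buckets the final response roughly the way Search Console does:

```python
import requests  # third-party: pip install requests
from urllib.parse import urljoin

# Hypothetical example URL; substitute a page from your own site.
URL = "https://www.example.com/some-page"

def classify_crawl_response(url, max_hops=10):
    """Bucket a URL roughly the way Search Console's URL errors do."""
    seen = set()
    try:
        for _ in range(max_hops):
            # allow_redirects=False lets us inspect each hop ourselves,
            # so redirect loops show up as repeated targets.
            resp = requests.get(url, allow_redirects=False, timeout=10)
            if resp.status_code in (301, 302):
                target = urljoin(url, resp.headers.get("Location", ""))
                if target in seen:
                    return "not followed (redirect loop)"
                seen.add(target)
                url = target
                continue
            break
        else:
            return "not followed (too many redirect hops)"
        if resp.status_code in (404, 410):
            return "not found"
        if resp.status_code in (401, 403, 407):
            return "access denied"
        if resp.status_code == 200 and "page not found" in resp.text.lower():
            # Crude stand-in for Google's soft-404 detection, which is far
            # more sophisticated than a simple phrase match.
            return "possible soft 404"
        return f"ok ({resp.status_code})"
    except requests.RequestException as exc:
        return f"fetch failed: {exc}"

print(classify_crawl_response(URL))
```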

Site errors are categorized as:

Robots.txt fetch: These errors are specific to the robots.txt file. If Googlebot receives a server error while trying to access this file, it has no way of knowing whether the robots.txt file exists at all (see the sketch after this list).

Server connectivity: These issues include things like connection reset, connection refused, no response, and network unreachable.

DNS: These errors include general DNS errors, domain name not found, and DNS lookup timeouts.
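If you want a quick self-check against these three site-level failure modes, a minimal Python sketch (standard library only; the hostname is a hypothetical placeholder) can probe them in the order a crawler depends on them: DNS resolution first, then server connectivity, then the robots.txt fetch:

```python
import socket
import urllib.request
from urllib.error import HTTPError, URLError

# Hypothetical hostname; substitute your own site.
HOST = "www.example.com"

# 1. DNS: does the domain name resolve at all?
try:
    ip = socket.gethostbyname(HOST)
    print(f"DNS ok: {HOST} -> {ip}")
except socket.gaierror as exc:
    print(f"DNS error: {exc}")

# 2. Server connectivity: can we open a TCP connection on port 443?
try:
    with socket.create_connection((HOST, 443), timeout=10):
        print("Server connectivity ok")
except OSError as exc:
    print(f"Server connectivity error: {exc}")

# 3. Robots.txt fetch: a 200 or a clean 404 are both fine, but a 5xx
#    leaves the crawler unable to tell whether the file exists.
try:
    with urllib.request.urlopen(f"https://{HOST}/robots.txt", timeout=10) as resp:
        print(f"robots.txt fetch ok: HTTP {resp.status}")
except HTTPError as exc:
    if exc.code == 404:
        print("robots.txt missing (404) - treated as 'allow all'")
    else:
        print(f"robots.txt fetch error: HTTP {exc.code}")
except URLError as exc:
    print(f"robots.txt fetch failed: {exc.reason}")
```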

Google displays trends over the last 90 days for each type of error. Each day's count is the aggregate number of URLs with that error type that Google knows about. As Google recrawls a URL and the error no longer appears, it is removed from the count and the list.
