This is a really simple link crawler for sites that have a sitemap.xml available.
Have you ever come across a 404 link on some blog? Maybe you want to verify that all the links on your blog are active and working? Well, then this is one way to look for broken links.
- Clone or download and unzip to your location of choice
- Navigate to the folder and run

  ```sh
  go run . https://example.com/sitemap.xml
  ```
- Need help or curious about the available flags? Run

  ```sh
  go run . -h
  ```
- The program reads sitemap.xml, collects every `<loc>` element, and extracts the link inside. If sitemap.xml contains a sitemap index, it crawls the index and fetches links from every sitemap listed there (see the first sketch after this list).
- After collecting all page links from the sitemap, it visits every page and fetches its content with an HTTP GET request.
- It then scans that content for `<a href="">` tags and extracts the URL inside each one.
- After this, it verifies that each extracted URL is valid and makes a HEAD request for it. It also remembers each URL in memory so that unique URLs are only requested once (see the second sketch after this list).
- Finally, it records the HTTP status code of each request and keeps the 3xx, 4xx, and 5xx responses for display and log output.
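A minimal sketch of the sitemap-parsing step, assuming the standard library's `encoding/xml` decoder; the struct and function names here are illustrative and not taken from this repo:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"net/http"
)

// urlSet matches both <urlset> and <sitemapindex> documents, since
// both wrap their entries' locations in <loc> elements.
type urlSet struct {
	Locs []string `xml:"url>loc"`
	Maps []string `xml:"sitemap>loc"`
}

func fetchLocs(sitemapURL string) ([]string, error) {
	resp, err := http.Get(sitemapURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var set urlSet
	if err := xml.NewDecoder(resp.Body).Decode(&set); err != nil {
		return nil, err
	}

	// If this was a sitemap index, recurse into every child sitemap.
	locs := set.Locs
	for _, child := range set.Maps {
		childLocs, err := fetchLocs(child)
		if err != nil {
			return nil, err
		}
		locs = append(locs, childLocs...)
	}
	return locs, nil
}

func main() {
	locs, err := fetchLocs("https://example.com/sitemap.xml")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(locs), "pages found")
}
```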
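And a sketch of the link-extraction and checking step, again with assumed names: a plain regexp for `<a href="">` tags, a map for de-duplication, and a client that does not follow redirects so 3xx responses stay visible:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"regexp"
)

// hrefRe naively captures the value inside href="..." on anchor tags.
var hrefRe = regexp.MustCompile(`<a[^>]+href="([^"]+)"`)

// client does not follow redirects, so 3xx status codes are reported as-is.
var client = &http.Client{
	CheckRedirect: func(req *http.Request, via []*http.Request) error {
		return http.ErrUseLastResponse
	},
}

// seen remembers URLs that were already checked, so each unique URL
// only gets one HEAD request.
var seen = map[string]bool{}

func checkPage(pageURL string) {
	resp, err := http.Get(pageURL)
	if err != nil {
		fmt.Println("fetch failed:", pageURL, err)
		return
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	for _, m := range hrefRe.FindAllStringSubmatch(string(body), -1) {
		link := m[1]
		if seen[link] {
			continue
		}
		seen[link] = true

		// Skip anything that is not an absolute http(s) URL in this sketch.
		u, err := url.Parse(link)
		if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
			continue
		}

		head, err := client.Head(link)
		if err != nil {
			fmt.Println("HEAD failed:", link, err)
			continue
		}
		head.Body.Close()

		// Keep only 3xx, 4xx and 5xx responses for the report.
		if head.StatusCode >= 300 {
			fmt.Println(head.StatusCode, link, "found on", pageURL)
		}
	}
}

func main() {
	checkPage("https://example.com/")
}
```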
This script still needs some limits. Running it on large sitemaps will probably cause errors because too many goroutines are launched at once. This is on the to-do list for a rainy day.
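One common way to cap the number of in-flight goroutines is a buffered channel used as a semaphore; this is a sketch of that idea, not the actual planned fix:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	pages := []string{"https://example.com/a", "https://example.com/b"}

	const maxWorkers = 10
	sem := make(chan struct{}, maxWorkers) // at most maxWorkers goroutines run at once
	var wg sync.WaitGroup

	for _, page := range pages {
		wg.Add(1)
		sem <- struct{}{} // blocks once maxWorkers goroutines are already running
		go func(p string) {
			defer wg.Done()
			defer func() { <-sem }()
			fmt.Println("crawling", p) // crawl/check the page here
		}(page)
	}
	wg.Wait()
}
```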