This repo contains a robots.txt file template for CS-Cart and Multi-Vendor.
For more information about how robots.txt
files work, and how to use them, the following resources are a great place to start:
- Introduction to robots.txt
- Robots.txt IETF specification draft
- Robots.txt Google specifications
- The Web Robot Pages, an information resource dedicated to web robots
A robots.txt file tells search engine crawlers which pages or files the crawler can or can't request from your site.
Sometimes search engines index pages that should not be publicly accessible. Adding `Disallow` directives with specific locations to robots.txt can make your website more secure.
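For illustration, here is a minimal sketch of what such directives can look like. The paths below (such as `/app/` and `/var/`) are assumed examples of a typical CS-Cart directory layout, not the exact contents of this template:

```
# Hypothetical example: paths are illustrative, not the repo's actual template.
User-agent: *
# Keep internal application and cache directories out of search results
Disallow: /app/
Disallow: /var/
# Keep checkout pages from being indexed
Disallow: /index.php?dispatch=checkout
# Point crawlers to the sitemap (replace with your own domain)
Sitemap: https://example.com/sitemap.xml
```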
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.