database of robots.txt-derived permissions
Description

WWW::RobotRules parses /robots.txt files as specified in "A Standard for Robot Exclusion" at <http://www.robotstxt.org/wc/norobots.html>. Webmasters can use the /robots.txt file to forbid conforming robots from accessing parts of their web site. The parsed files are kept in a WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited. The same WWW::RobotRules object can be used for one or more parsed /robots.txt files on any number of hosts.
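A minimal usage sketch follows, based on the module's documented interface: new() takes the robot's User-Agent name, parse() feeds it a fetched /robots.txt, and allowed() checks whether a URL may be retrieved. The robot name "MyBot/1.0" and the host example.com are placeholder assumptions for illustration.

    use strict;
    use warnings;
    use WWW::RobotRules;
    use LWP::Simple qw(get);

    # Create a rules object identified by our robot's name; allowed()
    # answers according to the User-Agent records matching this name.
    my $rules = WWW::RobotRules->new('MyBot/1.0');

    # Fetch and parse a site's /robots.txt (example.com is a placeholder).
    my $robots_url = 'http://example.com/robots.txt';
    my $robots_txt = get($robots_url);
    $rules->parse($robots_url, $robots_txt) if defined $robots_txt;

    # The same object can hold parsed rules for any number of hosts;
    # check each URL before fetching it.
    my $url = 'http://example.com/private/page.html';
    if ($rules->allowed($url)) {
        my $content = get($url);
        # ... process $content ...
    }

Because one object can accumulate rules from many hosts, a crawler typically keeps a single WWW::RobotRules instance and calls parse() once per newly encountered host.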