Metadata-Version: 2.4
Name: robotexclusionrulesparser
Version: 1.7.1
Summary: A robots.txt parser alternative to Python's robotparser module
Home-page: http://nikitathespider.com/python/rerp/
Download-URL: http://nikitathespider.com/python/rerp/robotexclusionrulesparser-1.7.1.tar.gz
Author: Philip Semanchuk
Author-email: philip@pyspoken.com
Maintainer: Philip Semanchuk
License: http://creativecommons.org/licenses/BSD/
Keywords: robots.txt robot parser
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: POSIX
Classifier: Operating System :: Unix
Classifier: Environment :: Win32 (MS Windows)
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Utilities
License-File: LICENSE
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: download-url
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: license-file
Dynamic: maintainer
Dynamic: summary

Robotexclusionrulesparser is an alternative to the Python standard library
module robotparser. It fetches and parses robots.txt files and can answer
whether a given user agent is permitted to visit a given URL.

This module has some features that the standard library module robotparser
does not, including the ability to decode non-ASCII robots.txt files, respect
for Expires headers, support for the Crawl-delay and Sitemap directives, and
support for wildcard syntax in path names.

Complete documentation (including a comparison with the standard library
module robotparser) is available in ReadMe.html.

Robotexclusionrulesparser is released under a BSD license.
