
Python's robotparser ignoring sitemaps


I have the following robots.txt:

User-agent: *
Disallow: /images/
Sitemap: http://www.example.com/sitemap.xml

and the following robotparser code:

import robotparser
import urlparse

def init_robot_parser(URL):
    robot_parser = robotparser.RobotFileParser()
    robot_parser.set_url(urlparse.urljoin(URL, "robots.txt"))
    robot_parser.read()

    return robot_parser

But when I do a print robot_parser just above the return robot_parser line, all I get is

User-agent: *
Disallow: /images/

Why is it ignoring the Sitemap line? Am I missing something?


Sitemap is an extension to the standard, and robotparser doesn't support it. You can see in the source that it only processes "user-agent", "disallow", and "allow". For its current functionality (telling you whether a particular URL is allowed), understanding Sitemap isn't necessary.
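If all you actually need are the Sitemap URLs, one workaround is to fetch robots.txt yourself and pick out the Sitemap lines that robotparser drops. A minimal sketch (the get_sitemaps helper is something made up for illustration, not part of robotparser):

import urllib2

def get_sitemaps(robots_url):
    # Collect every "Sitemap:" line; the directive is case-insensitive
    # and sits outside any User-agent group.
    sitemaps = []
    for line in urllib2.urlopen(robots_url).read().splitlines():
        if line.lower().startswith("sitemap:"):
            sitemaps.append(line.split(":", 1)[1].strip())
    return sitemaps

print(get_sitemaps("http://www.example.com/robots.txt"))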


You can use reppy ( https://github.com/seomoz/reppy ) to parse robots.txt; it handles sitemaps.
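A rough sketch of how that might look; this is based on my reading of reppy's README, so treat Robots.fetch and the sitemaps attribute as assumptions and verify against the project's documentation:

from reppy.robots import Robots

# reppy downloads and parses robots.txt for you.
robots = Robots.fetch("http://www.example.com/robots.txt")
print(robots.sitemaps)  # assumed to list the Sitemap URLs, e.g. ['http://www.example.com/sitemap.xml']
print(robots.allowed("http://www.example.com/images/logo.png", "my-user-agent"))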

Keep in mind, though, that in some cases a sitemap exists at the default location (/sitemaps.xml) even though the site owners didn't mention it in robots.txt (for example on toucharcade.com).

I also found at least one site whose sitemaps are compressed; that is, robots.txt points to a .gz file.
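If you hit one of those, the standard library can decompress it in memory; a minimal sketch (the .gz URL is hypothetical):

import gzip
import StringIO
import urllib2

# Download the gzipped sitemap and decompress it without touching disk.
data = urllib2.urlopen("http://www.example.com/sitemap.xml.gz").read()
xml = gzip.GzipFile(fileobj=StringIO.StringIO(data)).read()
print(xml[:200])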
