How to make Beautiful Soup output HTML entities?

I'm trying to sanitize and XSS-proof some HTML input from the client. I'm using Python 2.6 with Beautiful Soup. I parse the input, strip all tags and attributes not in a whitelist, and transform the tree back into a string.

However...

>>> unicode(BeautifulSoup('text < text'))
u'text < text'

That doesn't look like valid HTML to me. And with my tag stripper, it opens the way to all sorts of nastiness:

>>> print BeautifulSoup('<<script></script>script>alert("xss")<<script></script>script>').prettify()
<
<script>
</script>
script>alert("xss")<
<script>
</script>
script>

The <script></script> pairs will be removed by my tag stripper, and what remains is not only an XSS attack but perfectly valid HTML as well.

The obvious solution is to replace with &lt; every < character that, after parsing, turns out not to belong to a tag (and similarly for > & ' "). But the Beautiful Soup documentation only mentions parsing entities, not producing them. Of course I can run a replace over all NavigableString nodes, but since I might miss something, I'd rather let some tried and tested code do the work.
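For reference, here is a rough sketch of that NavigableString pass, using the Beautiful Soup 3 API and cgi.escape from the standard library (exactly the kind of hand-rolled code I'd rather not rely on):

import cgi
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup('text < text')
for text in soup.findAll(text=True):
    # cgi.escape converts &, < and > (and " with quote=True) to entities.
    # Caveat: it also double-escapes any entities already present in the text.
    text.replaceWith(cgi.escape(unicode(text), quote=True))

print unicode(soup)   # text &lt; text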

Why doesn't Beautiful Soup escape < (and other magic characters) by default, and how do I make it do that?


N.B. I've also looked at lxml.html.clean. It seems to work on the basis of blacklisting, not whitelisting, so it doesn't seem very safe to me. Tags can be whitelisted, but attributes cannot, and it allows too many attributes for my taste (e.g. tabindex). Also, it gives an AssertionError on the input <SCRIPT SRC=http://ha.ckers.org/xss.js></SCRIPT>. Not good.

Suggestions for other ways to clean HTML are also very welcome. I'm hardly the only person in the world trying to do this, yet there seems to be no standard solution.


I know this is 3.5 years after your original question, but you can use the formatter='html' argument to prettify(), encode(), or decode() to produce well-formed HTML.
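A minimal sketch of what that looks like, assuming Beautiful Soup 4 (the formatter argument does not exist in the old Beautiful Soup 3 series the question uses):

from bs4 import BeautifulSoup   # Beautiful Soup 4

soup = BeautifulSoup('text < text', 'html.parser')

# The default formatter ("minimal") already escapes &, < and >;
# formatter="html" additionally substitutes named HTML entities where available.
print(soup.decode(formatter="html"))    # text &lt; text
print(soup.prettify(formatter="html"))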


The lxml.html.clean.Cleaner class does allow you to provide a tag whitelist with the allow_tags argument and to use the precomputed attribute whitelist from feedparser with the safe_attrs_only argument. And lxml definitely handles entities properly on serialization.
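A minimal sketch of that configuration (the tag whitelist here is just an example; note that lxml expects remove_unknown_tags=False when allow_tags is given):

from lxml.html.clean import Cleaner

cleaner = Cleaner(
    allow_tags=['p', 'a', 'b', 'i', 'em', 'strong', 'ul', 'ol', 'li'],
    remove_unknown_tags=False,   # must be disabled when allow_tags is used
    safe_attrs_only=True,        # keep only feedparser's whitelist of attributes
)

dirty = '<p onclick="alert(1)">hello <script>alert("xss")</script>world</p>'
print(cleaner.clean_html(dirty))
# The <script> element and the onclick attribute are stripped on output.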
