I'm trying to look at an HTML file and remove all the tags from it so that only the text is left, but I'm having a problem with my regex. This is what I have so far:
import urllib.request, re

def test(url):
    html = str(urllib.request.urlopen(url).read())
    print(re.findall('<[\w\/\.\w]*>', html))
The HTML is a simple page with a few links and text, but my regex won't pick up the !DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" declaration or the <a href="..."> tags. Can anyone explain what I need to change in my regex?
Use BeautifulSoup. Use lxml. Do not use regular expressions to parse HTML.
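For example, here is a minimal sketch using BeautifulSoup's get_text() (assuming the bs4 and requests packages are installed; yahoo.com is just a placeholder URL):

from bs4 import BeautifulSoup
import requests

html = requests.get("http://www.yahoo.com").text
soup = BeautifulSoup(html, "html.parser")

# Drop script and style elements so their contents don't end up in the text.
for tag in soup(["script", "style"]):
    tag.decompose()

print(soup.get_text(separator=" ", strip=True))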
Edit 2010-01-29: This would be a reasonable starting point for lxml:
from lxml.html import fromstring
from lxml.html.clean import Cleaner
import requests

url = "https://stackoverflow.com/questions/2165943/removing-html-tags-from-a-text-using-regular-expression-in-python"
html = requests.get(url).text
doc = fromstring(html)

# Tags to remove entirely (their text content is kept).
tags = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6',
        'div', 'span',
        'img', 'area', 'map']
args = {'meta': False, 'safe_attrs_only': False, 'page_structure': False,
        'scripts': True, 'style': True, 'links': True, 'remove_tags': tags}
cleaner = Cleaner(**args)

path = '/html/body'
body = doc.xpath(path)[0]

print(cleaner.clean_html(body).text_content().encode('ascii', 'ignore').decode('ascii'))
You want the content, so presumably you don't want any JavaScript or CSS. Also, you presumably want only the content of the body, not HTML from the head. Read up on lxml.html.clean to see what you can easily strip out. Way smarter than regular expressions, no?
Also, watch out for Unicode encoding problems: you can easily end up with HTML that you cannot print.
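For example, if the output encoding cannot represent every character, one option (as the snippet above does with 'ignore') is to drop or replace the offending characters before printing:

text = cleaner.clean_html(body).text_content()
# 'replace' substitutes any un-encodable characters instead of raising UnicodeEncodeError.
print(text.encode('ascii', 'replace').decode('ascii'))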
Edit 2012-11-08: changed from using urllib2 to requests. Just use requests!
import urllib.request, re

# Strip tags along with &nbsp; and &amp; entities.
patjunk = re.compile("<.*?>|&nbsp;|&amp;", re.DOTALL | re.M)
url = "http://www.yahoo.com"

def test(url, pat):
    html = urllib.request.urlopen(url).read().decode('utf-8', 'ignore')
    return pat.sub("", html)

print(test(url, patjunk))
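Note that stripping tags this way still leaves behind whatever was between <script> or <style> tags, which the lxml Cleaner above removes explicitly. A quick illustration with a made-up snippet:

snippet = '<p>Hello</p><script>var x = 1;</script>'
print(patjunk.sub("", snippet))  # prints "Hellovar x = 1;"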