
python url regexp


I have a regexp and I want to append its output to my URL.

For example:

import re

url = 'blabla.com'
# 'page' (the fetched HTML source) is not shown in the question
r = re.findall(r'<p>(.*?)</a>', page)

The output in r is /any_string/on/any/server/

But I don't know how to make a GET request with the regexp output appended to the URL:

blabla.com/any_string/on/any/server/
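
A minimal sketch of that missing step, assuming the base URL gets a scheme (http://) and using urljoin and urllib2 from the Python 2 standard library (the variable names are illustrative):

from urlparse import urljoin
import urllib2

base = 'http://blabla.com'            # base URL, scheme included
path = '/any_string/on/any/server/'   # path extracted by the regexp
full_url = urljoin(base, path)        # 'http://blabla.com/any_string/on/any/server/'
response = urllib2.urlopen(full_url)  # plain GET request
print response.read()[:200]           # first 200 bytes of the response body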


Don't use regexes to parse HTML. Use a real parser.

I suggest using lxml.html. lxml supports XPath, which is a very powerful way of querying structured documents, and there's a ready-to-use make_links_absolute() method that does exactly what you ask. It's also very fast.

As an example, the HTML source of this question's page (the one you're reading now) contains this part:

<li><a id="nav-tags" href="/tags">Tags</a></li>

The XPath expression //a[@id='nav-tags']/@href means: "get the href attribute of every <a> tag whose id attribute equals nav-tags". Let's use it:

from lxml import html

url = 'http://stackoverflow.com/questions/3423822/python-url-regexp'

doc = html.parse(url).getroot()   # fetch and parse the page
doc.make_links_absolute()         # turn every relative link into an absolute one
links = doc.xpath("//a[@id='nav-tags']/@href")
print links                       # Python 2 print statement

The result:

['http://stackoverflow.com/tags']
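
From there, the GET request the question asks about is just a matter of opening the extracted URL, which is already absolute thanks to make_links_absolute(); a short sketch with urllib2:

import urllib2

response = urllib2.urlopen(links[0])  # links[0] == 'http://stackoverflow.com/tags'
print response.read()[:200]           # first 200 bytes of the fetched page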


Just use Beautiful Soup:

http://www.crummy.com/software/BeautifulSoup/documentation.html#Parsing+a+Document

import urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen(url)   # url must include the scheme, e.g. 'http://blabla.com'
soup = BeautifulSoup(page)
soup.findAll('p')             # all <p> tags in the document
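
From there, a sketch of pulling the first nested link out of those <p> tags and appending it to the base URL, assuming the <p>...<a> structure implied by the question's regexp and using urlparse.urljoin (the base URL here is illustrative):

from urlparse import urljoin

for p in soup.findAll('p'):
    a = p.find('a')                    # the <a> nested inside the <p>
    if a is not None:
        href = a['href']               # e.g. '/any_string/on/any/server/'
        print urljoin('http://blabla.com', href)
        break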
