
Client Digest Authentication Python with URLLIB2 will not remember Authorization Header Information

https://www.devze.com 2022-12-11 13:26 (source: web)

I am trying to use Python to write a client that connects to a custom HTTP server that uses digest authentication. I can connect and pull the first request without problem. Using tcpdump (I am on Mac OS X -- I am both a Mac and a Python noob) I can see the first request is actually two HTTP requests, as you would expect if you are familiar with RFC 2617. The first results in a 401 UNAUTHORIZED. The header information sent back from the server is correctly used to generate a second request with some custom Authorization header values, which yields a 200 OK response and the payload.

Everything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2.
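For anyone following along, the working setup is just the standard opener recipe (shown here with Python 3's urllib.request, where urllib2's class names carry over unchanged; the URL and credentials are placeholders):

```python
import urllib.request  # under Python 2: import urllib2

# Placeholder URL and credentials for illustration
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'http://example.com/', 'user', 'secret')

# The handler answers 401 challenges; the opener wires it into requests.
handler = urllib.request.HTTPDigestAuthHandler(password_mgr)
opener = urllib.request.build_opener(handler)
# page1 = opener.open('http://example.com/page1').read()  # 401, then 200
```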

In the same program I attempt to request a second, different page, from the same server. I expect, per the RFC, that the TCPDUMP will show only one request this time, using almost all the same Authorization Header information (nc should increment).

Instead it starts from scratch and first gets the 401 and regenerates the information needed for a 200.

Is it possible with urllib2 to have subsequent requests with digest authentication recycle the known Authorization Header values and only do one request?

[Re-read that a couple times until it makes sense, I am not sure how to make it any more plain]

Google has yielded surprisingly little, so I guess not. I looked at the code for urllib2.py and it's really messy (comments like: "This isn't a fabulous effort"), so I wouldn't be shocked if this were a bug. I noticed that my Connection header is set to close, and even if I set it to keep-alive, it gets overwritten. That led me to keepalive.py, but that didn't work for me either.

Pycurl won't work either.

I can hand code the entire interaction, but I would like to piggy back on existing libraries where possible.

In summary: is it possible, with urllib2 and digest authentication, to get 2 pages from the same server with only 3 HTTP requests executed (2 for the first page, 1 for the second)?

If you happen to have tried this before and already know it's not possible, please let me know. If you have an alternative, I am all ears.

Thanks in advance.


Although it's not available out of the box, urllib2 is flexible enough to add it yourself. Subclass HTTPDigestAuthHandler, hack it (in the retry_http_digest_auth method, I think) to remember the authentication information, and define an http_request(self, request) method that applies it to all subsequent requests (adding the Authorization header preemptively).
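A rough sketch of that idea, untested against a real server (the class name and the _challenges attribute are my own invention; written against Python 3's urllib.request, where urllib2's classes live under the same names):

```python
import urllib.request  # under Python 2 this module is urllib2


class PreemptiveDigestAuthHandler(urllib.request.HTTPDigestAuthHandler):
    """Remembers the digest challenge from the first 401 and preemptively
    builds an Authorization header for later requests to the same host,
    skipping the extra 401 round trip."""

    def __init__(self, *args, **kwargs):
        urllib.request.HTTPDigestAuthHandler.__init__(self, *args, **kwargs)
        self._challenges = {}  # host -> parsed challenge dict

    def retry_http_digest_auth(self, req, auth):
        # auth looks like: 'Digest realm="...", nonce="...", qop="auth"'.
        # Cache the parsed challenge before the normal retry happens.
        token, challenge = auth.split(' ', 1)
        self._challenges[req.host] = urllib.request.parse_keqv_list(
            urllib.request.parse_http_list(challenge))
        return urllib.request.HTTPDigestAuthHandler.retry_http_digest_auth(
            self, req, auth)

    def http_request(self, req):
        # Pre-processor hook: the opener calls this for every outgoing
        # request, so later requests get the header up front.
        chal = self._challenges.get(req.host)
        if chal and not req.has_header('Authorization'):
            # get_authorization recomputes the response digest and
            # increments the nc counter, as RFC 2617 requires.
            auth = self.get_authorization(req, chal)
            if auth:
                req.add_unredirected_header('Authorization',
                                            'Digest %s' % auth)
        return req

    https_request = http_request
```

Install it with build_opener in place of the stock handler. The first page still costs two requests (the challenge has to come from somewhere), but later pages should go out with the Authorization header already attached; if the server expires the nonce, it will answer 401 again and the normal retry path refreshes the cached challenge.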

