Emulate javascript __doPostBack in Python, web scraping

Here LINK it is suggested that it is possible to "Figure out what the JavaScript is doing and emulate it in your Python code." That is what I would like help doing, i.e., my question: how do I emulate javascript:__doPostBack?

Code from a website (full page source here LINK):

<a style="color: Black;" href="javascript:__doPostBack('ctl00$ContentPlaceHolder1$gvSearchResults','Page$2')">2</a>

Of course I have basically no idea where to go from here.

Thanks in advance for your help and ideas.

OK, there are lots of posts asking how to CLICK a JavaScript button when web scraping with Python libraries like mechanize, BeautifulSoup, and similar. I see a lot of "that is not supported; use THIS non-Python solution" responses. I think a Python solution to this problem would be of great benefit to many. In that light, I am not looking for answers like "use x, y, or z" that are not Python code or that require interacting with a browser.


The mechanize page is not suggesting that you can emulate JavaScript in Python. It is saying that you can change a hidden field in a form, thus tricking the web server into believing that a human¹ has selected the field. You still need to analyse the target yourself.
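To illustrate that hidden-field point with the link from the question: in ASP.NET WebForms, __doPostBack simply copies its two arguments into the hidden __EVENTTARGET and __EVENTARGUMENT fields and submits the form, so once you have pulled the other hidden fields (__VIEWSTATE, __EVENTVALIDATION) out of the page you can often reproduce the POST by hand. A minimal sketch, using requests and BeautifulSoup (neither mentioned above; the URL is a placeholder):

    import requests
    from bs4 import BeautifulSoup

    url = "http://example.com/results.aspx"  # placeholder for the real page
    session = requests.Session()

    # First GET collects the hidden state fields that ASP.NET insists on.
    soup = BeautifulSoup(session.get(url).text, "html.parser")

    def hidden(name):
        tag = soup.find("input", {"name": name})
        return tag["value"] if tag else ""

    # Fill __EVENTTARGET/__EVENTARGUMENT with the arguments the
    # javascript:__doPostBack(...) link would have used, then POST.
    data = {
        "__EVENTTARGET": "ctl00$ContentPlaceHolder1$gvSearchResults",
        "__EVENTARGUMENT": "Page$2",
        "__VIEWSTATE": hidden("__VIEWSTATE"),
        "__EVENTVALIDATION": hidden("__EVENTVALIDATION"),
    }
    page2 = session.post(url, data=data)
    print(page2.text)  # should be page 2 of the grid

Those two fields are all that __doPostBack sets client-side; everything else is ordinary form submission. But that only works because this particular script is trivial, which brings us to the general problem.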

There will be no Python-based solution to this problem, unless you wish to create a JavaScript interpreter in Python.

My thoughts on this problem have led me to three possible solutions:

  1. create an XULRunner application
  2. browser automation
  3. attempt to interpret the client-side code

Of those three, I've only really seen discussion of 2. I've seen something close to 1 in a commercial scraping application, where you basically create scripts by browsing sites and selecting the things on the pages that you would like the script to extract in the future.

1 could possibly be made to work with a Python script by accepting a serialisation (JSON?) of WSGI Request objects, getting the app to fetch the URL, and then sending the processed page back as a WSGI Response object. You could possibly wrap some middleware around urllib2 to achieve this. Overkill, probably, but kind of fun to think about.
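If you squint, the Python side of that idea is just an HTTP client for a rendering service. A sketch, with everything invented for illustration (the service, its port, and its JSON schema do not exist anywhere; urllib.request stands in for urllib2):

    import json
    from urllib.request import Request, urlopen

    def fetch_rendered(url, headers=None):
        """Ask a hypothetical XULRunner-based service on localhost to load
        `url`, execute its JavaScript, and return the final HTML."""
        payload = json.dumps({"url": url, "headers": headers or {}}).encode()
        req = Request("http://localhost:8765/fetch", data=payload,
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            return json.load(resp)["body"]  # the processed page source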

2 is usually achieved via Selenium RC (Remote Control), a testing-centric tool. It provides a few methods like getHtmlSource, but most people I've heard from who use it don't like its API.
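For completeness, a sketch of what 2 looks like with the Selenium RC Python client; it assumes the RC server jar is already running on localhost:4444, and the site URL and link locator are placeholders:

    from selenium import selenium  # the old RC client, not WebDriver

    sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
    sel.start()
    sel.open("/results.aspx")
    sel.click("link=2")           # fires the javascript:__doPostBack link
    sel.wait_for_page_to_load("30000")
    html = sel.get_html_source()  # the getHtmlSource mentioned above
    sel.stop()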

3 I have no idea about. node.js is very hot right now, but I haven't touched it. I've never been able to build SpiderMonkey on my Ubuntu machine, so I haven't touched that either. My hunch is that in order to do this, you would provide the HTML source and your details to a JS interpreter, which would need to fake being your User-Agent, etc., in case the JavaScript wanted to reconnect with the server.
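As a rough illustration of what 3 even means, here is a simplified version of the __doPostBack function run inside an embedded JS engine from Python. py_mini_racer is a stand-in interpreter chosen for the sketch (not mentioned above; any JS engine binding would do), and the fake theForm object is the bare minimum the function touches. Faking the rest of the browser environment is the actual hard part and is not shown:

    from py_mini_racer import MiniRacer

    ctx = MiniRacer()
    ctx.eval("""
        var postedBack = null;
        // Bare-minimum fake of the form object __doPostBack expects.
        var theForm = {
            __EVENTTARGET:   { value: "" },
            __EVENTARGUMENT: { value: "" },
            submit: function () {
                postedBack = [this.__EVENTTARGET.value,
                              this.__EVENTARGUMENT.value];
            }
        };
        // Simplified from the stock function ASP.NET emits.
        function __doPostBack(eventTarget, eventArgument) {
            theForm.__EVENTTARGET.value = eventTarget;
            theForm.__EVENTARGUMENT.value = eventArgument;
            theForm.submit();
        }
        __doPostBack('ctl00$ContentPlaceHolder1$gvSearchResults', 'Page$2');
    """)
    print(ctx.eval("postedBack"))  # what the POST would have carried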

¹ Well, more technically, a JavaScript-compliant user agent, which is almost always a web browser used by a human.

