jQuery: Traversing AJAX response in Chrome/Safari

https://www.devze.com 2023-01-03 11:19 Source: web

I'm trying to traverse an AJAX response, which contains a remote web page (an HTML output).

My goal is to iterate through the 'script', 'link', and 'title' elements of the remote page, load them if necessary, and embed their contents in the current page.

It's working great in FF/IE, but for some reason Chrome & Safari behave differently: when I run a .each() loop on the response, Chrome/Safari seem to omit everything under the <head> section of the page.

Here's my current code:

$.ajax({
    url: 'remoteFile.php',
    cache: false,
    dataFilter: function(data) { 
        console.log(data); 
        /* The output seems to contain the entire response, including the <head> section - on all browsers, including Chrome/Safari */

        $(data).filter("link, script, title").each(function(i) {
            console.log($(this)); 
            /* IE/FF outputs all of the link/script/title elements, Chrome will output only those that are not in the <head> section */
        });

        console.log($(data)); 
        /* This also outputs the incomplete structure on Chrome/Safari */

        return data;
    },
    success: function(response) {}
});

I've been struggling with this problem for quite a while now; I've found some similar cases in Google searches, but no real solution. This happens on both jQuery 1.4.2 and jQuery 1.3.2.

I really don't want to parse the response with .indexOf() and .substring(); it seems to me that would be overkill for the client.

Many thanks in advance!


I think this has to do with how jQuery processes HTML strings and creates DOM nodes from them. Among other things, jQuery will create a temporary <div> and set its innerHTML to whatever HTML you pass to $(...), thus producing a bunch of DOM nodes which can be extracted from the <div> and handed back to you as a jQuery collection.

The problem, I believe, occurs because of the <html>, <head> and <body> elements, none of which respond well to being appended to a <div> element. Browsers handle this differently: some appear to ignore these top-level elements and just hand you back their contents; others ignore the elements entirely and won't even give you their descendants.

It seems the way to avoid this cross-browser issue is simply to replace the troublesome elements with some other fake elements before parsing, e.g.:

$(
    // replace <html> with <foohtml> .. etc.
    data.replace(/<(\/?)(head|html|body)(?=\s|>)/g, '<$1foo$2')
).filter("link, script, title").each(function(i) {
    console.log($(this));
    // do your stuff
});

Also, I don't think filter alone will be sufficient, since it won't target descendant elements. This may be a better approach:

$(
    // replace <html> with <foohtml> .. etc.
    data.replace(/<(\/?)(head|html|body)(?=\s|>)/g, '<$1foo$2')
).find('link,script,title').andSelf().filter("link,script,title").each(function(i) {
    console.log($(this));
    // do your stuff
});
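
The renaming regex can be sanity-checked on its own, without jQuery or a DOM, since it's plain string work. A minimal sketch (the sample markup is made up); note the replacement pattern '<$1foo$2', which keeps the slash of closing tags in front of the "foo" prefix so that </head> becomes </foohead> rather than an invalid tag:

```javascript
// Made-up sample of a remote page's markup.
var html =
    '<html><head><title>Remote</title></head>' +
    '<body><p>hi</p></body></html>';

// Rename <html>/<head>/<body> (and their closing tags) so that a later
// innerHTML-based parse won't strip them. The lookahead (?=\s|>) stops
// tags like <header> from matching by accident.
var renamed = html.replace(/<(\/?)(head|html|body)(?=\s|>)/g, '<$1foo$2');

console.log(renamed);
// <foohtml><foohead><title>Remote</title></foohead><foobody><p>hi</p></foobody></foohtml>
```

Running $(renamed) on the result then yields ordinary <foohead>/<foobody> elements whose children (link, script, title) survive parsing in every browser.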
