Ajax Crawling: old way vs new way (#!)

Old way

When I used to load pages asynchronously in projects that required the content to be indexed by search engines, I used a really simple technique:

<a href="page.html" id="example">Page</a>
<script type="text/javascript">
    $('#example').click(function(e){
        // Stop the browser from following the link; load the content via Ajax instead.
        // Search engines still see and follow the plain href above.
        e.preventDefault();
        $.ajax({
            url: 'ajax/page.html',
            success: function(data){
                $('#content').html(data);
            }
        });
    });
</script>

Edit: I used to implement the hashchange event to support bookmarking for JavaScript users.
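For reference, the hashchange approach looked roughly like this (a minimal sketch; the element IDs and paths are placeholders, and it assumes jQuery 1.7+ for .on()):

<script type="text/javascript">
    // Clicking a link only updates the hash; the hashchange handler loads
    // the content, so the back button and bookmarks keep working.
    $('#example').click(function(e){
        e.preventDefault();
        window.location.hash = 'page';
    });

    $(window).on('hashchange', function(){
        var page = window.location.hash.replace(/^#/, '');
        if (page) {
            $.ajax({
                url: 'ajax/' + page + '.html',
                success: function(data){
                    $('#content').html(data);
                }
            });
        }
    });

    // Fire once on load so a bookmarked URL renders its content.
    $(window).trigger('hashchange');
</script>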

New way

Recently Google came up with the idea of Ajax crawling; read about it here:

http://code.google.com/web/ajaxcrawling/

http://www.asual.com/jquery/address/samples/crawling/

Basically, they suggest changing "website.com/#page" to "website.com/#!page" and serving a page that contains the fragment's content at a URL like "website.com/?_escaped_fragment_=page".
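For illustration, the rewrite the crawler performs is roughly this (the helper function is just illustrative, not part of the scheme):

// Illustration only: roughly how the crawler rewrites a #! URL
// before requesting it from the server.
function toEscapedFragment(url) {
    var parts = url.split('#!');
    if (parts.length < 2) return url; // no hash-bang, nothing to rewrite
    var base = parts[0];
    var fragment = encodeURIComponent(parts[1]);
    var separator = base.indexOf('?') === -1 ? '?' : '&';
    return base + separator + '_escaped_fragment_=' + fragment;
}

// toEscapedFragment('http://website.com/#!page')
// -> 'http://website.com/?_escaped_fragment_=page'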

What's the benefit of using the new way?

To me it seems that the new way adds a lot more work and complexity to something I previously did in a simple way: I designed the website to work without Ajax, and then added Ajax and the hashchange event (to support the back button and bookmarking) at a final stage.

From an SEO perspective, what are the benefits of using the new way?


The idea is to make AJAX applications crawlable. According to the URI specification, a URL refers to the same document regardless of the fragment identifier (the part after the hash mark), and the fragment is never sent to the server. Therefore search engines ignore the fragment identifier: if you have a link to www.example.com/page#content, the crawler will simply request www.example.com/page.

With the new scheme, when you use the #! notation the crawler knows that the link refers to additional content. The crawler transforms the URL into another (ugly) URL and requests it from your web server. The web server is supposed to respond with static HTML representing the AJAX content.
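For example, on the server it might look something like this (a sketch assuming Node with Express; renderSnapshot() is a made-up helper that returns plain HTML for the given fragment):

// Sketch only: serve a static HTML snapshot when the crawler asks for
// the escaped-fragment version of an AJAX URL.
var express = require('express');
var app = express();

app.get('/', function(req, res){
    var fragment = req.query._escaped_fragment_;
    if (fragment !== undefined) {
        // e.g. /?_escaped_fragment_=page  ->  plain HTML for "page"
        res.send(renderSnapshot(fragment));   // hypothetical helper
    } else {
        // normal visitors get the JavaScript application
        res.sendFile(__dirname + '/index.html');
    }
});

app.listen(3000);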

Edit: Regarding the original question: if you already had regular links to static pages, then this scheme doesn't help you.


The advantage is not really applicable for you, because you are using progressive enhancement. The new Google feature is for applications written entirely in Javascript, which therefore can't be read by the crawler. I don't think you need to do anything here.


The idea behind it is that JavaScript users can bookmark pages too, I think. If you take a look at your 'old' method, it just replaces content on the page; there is no way to copy the URL to show the page in its current state to other people.

So, if you've implemented the new #! method, you have to make sure that these URLs point to the correct pages, through JavaScript.
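For example (a rough sketch; the selectors and paths are placeholders):

// Sketch: on load, turn a #! deep link back into the right content.
$(function(){
    var hash = window.location.hash;
    if (hash.indexOf('#!') === 0) {
        var page = hash.substring(2);        // "#!page" -> "page"
        $('#content').load('ajax/' + page + '.html');
    }
});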


I think it's just easier for Google to be sure that you're not working with duplicate content. I'm including the hash like foo/#/bar.html in the URLs and passing it to the permalink structure, but I'm not quite sure whether Google likes that or not.

Interesting question, though. +1
