Can anyone give me a good reason not to use the hijax (progressive enhancement) method in addition to the hashbang method Google proposes? As far as I can see, the hijax method is still the better one:
- It works in browsers without JavaScript
- All search engines can index it
The only counter-argument I have found so far: when a user clicks a link in a search engine and has JavaScript enabled, you need to redirect them to the JavaScript-enabled version (the URL with the # fragment).
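For that redirect, I imagine something like this on the server-rendered page (a minimal sketch; the page query parameter and URL scheme are hypothetical):

    <script>
    // On the plain page (e.g. /stuff?page=fluff): if JavaScript is
    // available, send the visitor to the hash-based version instead.
    var page = new URLSearchParams(location.search).get('page');
    if (page) {
      location.replace('/stuff#' + page);  // e.g. /stuff#fluff
    }
    </script>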
With Google's hashbang version it is difficult to supply a no-JavaScript fallback, and Bing and Yahoo cannot crawl your website.
Kind regards,
Daan
The "value allocation" answer isn't quite correct.
The question is about surfacing content for search engines. Hashbang is Google's answer to that. That said, a user (or another search engine or social-network scraper that doesn't support hashbang) without JavaScript enabled will never see your content. Google can see it because they're the ones checking for the hashbang.
Hijax, on the other hand, always allows non-JS users and bots to see your content, because it does not rely on the hash/hashbang; it relies on standard query-string parameters. This means your application must have back-end logic to render your content for non-JS user agents. In the end, with hijax, JS-enabled users get the asynchronous experience and non-JS users get full page loads.
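A minimal sketch of that pattern, assuming a hypothetical /stuff?page=fluff endpoint that the back-end can render on its own:

    <!-- A normal link: crawlers and non-JS users get a full page load -->
    <a href="/stuff?page=fluff" data-hijax>Fluff</a>
    <div id="content"></div>

    <script>
    // With JavaScript, hijack the click and fetch the content asynchronously.
    document.addEventListener('click', function (e) {
      var link = e.target.closest('a[data-hijax]');
      if (!link) return;
      e.preventDefault();
      fetch(link.href)
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.getElementById('content').innerHTML = html;
          location.hash = 'fluff';  // bookmarkable JS-state URL: /stuff#fluff
        });
    });
    </script>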
Google continues to recommend Hijax. Hashbang is their offering for non-hijax apps already out there in the wild, and/or JS apps that don't have a back-end.
http://googlewebmastercentral.blogspot.com/2007/11/spiders-view-of-web-20.html (see progressive enhancement section)
I think this is no longer an issue, since Bing (and therefore Yahoo as well) has started crawling AJAX pages using Google's hashbang proposal!
Lens about AJAX crawling in Bing
The reason is value allocation
Hijax
OK, let's say a user links to
http://www.example.com/stuff#fluff
The link actually counts as a link to http://www.example.com/stuff#fluff, but as http://www.example.com/stuff#fluff and http://www.example.com/stuff are the same HTML content, Google will canonicalize (consolidate) the value allocation to http://www.example.com/stuff.
The URL www.example.com/stuff/fluff that you expose to non-JavaScript clients (Googlebot) does not come up in this whole process.
Bottom line: a link to http://www.example.com/stuff#fluff is seen by Google as a vote for http://www.example.com/stuff.
Hashbang
A user links to
http://www.example.com/stuff#!fluff
Googlebot interprets it as
www.example.com/stuff?_escaped_fragment_=fluff
And as it serves different content (i.e., content different from www.example.com/stuff), Google will not canonicalize (consolidate) it with any other URL. Google will display http://www.example.com/stuff#!fluff to its users.
Bottom line: a link to http://www.example.com/stuff#!fluff is seen by Google as a vote for www.example.com/stuff?_escaped_fragment_=fluff (but displayed to its users as http://www.example.com/stuff#!fluff).
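For completeness, the server side of the hashbang scheme has to map _escaped_fragment_ back to real content. A minimal sketch, assuming a Node.js/Express-style app (renderSnapshot and renderAppShell are hypothetical helpers):

    // Googlebot rewrites /stuff#!fluff to /stuff?_escaped_fragment_=fluff
    app.get('/stuff', function (req, res) {
      var fragment = req.query._escaped_fragment_;
      if (fragment !== undefined) {
        // Serve a static HTML snapshot of the AJAX state for the crawler.
        res.send(renderSnapshot(fragment));
      } else {
        // Serve the normal JS-driven page to everyone else.
        res.send(renderAppShell());
      }
    });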
Use dual links (AJAX and normal links); they are compatible with Bing, Yahoo, and others.
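A dual link pairs a crawlable href with a JavaScript handler; a minimal sketch (loadFluff and the URL are hypothetical):

    <!-- href serves crawlers and non-JS users; onclick serves AJAX users -->
    <a href="/stuff?page=fluff" onclick="loadFluff(); return false;">Fluff</a>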
Take a look at Single Page Interface and Search Engine Optimization.
Have a look at this example http://www.amitpatil.me/create-gmail-like-app-using-html5-history-api-and-hashbang/
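The HTML5 History API used in that example lets JavaScript change the real URL without a hashbang; a minimal sketch (the element ID and the content loading are hypothetical):

    <script>
    document.getElementById('fluff-link').addEventListener('click', function (e) {
      e.preventDefault();
      // Update the address bar to a real, crawlable URL without a reload.
      history.pushState({ page: 'fluff' }, '', '/stuff/fluff');
      // ...then load the fluff content asynchronously...
    });

    // Re-render on back/forward navigation.
    window.addEventListener('popstate', function (e) {
      // e.state holds the object passed to pushState, e.g. { page: 'fluff' }
    });
    </script>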