I've built a website that uses jQuery to make nice transitions between content.
The code works this way: there are two imgs (body and footer).
When I click on a link, instead of going to another page, I fade out the two imgs and change the src attribute of both. When the new imgs are loaded I fade them back in.
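Roughly, the transition logic looks like this (a simplified sketch; the element IDs and data-* attributes are placeholders, not my actual markup):

```js
// Simplified sketch of the fade-out / swap-src / fade-in technique.
// Element IDs and data-* attribute names are placeholders.
$('a.internal').click(function (e) {
    e.preventDefault();
    var link = $(this);
    var imgs = $('#body-img, #footer-img');
    imgs.fadeOut('fast').promise().done(function () {
        var loaded = 0;
        imgs.each(function (i) {
            var img = $(this);
            var which = (i === 0) ? 'body' : 'footer';
            // Bind the load handler before swapping src, then fade back in
            // once both new images have finished loading.
            img.one('load', function () {
                if (++loaded === imgs.length) imgs.fadeIn('fast');
            });
            img.attr({ src: link.data(which), alt: link.data(which + '-alt') });
        });
    });
});
```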
I'm using SWFAddress to allow users to go directly to internal content.
Now I'd like my content to be indexed by Google and other search engines. All the text content is inside the imgs, so I've put the text in the ALT attributes.
My question is:
If I dynamically change the imgs' ALT attributes using JS, will spiders be able to read them properly?
Consider that I'm using SWFAddress to create a sitemap.
Search engine robots generally do not process JavaScript. So no.
You're doing it wrong.
If you want a JS-heavy website to work well for both bots and humans without JS enabled (think of blind people using screen readers, for instance), you need to build the site with its content in text form and fully usable without any JavaScript.
Then you use a high-level JavaScript framework like jQuery to replace the content, change the navigation, handle form submission, etc. as you want once the page has loaded (you know, the well-known $(document).ready(function(){/*...*/});); see the sketch below.
This way you get the best of both worlds: "cool" animations and good accessibility (which also means good SEO).
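A minimal sketch of that approach, assuming hypothetical selectors and markup: the real text and real hrefs live in the HTML, so bots and non-JS visitors get everything, and jQuery layers the animated behavior on top.

```js
// Progressive enhancement sketch (selectors and URLs are hypothetical):
// the page ships real text content and real hrefs; jQuery hijacks the
// links only when it actually runs.
$(document).ready(function () {
    $('a.internal').click(function (e) {
        e.preventDefault(); // non-JS visitors simply follow the real href
        var href = $(this).attr('href');
        $('#content').fadeOut('fast', function () {
            // Pull the same server-rendered fragment the plain link returns,
            // then fade it back in once it has been injected.
            $(this).load(href + ' #content > *', function () {
                $(this).fadeIn('fast');
            });
        });
    });
});
```

If JavaScript is disabled, the click handler never binds and the links work as ordinary page loads, so bots and screen readers see the same text content.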
I'm not familiar with SWFAddress, so my advice could be off. But Googlebot will crawl and index some JavaScript-generated content; the same can't necessarily be said of Bing/Yahoo.
Google understands that sites are evolving, that things like Flash and heavily AJAX-driven sites are popular, and that to achieve their goal of "Organizing the World's Information" they need to get at that content.
You can find information about Google's ability to crawl and index Flash here: http://googlewebmastercentral.blogspot.com/2008/06/improved-flash-indexing.html
And more recently they talked about how they are crawling and indexing AJAX / XHR content when they're reasonably sure of the content: http://googlewebmastercentral.blogspot.com/2011/11/get-post-and-safely-surfacing-more-of.html
If you look at GitHub, it has a very slick AJAX experience, but as you navigate through the folders of a repo it makes POST requests to fetch the additional content via XHR. With the new crawling abilities, Google should be able to index GitHub's content more easily without GitHub having to fall back to a non-HTML5, non-pushState experience.
But I would echo the other responses: you really should strive to make your site accessible to disabled users, which means more than just screen-reader users. It sounds like you're doing that already, so kudos to you.
Bottom line: the AJAX content you're creating has a good chance of getting properly indexed; however, you might want to implement it in the way Google has said it knows how to crawl.
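Concretely, the scheme Google documented at the time is the "#!" (hashbang) AJAX crawling convention: you expose deep links like http://example.com/#!/gallery, Googlebot rewrites them to http://example.com/?_escaped_fragment_=/gallery, and your server answers that rewritten URL with a static HTML snapshot of the same content (which, incidentally, is also where your image text can live as real text). A minimal client-side sketch, where loadSection is a hypothetical helper that fetches and fades in the matching content:

```js
// On page load, honor a '#!' deep link by restoring the matching section.
// loadSection() is hypothetical; the '#!' prefix is what Googlebot rewrites
// into an _escaped_fragment_ request.
$(document).ready(function () {
    if (window.location.hash.indexOf('#!') === 0) {
        var state = window.location.hash.substring(2); // e.g. "/gallery"
        loadSection(state);
    }
});
```

The server-side half is simply returning plain HTML for any request that carries the _escaped_fragment_ parameter.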