I've run into a problem with Chrome 5.0.375.70, while FF 3.6.3 and Opera 10.53 are fine. Below is the line of code:
document.getElementById('content').innerHTML = data.documentElement.innerHTML;
The data object in this code is a document (typeof(data) == 'object') that I received via an AJAX request for chapter01.xhtml:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html [
<!ENTITY D "—">
<!ENTITY o "‘">
<!ENTITY c "’">
<!ENTITY O "“">
<!ENTITY C "”">
]>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Alice's Adventures in Wonderland by Lewis Carroll. Chapter I: Down the Rabbit-Hole</title>
<link rel="stylesheet" type="text/css" href="style.css"/>
<link rel="stylesheet" type="application/vnd.adobe-page-template+xml" href="page-template.xpgt"/开发者_开发技巧>
</head>
<body>
<div class="title_box">
<h2 class="chapnum">Chapter I</h2>
<h2 class="chaptitle">Down the Rabbit-Hole</h2>
<hr/>
</div>
Chrome cuts off everything before <body>, so the stylesheet link in the head is lost and the user doesn't see the formatted text or the images.
How can I fix or work around this?
P.S. I'm trying to put chapter01.xhtml into a div inside a page with <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
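For reference, the request boils down to something like this (a simplified sketch using XMLHttpRequest and responseXML, which is how data ends up being a document; the real code may differ in detail):

// Simplified sketch of the AJAX request; data is the parsed XML document.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'chapter01.xhtml', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var data = xhr.responseXML; // a Document, so typeof(data) == 'object'
    document.getElementById('content').innerHTML = data.documentElement.innerHTML;
  }
};
xhr.send(null);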
.innerHTML
is non-standard, browser-implementation dependent, and generally bad practice, so it shouldn't be relied on; with XHTML in particular it isn't supported.
I had a similar situation where I wanted to filter DOM elements, and the best approach I found was a mix of DOMParser and XMLSerializer, described here:
http://www.hiteshagrawal.com/javascript/convert-xml-document-to-string-in-javascript
This is a reasonable cross-browser method.
Edit: To elaborate a bit more, you would load the HTML string into DOMParser, work with the resulting document as you normally would, and then use XMLSerializer (or the IE equivalent) to produce the new markup.
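Roughly, the idea looks like this (a minimal sketch; I'm assuming the XHTML is available as a string such as xhr.responseText and that the host page has a #content element):

// Parse the XHTML string into a separate XML document.
var parser = new DOMParser();
var doc = parser.parseFromString(xhtmlString, 'application/xml');

// Work with the parsed document as usual, e.g. grab the chapter's <body>.
var body = doc.getElementsByTagName('body')[0];

// Serialize the part you need back into markup...
var markup = new XMLSerializer().serializeToString(body);

// ...or import the nodes into the host page instead of using innerHTML.
var target = document.getElementById('content');
for (var i = 0; i < body.childNodes.length; i++) {
  target.appendChild(document.importNode(body.childNodes[i], true));
}

Importing the nodes directly sidesteps the innerHTML problem entirely, since the browser never has to re-parse serialized markup.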