Compiling huge schema into Java

There are two major tools that provide a way to compile an XSD schema into Java: xmlbeans and JAXB.

The problem is that the XSD schema is really huge: 30MB of XML files. Most of the schema isn't used in my project, so I could comment out most of it, but that's not a good solution.

Currently my project uses xmlbeans, which compiles the schema only after major changes to it. It produces ~60MB of classes and takes ~30 minutes to compile.

Another solution is to use JAXB, which generates ~14MB of code without any need to edit the schema. But it produces a huge ObjectFactory class, which fails to compile with a "too many constants" error. I could throw that class away and compile the schema without it, but as I understand it, ObjectFactory is a very useful class.
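
For reference, xjc generates one ObjectFactory per Java package, so one way to shrink it is to map the individual schema documents to separate packages with an external bindings file (passed to xjc via -b bindings.xjb). A minimal sketch; the schema file names and package names are illustrative:

<jxb:bindings version="2.1"
    xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- One package per schema document, hence one smaller ObjectFactory per package. -->
  <jxb:bindings schemaLocation="core.xsd" node="/xs:schema">
    <jxb:schemaBindings>
      <jxb:package name="com.example.core"/>
    </jxb:schemaBindings>
  </jxb:bindings>
  <jxb:bindings schemaLocation="extensions.xsd" node="/xs:schema">
    <jxb:schemaBindings>
      <jxb:package name="com.example.extensions"/>
    </jxb:schemaBindings>
  </jxb:bindings>
</jxb:bindings>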

Any ideas how to handle this huge schema?


Could you create a script to extract the portion(s) of the schema you need and integrate that into your build process prior to mapping with XmlBeans or JAXB?

You could probably script this extraction fairly simply in Python, Perl, Awk, etc., or even in XSL if you have expertise there (I've never spent enough contiguous time coding XSL to get proficient, so I'd probably stick to a scripting language, but that's just me).

e.g.:

python extract.py big-schema.xsd >small-schema.xsd
xsd2java <args> small-schema.xsd
...
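
As a rough sketch of what such an extract.py could look like, assuming a single-file, single-namespace schema and a known whitelist of needed root elements (the element names below are hypothetical):

# extract.py -- prune an XSD down to the components reachable from a
# whitelist of root elements. A sketch only: assumes one schema file,
# one namespace, and ignores xs:import / xs:include.
import sys
import xml.etree.ElementTree as ET

WANTED_ROOTS = {"Invoice", "Customer"}  # hypothetical: the elements your project actually uses

tree = ET.parse(sys.argv[1])
schema = tree.getroot()

# Index every named top-level component (elements, complex/simple types, groups).
by_name = {child.get("name"): child for child in schema if child.get("name")}

def referenced_names(node):
    # Collect the names this component refers to via type=, ref=, or base=.
    names = set()
    for el in node.iter():
        for attr in ("type", "ref", "base"):
            value = el.get(attr)
            if value:
                names.add(value.split(":")[-1])  # drop any namespace prefix
    return names

# Walk the reference graph outward from the whitelisted roots.
keep, todo = set(), list(WANTED_ROOTS)
while todo:
    name = todo.pop()
    if name in keep or name not in by_name:
        continue  # already seen, or a built-in type like xs:string
    keep.add(name)
    todo.extend(referenced_names(by_name[name]))

# Drop every top-level component that was never reached.
for child in list(schema):
    if child.get("name") and child.get("name") not in keep:
        schema.remove(child)

tree.write(sys.stdout.buffer, xml_declaration=True, encoding="utf-8")

Treating the schema as a plain reference graph keeps the script independent of any particular XSD library, at the cost of ignoring imports and substitution groups.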

You might find that a subsequent update by the 3rd-party vendor would invalidate your extraction script, but unless they're making very large changes to the overall schema, you should be able to update the script fairly quickly, and it sounds like those updates should be fairly infrequent.

Incidentally, I'm a little partial to XmlBeans; when we did our own evaluation of XML-Java mapping tools, it seemed to handle constructs like xs:choice, xs:all, and type-substitution better than anything else we tried. But that was several years ago, and could certainly have changed by now. At this point, we're continuing to use it more out of institutional inertia than anything else, so take that recommendation with a dash of salt.


30MB of schema? What on earth is this? I'd be interested to know if it's available as a test case for schema processors.

Data mapping (a la JAXB) works best with small schemas. I've seen people really struggle when the schema gets as large as about 200 element types. You must be dealing with something a couple of orders of magnitude larger here, so I would say it's a non-starter.
