Optimizing Data Translation

Our business deals with houses, and over the years we have created several business objects to represent them. We also receive lots of data from outside sources and send data to external consumers. Every one of these represents a house in a different way, and we spend a lot of time and energy translating one format into another. I'm looking for some general patterns or best practices for dealing with this situation. How can I write a universal data translator that is flexible, extensible, and fast?

Background: A house generally has 30-40 attributes such as size, number of bedrooms, roof type, construction material, siding material, etc. These are typically represented as key/value pairs. A typical translation problem is that one vendor will represent the number of bedrooms as a single key/value pair: NumBedrooms=3, while a different vendor will have a key/value pair per bedroom: Bedroom=master, Bedroom=small, Bedroom=small. There's nothing particularly hard about the translation, but we spend a lot of time and energy writing and testing translations. How can I optimize this?
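For concreteness, here is a minimal sketch (in C#, since my environment is .Net) of normalizing those two vendor encodings into a single internal bedroom count. The names are invented for illustration; this is the kind of small, fiddly translation we write over and over:

```csharp
// A minimal sketch (names invented for illustration) of normalizing the
// two vendor encodings above into one internal bedroom count.
using System.Collections.Generic;
using System.Linq;

static class BedroomNormalizer
{
    // Vendor A sends a single pair:      ("NumBedrooms", "3")
    // Vendor B sends one pair per room:  ("Bedroom", "master"), ("Bedroom", "small"), ...
    public static int CountBedrooms(IReadOnlyList<KeyValuePair<string, string>> pairs)
    {
        var explicitCount = pairs.FirstOrDefault(p => p.Key == "NumBedrooms");
        if (explicitCount.Key != null)
            return int.Parse(explicitCount.Value);

        return pairs.Count(p => p.Key == "Bedroom");
    }
}
```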

Thanks

(My environment is .Net)


The best place to start is by creating an "internal representation", which is the representation your processing will always work with. Then create translators to and from the "external representations" as needed. I'd imagine this is what you are already doing, but it should be mentioned for completeness. The optimization comes from being able to write importers and exporters selectively, only when you actually need them.
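As a rough sketch, the shape looks something like this (the type names are assumptions, not an existing API). The payoff is that n external formats need at most n importers and n exporters against the hub, instead of up to n×(n-1) pairwise translators:

```csharp
// A minimal sketch of the hub-and-spoke shape described above; the type
// names here are assumptions, not an existing API.
public sealed class House                 // the internal representation ("hub")
{
    public int Bedrooms { get; set; }
    public double SizeSqFt { get; set; }
    public string RoofType { get; set; } = "";
    // ... the other ~30 attributes
}

public interface IImporter<TExternal>     // written only when a new source appears
{
    House Import(TExternal source);
}

public interface IExporter<TExternal>     // written only when a new consumer appears
{
    TExternal Export(House house);
}
```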

A good implementation strategy is to externalize the transformation if you can. If you can get your inputs and outputs into XML documents, then you can write XSLT transforms between your internal and external representations. The goal is to be able to set up a pipeline of transformations from an input XML document to your internal representation. If everything is represented in XML and moved over a common protocol (say... hmm... HTTP), then the whole process can be driven by configuration. BTW - this is essentially the Pipes and Filters design pattern.
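In .Net specifically, chaining XslCompiledTransform steps is one way to realize such a pipeline. A minimal sketch, assuming hypothetical stylesheet file names:

```csharp
// A minimal sketch of a transformation pipeline using the standard
// System.Xml.Xsl API; the stylesheet file names are assumptions.
using System.IO;
using System.Xml.Xsl;

static class XmlPipeline
{
    public static void Run(string inputFile, string outputFile,
                           params string[] stylesheetFiles)
    {
        string current = inputFile;
        for (int i = 0; i < stylesheetFiles.Length; i++)
        {
            var xslt = new XslCompiledTransform();
            xslt.Load(stylesheetFiles[i]);   // compile this pipeline stage

            // The last stage writes the final output; earlier stages go to temp files.
            string next = (i == stylesheetFiles.Length - 1)
                ? outputFile
                : Path.GetTempFileName();

            xslt.Transform(current, next);
            current = next;
        }
    }
}

// Usage (hypothetical file names):
// XmlPipeline.Run("vendorA.xml", "internal.xml", "vendorA-to-internal.xslt");
```

Since each stage is just a stylesheet file, which transforms run, and in what order, can live in configuration rather than code.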

Take a look at Yahoo Pipes, Apache Cocoon, XML pipelines, and NetKernel for inspiration.


My employer back in the 90s faced this problem. We had a standard format we converted the customers' data to and from, as D.Shawley suggests.

I went further and designed a simple format-description language; we described our standard format in that language and then, for a new dataset, we'd write up its format too. Then a program would take both descriptions and convert the data from one format to the other, with automatic type conversions, safety checks, etc. (This came in handy for some other operations as well, not just these initial/final conversions.)

The particulars probably won't help you -- chances are you deal with completely different kinds of data. You can likely profit from the general principle, though. The "data definition language" needn't necessarily be a fancy thing with a parser and scanner; you might define it directly with a data structure in IronPython, say.
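To illustrate the principle, here is one hedged way it might look in C#, with the "format description" expressed as plain data (all names here are invented): the win is that adding a vendor means writing a mapping table, and you only have to test the one generic engine.

```csharp
// A sketch of the same idea in C#: the "format description" is plain data,
// and one generic routine applies it. All names are invented.
using System;
using System.Collections.Generic;

public sealed class FieldMapping
{
    public string SourceKey = "";
    public string TargetKey = "";
    public Func<string, string> Convert = v => v;  // type conversion / safety check hook
}

public static class GenericTranslator
{
    public static Dictionary<string, string> Translate(
        IReadOnlyDictionary<string, string> source,
        IEnumerable<FieldMapping> mappings)
    {
        var result = new Dictionary<string, string>();
        foreach (var m in mappings)
            if (source.TryGetValue(m.SourceKey, out var value))
                result[m.TargetKey] = m.Convert(value);
        return result;
    }
}

// A vendor's format then becomes data, not code, e.g.:
// new FieldMapping { SourceKey = "NumBedrooms", TargetKey = "BedroomCount" }
```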
