I've got the opportunity to rewrite the core of an internally-developed application that my employer uses for document control. My "core" requirements list goes something like this:
- Make it easier to import/export to various formats (collection of files + fairly extensive metadata being the common factor)
- Make it easier to add new fields (whose presence is data-driven rather than global) at multiple levels
- Introduce several new pieces of functionality which violate the fundamental premise of the old system (basically, the structure of metadata surrounding documents is undergoing a radical change)
- Maintain the ability to tightly control document and metadata relations and conventions
I've been playing around with an architecture that uses serialization as its primary means of communication with the world, and so far I'm pleased with the results - I can serialize to & deserialize from a user interface, an XML store, and a database with ease, without modifying the core classes to accommodate the various sources and sinks. I consider this to be fundamentally a hexagonal architecture - it treats every serialization target the same way (as an injectable dependency for the Serialize method).
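To make that concrete, here's a stripped-down sketch of the shape I mean (Document, MetadataWriter, and XmlMetadataWriter are illustrative names, not my actual classes):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The port: an abstraction every source/sink implements. The core class
// writes itself through this interface and never sees XML, SQL, or UI code.
interface MetadataWriter {
    void beginDocument(String id);
    void writeField(String name, String value);
    void endDocument();
}

// Illustrative core class (the real field structure is richer than this).
class Document {
    private final String id;
    private final Map<String, String> fields = new LinkedHashMap<>();

    Document(String id) { this.id = id; }

    void setField(String name, String value) { fields.put(name, value); }

    // The serialization target arrives as an injected dependency.
    void serialize(MetadataWriter writer) {
        writer.beginDocument(id);
        for (Map.Entry<String, String> e : fields.entrySet()) {
            writer.writeField(e.getKey(), e.getValue());
        }
        writer.endDocument();
    }
}

// One adapter per target; an XML sink is shown here.
class XmlMetadataWriter implements MetadataWriter {
    private final StringBuilder out = new StringBuilder();

    @Override public void beginDocument(String id) {
        out.append("<document id=\"").append(id).append("\">\n");
    }
    @Override public void writeField(String name, String value) {
        out.append("  <").append(name).append(">").append(value)
           .append("</").append(name).append(">\n");
    }
    @Override public void endDocument() { out.append("</document>\n"); }

    String toXml() { return out.toString(); }
}
```

A database or UI adapter implements the same three methods, which is why the core classes never change when a new source or sink shows up.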
This is my first go-around with this approach, however, and I'm wondering if anyone has experience with it and, if so, any insights or advice.
My first instinct is that anything that depends heavily on serialization of your core classes is likely to run into hairy versioning issues - changes to your core will require simultaneous modification of all of your serialization providers & consumers (and probably all of your persistent stores), unlike a service/contract-based approach, which would allow the interface to remain static where possible.
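For comparison, here's a rough sketch of the contract-based alternative I'm alluding to (all names are hypothetical): the serialized shape lives in a frozen, versioned type, and a mapper absorbs churn in the core classes.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical core class whose internals are free to change.
class CoreDocument {
    String internalId;
    Map<String, String> internalFields = new LinkedHashMap<>();
}

// Hypothetical frozen wire contract: published once and never restructured.
// A new shape gets a DocumentContractV2 alongside it rather than an edit here.
class DocumentContractV1 {
    public String id;
    public Map<String, String> fields;
}

class DocumentMapper {
    // Core-class changes are absorbed in this one place, so existing
    // serialization providers and persistent stores keep using V1 unchanged.
    static DocumentContractV1 toContract(CoreDocument doc) {
        DocumentContractV1 c = new DocumentContractV1();
        c.id = doc.internalId;
        c.fields = new LinkedHashMap<>(doc.internalFields);
        return c;
    }
}
```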
However, it's hard to offer a firm opinion without making a lot of assumptions about how the system will be used and evolve over time - if you're happy with the approach, continue with it & let us know how it goes.