Design: Separating Data and its view


There's some legacy code that I would like to refactor.

There is some data obtained by reading some registers. This data is represented in CSV and XML files.

The current approach is messy: there is no separation between the data and its view (XML, CSV), so the data collection is repeated for each output format.

To give you a picture, it currently looks like this:

void A::Timestamp()
{
  // Does some data collection and dumps it to a CSV file.
  // The header for this CSV file is built in the PreTimeStamp function.
  // Depending on some command-line options, certain columns are added.
  filehndle << data1 << "," << data2 << "," << data3;

  if (cmd_line_opt1)
  {
    filehndle << "," << statdata1 << "," << statdata2;
  }
}

void A::PreTimeStamp()
{
  // Header for the CSV file.
  filehndle << "start, end, delta";
  if (cmd_line_opt1)
  {
    filehndle << "," << "statdata1, statdata2";
  }
}

There's another class, B, whose Profile() method does the data collection the same way A::Timestamp() does, but dumps the data as XML.

I want to refactor this so that the data collection happens in one common place, and then use adapters for CSV and XML that take the data and dump it in the corresponding format.

Now I need some help on what model I could use to represent the data. The data collected is not fixed, so I can't model it as a struct or some static type; the columns added to the CSV file depend on command-line options.

And the second question: how could I plug classes such as xmlWriter and CsvWriter into this data model?


I recommend using the Strategy pattern for this. The Timestamp and PreTimeStamp declarations would be pure virtual (i.e. virtual void Timestamp() = 0;) in a 'Dumper' interface, and the Dumper_A and Dumper_B implementations would override them. The class collecting the data would then be assigned the appropriate Dumper implementation to handle dumping the data.
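A minimal sketch of that idea, assuming the record is just start/end/delta for illustration (the names Dumper, CsvDumper, XmlDumper and Collect are placeholders, not anything from the original code):

#include <iostream>

// Abstract strategy: the collecting code only ever talks to this interface.
class Dumper {
public:
    virtual ~Dumper() = default;
    virtual void PreTimeStamp() = 0;                                     // header / prologue
    virtual void Timestamp(double start, double end, double delta) = 0;  // one record
};

// One concrete strategy per output format.
class CsvDumper : public Dumper {
public:
    explicit CsvDumper(std::ostream& out) : out_(out) {}
    void PreTimeStamp() override { out_ << "start,end,delta\n"; }
    void Timestamp(double start, double end, double delta) override {
        out_ << start << ',' << end << ',' << delta << '\n';
    }
private:
    std::ostream& out_;
};

class XmlDumper : public Dumper {
public:
    explicit XmlDumper(std::ostream& out) : out_(out) {}
    ~XmlDumper() override { out_ << "</samples>\n"; }   // close the document
    void PreTimeStamp() override { out_ << "<samples>\n"; }
    void Timestamp(double start, double end, double delta) override {
        out_ << "  <sample start=\"" << start << "\" end=\"" << end
             << "\" delta=\"" << delta << "\"/>\n";
    }
private:
    std::ostream& out_;
};

// The collection code runs once; the assigned Dumper decides the format.
void Collect(Dumper& dumper) {
    dumper.PreTimeStamp();
    dumper.Timestamp(0.0, 1.5, 1.5);   // values would really come from the registers
}

int main() {
    CsvDumper csv(std::cout);
    Collect(csv);                      // pass an XmlDumper instead to get XML
}

The point is that Collect() is written exactly once; swapping the Dumper changes the view without touching the collection code.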


Is it tabular data? If so, you might consider using a vector of vectors.

The way I would structure this is to have the data collection implemented in an abstract base class, and then have subclasses for the XML and CSV versions that implement the write functions.
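A rough sketch of that structure, assuming a vector of vectors of strings as the tabular model (Report, CsvReport, XmlReport and the sample values are made up for illustration):

#include <iostream>
#include <string>
#include <vector>

using Row   = std::vector<std::string>;
using Table = std::vector<Row>;   // tabular data: a vector of vectors

// The base class owns the data collection; subclasses only know how to write it.
class Report {
public:
    virtual ~Report() = default;

    void Collect(bool opt1) {
        header_ = {"start", "end", "delta"};
        Row row = {"0", "10", "10"};            // would come from the registers
        if (opt1) {
            header_.insert(header_.end(), {"statdata1", "statdata2"});
            row.insert(row.end(), {"1", "2"});
        }
        rows_.push_back(row);
    }

    void Write(std::ostream& out) const { DoWrite(out, header_, rows_); }

protected:
    virtual void DoWrite(std::ostream& out, const Row& header, const Table& rows) const = 0;

private:
    Row header_;
    Table rows_;
};

class CsvReport : public Report {
protected:
    void DoWrite(std::ostream& out, const Row& header, const Table& rows) const override {
        auto line = [&out](const Row& r) {
            for (std::size_t i = 0; i < r.size(); ++i) out << (i ? "," : "") << r[i];
            out << '\n';
        };
        line(header);
        for (const Row& r : rows) line(r);
    }
};

class XmlReport : public Report {
protected:
    void DoWrite(std::ostream& out, const Row& header, const Table& rows) const override {
        out << "<rows>\n";
        for (const Row& r : rows) {
            out << "  <row>";
            for (std::size_t i = 0; i < r.size() && i < header.size(); ++i)
                out << "<" << header[i] << ">" << r[i] << "</" << header[i] << ">";
            out << "</row>\n";
        }
        out << "</rows>\n";
    }
};

int main() {
    CsvReport csv;
    csv.Collect(/*opt1=*/true);   // same collection path for every format
    csv.Write(std::cout);
}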


Leaving aside the question of whether or not you will actually get any tangible benefit from this work, I would say XML itself is a good choice for an intermediate format, so long as you don't need high performance. You can represent any document with it, there is a good tool chain around it, it's somewhat human readable (though not as readable as alternatives like YAML), and you are already using it as one of the native formats of your data. I don't think introducing a third format like YAML or JSON is going to be worth your time.


I would use XML as the intermediate format and write several (one for now) XSL transforms to convert the data into the other formats I need. It would be pretty simple to transform to CSV.


Looking at your requirements, an MVC pattern might be of great help to you.

You have the data (model) and you have the controller (events, command-line options), so the only thing you need is the view. You can have an abstract view base class and then derive your specific view classes, such as XML and CSV (maybe more in future), each of which presents the model in a specific format.
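A small sketch of what that MVC split could look like (Model, View, CsvView, XmlView and the --xml flag are illustrative names, not an established API):

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Model: the collected data, independent of any output format.
struct Model {
    std::vector<std::string>                        columns;
    std::vector<std::map<std::string, std::string>> rows;   // column name -> value
};

// Abstract view: each format subclasses this.
class View {
public:
    virtual ~View() = default;
    virtual void Render(const Model& m, std::ostream& out) const = 0;
};

class CsvView : public View {
public:
    void Render(const Model& m, std::ostream& out) const override {
        for (std::size_t i = 0; i < m.columns.size(); ++i)
            out << (i ? "," : "") << m.columns[i];
        out << '\n';
        for (const auto& row : m.rows) {
            for (std::size_t i = 0; i < m.columns.size(); ++i)
                out << (i ? "," : "") << row.at(m.columns[i]);
            out << '\n';
        }
    }
};

class XmlView : public View {
public:
    void Render(const Model& m, std::ostream& out) const override {
        out << "<rows>\n";
        for (const auto& row : m.rows) {
            out << "  <row>";
            for (const auto& col : m.columns)
                out << "<" << col << ">" << row.at(col) << "</" << col << ">";
            out << "</row>\n";
        }
        out << "</rows>\n";
    }
};

// Controller: picks the view from the command-line options and drives rendering.
int main(int argc, char** argv) {
    Model m;
    m.columns = {"start", "end", "delta"};
    m.rows.push_back({{"start", "0"}, {"end", "10"}, {"delta", "10"}});

    const bool wantXml = (argc > 1 && std::string(argv[1]) == "--xml");
    CsvView csv; XmlView xml;
    const View* view = wantXml ? static_cast<const View*>(&xml) : &csv;
    view->Render(m, std::cout);
}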


You need a structure that includes every possible collected datum for a particular iteration, plus an adapter class that maps textual fields (the desired property labels in the output) to member pointers in that structure (and possibly to functions for parsing or generating the desired type from the textual input or output). If it is not possible or desirable to have a structure with every possible collected datum, you can just use a map, treating it like a prototype-based object.
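For example, something along these lines, where Sample, Column and MakeColumns are hypothetical names (the member-pointer table plays the role of the adapter):

#include <iostream>
#include <string>
#include <vector>

// One structure holding every datum that could possibly be collected.
struct Sample {
    double start = 0, end = 0, delta = 0;
    double statdata1 = 0, statdata2 = 0;
};

// Adapter entry: output label -> pointer to the member that supplies it.
struct Column {
    std::string     label;
    double Sample::*member;
};

// The active columns depend on the command-line options, not on the format.
std::vector<Column> MakeColumns(bool opt1) {
    std::vector<Column> cols = {
        {"start", &Sample::start}, {"end", &Sample::end}, {"delta", &Sample::delta}};
    if (opt1) {
        cols.push_back({"statdata1", &Sample::statdata1});
        cols.push_back({"statdata2", &Sample::statdata2});
    }
    return cols;
}

// Any writer (CSV, XML, ...) can iterate the same adapter table.
void WriteCsvRow(std::ostream& out, const Sample& s, const std::vector<Column>& cols) {
    for (std::size_t i = 0; i < cols.size(); ++i)
        out << (i ? "," : "") << s.*(cols[i].member);
    out << '\n';
}

int main() {
    Sample s;
    s.start = 0.0; s.end = 10.0; s.delta = 10.0; s.statdata1 = 1.0; s.statdata2 = 2.0;

    auto cols = MakeColumns(/*opt1=*/true);
    for (std::size_t i = 0; i < cols.size(); ++i)
        std::cout << (i ? "," : "") << cols[i].label;   // header
    std::cout << '\n';
    WriteCsvRow(std::cout, s, cols);                    // data row
}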
