We're re-evaluating the database upgrade process for our application, to remove the pain of generating all of the upgrade scripts for a release at the end of the release cycle. We're looking to move towards a more evolutionary process using migrations that are checked in alongside features, with a tool such as migratordotnet, and this seems like a very clean way of managing schema changes.
However, the default data shipped with our database changes quite regularly, and some of these data updates do not fit well with the migration process. For example, inserts into tables with an Identity primary key are not easily identifiable afterwards, and so cannot be reversed on downgrading.
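For illustration, here is a minimal sketch of the kind of migration involved, assuming migratordotnet's API and a hypothetical `Settings` table with a unique `Name` column; deleting in `Down()` by a natural key rather than the generated identity value is one possible workaround:

```csharp
using Migrator.Framework;

// Hypothetical example: the default-data row is identified by a natural
// key (Name), so Down() can remove it without ever knowing the
// database-generated identity value.
[Migration(20080805120000)]
public class AddDefaultSmtpSetting : Migration
{
    public override void Up()
    {
        // The Identity PK is generated by the database; we never reference it.
        Database.Insert("Settings",
            new[] { "Name", "Value" },
            new[] { "SmtpHost", "localhost" });
    }

    public override void Down()
    {
        // Reverse the insert via the natural key instead of the unknown identity.
        Database.Delete("Settings",
            new[] { "Name" },
            new[] { "SmtpHost" });
    }
}
```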
So I'm wondering how people manage the migration of default data. Do they manage it outside of the schema migration process? Or are inserts performed during migrations, with the removal of the data simply not performed during downgrades?
For us, DB migration is part of the daily development process. Developers have to commit either a program or a script which performs the necessary changes. This happens as soon as the related feature is implemented, and never "at the end of the release cycle".
Downgrades are rarely an issue, but if you need them, add a column carrying version information. When the upgrade succeeds and the customer decides to stay with the new version, drop the column again (or keep it).
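A sketch of the version-column idea, again using migratordotnet to stay consistent with the question; the table and column names here are assumptions, not from the original post:

```csharp
using System.Data;
using Migrator.Framework;

// Sketch: tag rows with the schema version that introduced them, so a
// downgrade can find and remove (or simply ignore) data from newer versions.
[Migration(20080805130000)]
public class AddVersionColumnToSettings : Migration
{
    public override void Up()
    {
        Database.AddColumn("Settings", "IntroducedInVersion", DbType.Int32);
    }

    public override void Down()
    {
        // Once the customer commits to the new version, this column can be dropped.
        Database.RemoveColumn("Settings", "IntroducedInVersion");
    }
}
```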
The key to success is that we have extensive test cases which create the database from scratch (using an in-memory DB like H2, or a DB which is installed on each developer's machine), including all data, and then migrate it all the way up and back down. We can import anonymized data from the production servers into the test cases to track down bugs and improve the tests without violating our customers' privacy or getting in the way of the developers.
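A rough sketch of such a round-trip test, assuming NUnit, migratordotnet's `Migrator` class, and a throwaway SQLite database standing in for H2 (which is Java-only); the provider name and connection string are assumptions:

```csharp
using System.Reflection;
using Migrator;
using NUnit.Framework;

[TestFixture]
public class MigrationRoundTripTests
{
    [Test]
    public void Schema_migrates_all_the_way_up_and_back_down()
    {
        // A throwaway local database; swap in whatever provider and
        // connection string your environment uses.
        var migrator = new Migrator.Migrator(
            "SQLite",
            "Data Source=roundtrip-test.db",
            Assembly.GetExecutingAssembly());

        migrator.MigrateToLastVersion(); // build schema + default data from scratch
        migrator.MigrateTo(0);           // then unwind every migration

        // If both passes complete without throwing, every Up() has a working Down().
    }
}
```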