Unit testing - How to set up test data when stubbing database data

https://www.devze.com 2022-12-18 05:57 Source: web
In our unit testing, I've got a stub object that is creating a set of data in memory to be used during unit testing so that the live database is not used.

I have unit tests that check the number of rows returned from this set by the query under test, given the values supplied to the query in each test. My first issue is that because we are using MSTest, which does not support parameterized tests, we have one test for each different set of values and have ended up with many, many tests that differ only in the values supplied to the one routine. It may be politically difficult to use a different testing framework.

Also, working with the data is somewhat unwieldy, as it is created by adding entities to a set through code, so it's difficult to see at a glance what data is in the set. If we decide to add records to this set in the future, we need to update the number of records that each test expects to be returned, so our tests depend very tightly on this data. There seems to be no way to automate this. Is that the case?
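For illustration, one way that coupling can be loosened is to declare the stub data in one visible place and compute the expected counts from that same data, so adding records later does not silently break tests. A minimal sketch in Python (the question's actual code is presumably C#/MSTest; the repository and field names here are hypothetical):

```python
# Hypothetical in-memory stub: the data is declared in one readable literal,
# and expected counts are derived from it rather than hard-coded in each test.
STUB_ORDERS = [
    {"id": 1, "customer": "A", "status": "open"},
    {"id": 2, "customer": "B", "status": "closed"},
    {"id": 3, "customer": "A", "status": "open"},
]

class StubOrderRepository:
    """Stands in for the live database during unit tests."""

    def __init__(self, rows):
        self.rows = list(rows)

    def find_by_status(self, status):
        # The 'query under test' in this sketch: filter rows by status.
        return [r for r in self.rows if r["status"] == status]

repo = StubOrderRepository(STUB_ORDERS)
# Expected count computed from the declarative data, not a magic number:
expected = sum(1 for r in STUB_ORDERS if r["status"] == "open")
assert len(repo.find_by_status("open")) == expected
```

Because the expectation is derived from the same literal the stub serves, appending a record to `STUB_ORDERS` updates both sides at once.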


  1. Since you already ruled out using another unit-testing framework, how about writing your own take on parameterized tests? Write a test that loops through the different data sets, calling a private helper method with different parameters. Collect the result of each data-set run in a 'collecting parameter'. I'd suggest logging only errors/failed data sets to reduce noise. At the end of the loop, if the collecting parameter is not empty, issue the equivalent of Assert.Fail and log the results to the console. (The downside is that you can't see individual tests in the GUI, and if the organization is monitoring the number of tests, you get only +1 for all this work.)
  2. This approach also lets you make the failure message as specialized as you wish: you can include the essential bits in the failure trace, which helps you quickly see which scenario failed.
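The loop-with-collecting-parameter pattern above can be sketched as follows, shown here in Python with `unittest` rather than C#/MSTest; `query_under_test`, the data, and the parameter sets are hypothetical stand-ins:

```python
import unittest

def query_under_test(rows, min_value):
    # Hypothetical routine: count rows at or above a threshold.
    return sum(1 for r in rows if r >= min_value)

class QueryTests(unittest.TestCase):
    DATA = [1, 5, 7, 9]  # the in-memory stub data

    def test_row_counts_for_all_parameter_sets(self):
        # Each tuple is (parameter, expected row count).
        cases = [(0, 4), (5, 3), (8, 1), (10, 0)]
        failures = []  # the 'collecting parameter'
        for min_value, expected in cases:
            actual = self._run_case(min_value)
            if actual != expected:
                # Log only the failed data sets, with a specific message.
                failures.append(
                    f"min_value={min_value}: expected {expected}, got {actual}")
        if failures:
            # The equivalent of Assert.Fail with the collected details.
            self.fail("\n".join(failures))

    def _run_case(self, min_value):
        # Private helper called once per data set.
        return query_under_test(self.DATA, min_value)
```

The trade-off the answer mentions holds here too: the runner sees one test, so a single failing data set fails the whole test, but the failure message pinpoints which set broke.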


Have a look at how Visual Studio 2010 Ultimate edition does this for database testing (you can download a fully configured VPC).

One option would be to add a "context" to your tests, so that when you initialize a test, the context is initialized with the parameters required for that test. You can either access the parameters via code in your test method or dynamically assign them to the code under test (which might not be the best option).
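A minimal sketch of that context idea, again in Python with `unittest` (in MSTest the analogue would be `TestContext` or a base-class field; all names here are hypothetical):

```python
import unittest

class ContextualTestBase(unittest.TestCase):
    """Hypothetical base class: setUp copies a class-level 'context'
    into the test instance, so each test class declares its parameters
    in one visible place instead of scattering literals through methods."""

    context = {}

    def setUp(self):
        self.params = dict(self.context)

class OpenOrderTests(ContextualTestBase):
    # The parameters this test needs, declared as its context.
    context = {"status": "open", "expected_count": 2}

    def test_query_uses_context_parameters(self):
        rows = [{"status": "open"}, {"status": "closed"}, {"status": "open"}]
        # The test body reads its parameters from the context.
        matches = [r for r in rows if r["status"] == self.params["status"]]
        self.assertEqual(len(matches), self.params["expected_count"])
```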

You can also add the expected results, or better yet, conditions that the test result should satisfy. These conditions can be initialized from some sort of data source (e.g. a database) and added as a data set. Create a method that evaluates the conditions for the test method.

Consider building specific classes to handle different context settings or conditions, and create a base test class (which adds this functionality) from which your test classes can inherit.
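Combining the last two suggestions, a base test class that evaluates a data-driven set of conditions might look like this Python sketch (the condition list and class names are hypothetical; in practice the conditions could be loaded from a file or database):

```python
import unittest

# Hypothetical conditions a query result must satisfy; each entry pairs a
# human-readable description with a predicate over the returned rows.
CONDITIONS = [
    ("at least one row returned", lambda rows: len(rows) >= 1),
    ("all rows have open status", lambda rows: all(r["status"] == "open"
                                                  for r in rows)),
]

class ConditionTestBase(unittest.TestCase):
    """Base class adding a helper that checks every condition and
    reports all unmet ones in a single failure message."""

    def assert_conditions(self, rows, conditions):
        failed = [name for name, pred in conditions if not pred(rows)]
        if failed:
            self.fail("unmet conditions: " + ", ".join(failed))

class OpenOrderQueryTests(ConditionTestBase):
    def test_result_meets_all_conditions(self):
        rows = [{"status": "open"}, {"status": "open"}]  # stubbed result
        self.assert_conditions(rows, CONDITIONS)
```

Because the conditions describe properties of the result rather than exact counts, growing the stub data set later only breaks tests whose conditions genuinely no longer hold.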
