It seems that in many unit tests, the values that parameterize a test are either baked into the test itself or declared in some predetermined way.
For example, here is a test taken from NUnit's own unit tests (EqualsFixture.cs):
[Test]
public void Int()
{
    int val = 1;
    int expected = val;
    int actual = val;

    Assert.IsTrue(expected == actual);
    Assert.AreEqual(expected, actual);
}
This has the advantage of being deterministic: if you run the test once and it fails, it will continue to fail until the code is fixed. However, you end up testing only a limited set of values.
I can't help but feel like this is a waste, though; the exact same test is probably run with the exact same parameters hundreds if not thousands of times across the life of a project.
What about randomizing as much input to all unit tests as possible, so that each run has a shot of revealing something new?
In the previous example, perhaps:
[Test]
public void Int()
{
    Random rnd = new Random();
    int val = rnd.Next();
    int expected = val;
    int actual = val;

    Console.WriteLine("val is {0}", val);

    Assert.IsTrue(expected == actual);
    Assert.AreEqual(expected, actual);
}
(If the code expected a string, perhaps a random string known to be valid for the particular function could be used each time)
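For the string case, one way to sketch that idea (the helper, alphabet, and length here are my own assumptions, not from the question) is to build a random string from an alphabet known to be valid for the function under test:

```csharp
using System;
using System.Linq;

public static class RandomInput
{
    // Build a random string from characters assumed valid for the
    // function under test; the alphabet and length are illustrative.
    public static string ValidString(Random rnd, int length)
    {
        const string alphabet = "abcdefghijklmnopqrstuvwxyz";
        return new string(Enumerable.Range(0, length)
            .Select(_ => alphabet[rnd.Next(alphabet.Length)])
            .ToArray());
    }
}
```

Each run then feeds a different, but always well-formed, string into the test.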
The benefit would be that the more times you run a test, the larger the set of values you know it can handle correctly.
Is this useful? Evil? Are there drawbacks to this? Am I completely missing the point of unit testing?
Thank you for your thoughts.
You want your unit tests to be repeatable so that they will always behave in the same way unless the code changes. Then, if the code changes and causes the unit test to fail, you can fix the code and the unit test has served its purpose. Furthermore, you know that the code is [probably] fixed when the unit test passes again.
Having random unit tests could find unusual errors, but it shouldn't be necessary. If you know how the code works (compare white-box and black-box approaches to testing), using random values shouldn't ever reveal anything that well-thought-out non-random unit tests wouldn't. And I'd hate to be told "run the tests a few times and this error should appear".
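If you do use random inputs despite this, one common compromise (a sketch of my own, not something from the question) is to log the seed so a failing run can be replayed deterministically:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class SeededRandomTests
{
    [Test]
    public void Int_WithLoggedSeed()
    {
        // Pick a fresh seed per run, but record it so that a failure
        // can be reproduced by re-running with the same seed.
        int seed = Environment.TickCount;
        Console.WriteLine("Seed: {0}", seed);
        var rnd = new Random(seed);

        int val = rnd.Next();
        int expected = val;
        int actual = val;

        Assert.AreEqual(expected, actual);
    }
}
```

That at least turns "run the tests a few times and this error should appear" into "re-run with seed N and the error appears".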
What you are proposing makes a lot of sense, provided that you do it correctly. You don't necessarily always have to listen only to conventional wisdom that says that you must never have non-determinism in your tests.
What is really important is that each test must always exercise the same code path. That is not quite the same thing.
You can adopt what I call Constrained Non-Determinism in unit testing. This can drive you towards a more Specification-Oriented way of writing tests.
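A sketch of what constrained non-determinism might look like for the example above (the particular range is my assumption): the input varies between runs, but it is constrained so that every possible value exercises the same code path.

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class ConstrainedTests
{
    [Test]
    public void Int_AnyPositiveBoundedValue()
    {
        // Any value in this range takes the same branch through the
        // code under test, so the test is non-deterministic in its
        // data but deterministic in the code path it exercises.
        var rnd = new Random();
        int val = rnd.Next(1, 1000); // constrained: positive and bounded

        int expected = val;
        int actual = val;

        Assert.AreEqual(expected, actual);
    }
}
```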
The tests should cover your use cases, no more.
Have a look at PEX.
The big problem I have with this is that since the input is random, the test may not cause a failure until it has been run 200, 2,000, 3,000 or more times. If it fails on try 6,007, months or years after the test was written, it essentially means you have had a bug for months or years and never knew.
Instead I think it is much more useful to know your corner cases and test them all specifically. In other words think about what data could break your code and test for it.
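With NUnit, enumerating the corner cases deliberately might look like this (a sketch; the boundary values below are the usual suspects for an int, chosen by me):

```csharp
using NUnit.Framework;

[TestFixture]
public class CornerCaseTests
{
    // Test the boundaries explicitly instead of hoping that a random
    // draw eventually lands on one of them.
    [TestCase(0)]
    [TestCase(1)]
    [TestCase(-1)]
    [TestCase(int.MaxValue)]
    [TestCase(int.MinValue)]
    public void Int_HandlesBoundaryValues(int val)
    {
        int expected = val;
        int actual = val;

        Assert.AreEqual(expected, actual);
    }
}
```

Every run checks every corner case, so a regression shows up immediately rather than on some unlucky future run.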
Are you testing Random or the equality operator?
It seems to me that you would choose typical values plus boundary conditions, or brute-force the entire integer set. Simply choosing random integers doesn't help with either approach.
To answer your final question, what is the point of unit testing, my feeling is this: the value of proving repeatable results exceeds the cost of writing the tests.
If your application is non-deterministic, you are wasting your time with testing, unless it is a very small application.