In TDD, how many tests should I write for a method?

https://www.devze.com 2023-03-02 15:05 Source: web
I want to implement a method that tells me whether the coordinates (x and y) are out of bounds. How many tests should I write? To me it seems to be 5:

  1. Test for negative x over bound
  2. Test for positive x over bound
  3. Test for negative y over bound
  4. Test for positive y over bound
  5. Test for within bounds

Am I creating redundant tests, or should I only have 1 test for each method I want to implement?
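For concreteness, here is what those five tests might look like as plain assertions (a sketch in Python; the `out_of_bounds` implementation and the rectangle with corners (5, 10) and (15, 20) are assumptions for illustration, not from the question):

```python
# Hypothetical implementation under test: a rectangle with
# corners (5, 10) and (15, 20); anything outside it is out of bounds.
def out_of_bounds(x, y):
    return x < 5 or x > 15 or y < 10 or y > 20

assert out_of_bounds(-1, 15)      # 1. negative x over bound
assert out_of_bounds(16, 15)      # 2. positive x over bound
assert out_of_bounds(10, 9)       # 3. negative y over bound
assert out_of_bounds(10, 21)      # 4. positive y over bound
assert not out_of_bounds(10, 15)  # 5. within bounds
```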


This isn't usually the way we think about it in TDD. It's more: "what test do I need next?" So, typically, I'd start with (pseudocode)

given: bounds (5, 10, 15, 20)
assert: outOfBounds(0, 0)

and make that pass with

outOfBounds(x, y): return true

But I know that's not real yet, so I know I need another test.

assert: !outOfBounds(5, 10)

So now that fails. What's the simplest thing that could possibly work? Maybe

outOfBounds(x, y): return x == 0

Of course I know I'm still faking it, so I need another test. This keeps going 'til I'm not faking it any more. Maybe, in this case, I'd wind up with the same 5 cases you listed in your question - but maybe I'll realize I'm done a little sooner than that.

A better question is: Do I need another test?
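Sketched in Python, that progression might end up somewhere like this (a hypothetical sequence under the assumption that bounds (5, 10, 15, 20) describe a rectangle with corners (5, 10) and (15, 20) - not the author's literal code):

```python
# Step 1: first test, (0, 0) is out of bounds.
#   Fake it:        def out_of_bounds(x, y): return True
# Step 2: second test, (5, 10) is in bounds, so the fake fails.
#   Fake it again:  def out_of_bounds(x, y): return x == 0
# Step 3: further tests kill each fake until the real logic emerges:
def out_of_bounds(x, y):
    return x < 5 or x > 15 or y < 10 or y > 20

assert out_of_bounds(0, 0)       # the first test still passes
assert not out_of_bounds(5, 10)  # the second test passes
assert out_of_bounds(16, 10)     # a later test that forced real logic
```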


You need to write sufficient tests to cover off the behaviour you expect to see from your method - no more, no less.

Indeed, if you're practising TDD (as the title suggests) then the behaviour of your method should have been driven out by the tests you wrote, rather than the other way around - so you will already have found the optimal number of tests for the functionality you've written to make them pass. (Though it's common to think of edge cases and failure cases after having driven out the happy-path functionality, which I guess is what's happened here?)

For this specific case, the five tests you've described here sound perfectly sensible to me.


A previous employer hired Kent Beck to do a two-day seminar on TDD for our group, and I asked him something very similar, along the lines of "How do you know when you have enough tests?" His answer was "Do you feel like you have enough tests?" Of course, he wasn't asking "Do you feel like you've done enough work for today?" or "Would you rather be fishing? If so, stop writing tests." His point was: when you think you've exhausted all the ways your unit can be tested and shown to work (or fail) correctly, then you're done.

And of course, when you find a bug in that unit, then you realize "Maybe I wasn't done." Then you add more tests, and then fix your bug.


In my opinion, I would fall back on the rule of thumb of testing with good data, bad data, and no data. So for a method with one input and a return value, I would need a minimum of three tests. I'd like to hear what others think of this approach.


I would, personally, say that you need one test case.

Within that case you should check all the boundaries that you need to.

So, 1 test 'method' that checks the 5 boundaries.
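With Python's unittest, that single test method might look like this (a sketch; the `out_of_bounds` implementation and the rectangle with corners (5, 10) and (15, 20) are assumptions for illustration):

```python
import unittest

# Hypothetical implementation under test: rectangle (5, 10)-(15, 20).
def out_of_bounds(x, y):
    return x < 5 or x > 15 or y < 10 or y > 20

class OutOfBoundsTest(unittest.TestCase):
    def test_boundaries(self):
        # One test method covering all five boundary cases.
        self.assertTrue(out_of_bounds(-1, 15))   # negative x over bound
        self.assertTrue(out_of_bounds(16, 15))   # positive x over bound
        self.assertTrue(out_of_bounds(10, 9))    # negative y over bound
        self.assertTrue(out_of_bounds(10, 21))   # positive y over bound
        self.assertFalse(out_of_bounds(10, 15))  # within bounds

if __name__ == "__main__":
    # exit=False keeps the runner from calling sys.exit();
    # argv is pinned so stray command-line args are ignored.
    unittest.main(argv=["out_of_bounds_test"], exit=False)
```

Whether one method with five assertions beats five one-assertion tests is a trade-off: a single method is compact, but separate tests pinpoint exactly which boundary broke.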
