Basically I have been programming for a little while, and after finishing my last project I can fully understand how much easier it would have been if I'd done TDD. I guess I'm still not doing it strictly, as I am still writing code and then writing a test for it; I don't quite get how the test comes before the code if you don't know what structures you'll use and how you're storing data, etc... but anyway...
It's kind of hard to explain, but let's say for example I have a Fruit object with properties like id, color and cost. (It's all stored in a text file; completely ignore any database logic etc.)
FruitID  FruitName  FruitColor  FruitCost
1        Apple      Red         1.2
2        Apple      Green       1.4
3        Apple      HalfHalf    1.5
This is all just for example. But let's say I have a collection of Fruit objects (it's a List<Fruit>) in this structure. And my logic says to reorder the fruit IDs in the collection if a fruit is deleted (this is just how the solution needs to be).
E.g. if fruit 1 is deleted, object 2 takes fruit ID 1, and object 3 takes fruit ID 2.
Now I want to test the code I've written which does the reordering, etc.
How can I set this up to do the test?
Here is where I've got so far. Basically I have a FruitManager class with all the methods, like DeleteFruit, etc. It usually owns the list, but I've changed the method to test it so that it accepts a list and the info on the fruit to delete, then returns the list.
Unit-testing wise: am I basically doing this the right way, or have I got the wrong idea? And then I test deleting objects with different values / datasets to ensure the method is working properly.
[Test]
public void DeleteFruit()
{
    var fruitList = CreateFruitList();
    var fm = new FruitManager();
    var resultList = fm.DeleteFruitTest("Apple", 2, fruitList);
    // Assert that the fruit object with those properties is not in the list? How?
}

private static List<Fruit> CreateFruitList()
{
    // Build test data (values taken from the example table above)
    var f01 = new Fruit { Name = "Apple", Id = 1, Color = "Red", Cost = 1.2 };
    var f02 = new Fruit { Name = "Apple", Id = 2, Color = "Green", Cost = 1.4 };
    var f03 = new Fruit { Name = "Apple", Id = 3, Color = "HalfHalf", Cost = 1.5 };
    var fruitList = new List<Fruit> { f01, f02, f03 };
    return fruitList;
}
If you don't see what test you should start with, it's probably because you haven't thought about what your functionality should do in simple terms. Try to imagine a prioritized list of the basic behaviours that are expected.
What's the first thing you would expect from a Delete() method? If you had to ship the Delete "product" in 10 minutes, what would be the non-negotiable behaviour included? Well... probably that it deletes the element.
So:
1) [Test]
public void Fruit_Is_Removed_From_List_When_Deleted()
When that test is written, go through the whole TDD loop (execute the test => red; write just enough code to make it pass => green; refactor => green).
The next important thing related to this is that the method shouldn't modify the list if the fruit passed as an argument is not in the list. So the next test could be:
2) [Test]
public void Invalid_Fruit_Changes_Nothing_When_Deleted()
The next thing you specified is that IDs should be rearranged when a fruit is deleted:
3) [Test]
public void Fruit_Ids_Are_Reordered_When_Fruit_Is_Deleted()
What to put in that test? Well, just set up a basic but representative context that will prove your method behaves as expected.
For example, create a list of 4 fruits, delete the first, and check one by one that the 3 remaining fruits' IDs are reordered properly (a sketch follows this list). That would cover the basic scenario pretty well.
Then you could create unit tests for error or borderline cases:
4) [Test]
public void Fruit_Ids_Arent_Reordered_When_Last_Fruit_Is_Deleted()
5) [Test]
[ExpectedException]
public void Exception_Is_Thrown_When_Fruit_List_Is_Empty()
...
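For instance, test 3 might be sketched like this. (This is only a sketch: the FruitManager(List<Fruit>) constructor, the Delete(int) method and the Fruits property are assumptions here; adapt them to whatever interface you settle on.)

[Test]
public void Fruit_Ids_Are_Reordered_When_Fruit_Is_Deleted()
{
    // Arrange: four fruits with successive IDs 1..4
    var fruits = new List<Fruit>
    {
        new Fruit { Id = 1, Name = "Apple" },
        new Fruit { Id = 2, Name = "Apple" },
        new Fruit { Id = 3, Name = "Apple" },
        new Fruit { Id = 4, Name = "Apple" }
    };
    var fm = new FruitManager(fruits);

    // Act: delete the first fruit
    fm.Delete(1);

    // Assert: the three remaining fruits now carry IDs 1..3
    Assert.AreEqual(3, fm.Fruits.Count);
    for (var i = 0; i < fm.Fruits.Count; i++)
    {
        Assert.AreEqual(i + 1, fm.Fruits[i].Id);
    }
}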
Before you actually start writing your first test, you are supposed to have a rough idea about the structure / design of your app, the interfaces etc. The design phase is often sort of implied with TDD.
I guess for an experienced developer it is sort of obvious: reading a problem specification, (s)he immediately starts to visualize the design of the solution in his/her head; this may be the reason why it is often taken for granted. However, for a less experienced developer, the design activity may need to be a more explicit undertaking.
Either way, after the first sketch of design is ready, TDD can be used both to verify behaviour and check the soundness / usability of the design itself. You may start writing your first unit test, then realize "oh, it is actually pretty awkward to do this with the interface I envisioned" - then you go back and redesign the interface. It is an iterative approach.
Josh Bloch talks about this in "Coders at Work" - he usually writes a lot of use cases for his interfaces even before starting to implement anything. So he sketches the interface, then writes code which uses it in all the different scenarios he can think of. It is not compilable yet - he uses it simply to get a feel for whether or not his interface is really helping to accomplish things easily.
Unit-testing wise: Am I basically doing this the right way, or have I got the wrong idea?
You've missed the boat.
I don't quite get how the test becomes before the code if you don't know what structures and how you're storing data
This is the point I think you need to return to, if you want the ideas to make sense.
First point: data structures and storage derive from what you need the code to do, not the other way around. In more detail, if you are starting from scratch there are any number of structure/storage implementations you can use; indeed, you should be able to swap between them without needing to change your tests.
Second point: In most cases, you consume your code more often than you produce it. You write it once, but you (and your colleagues) call it many times. Therefore, the convenience of calling the code ought to get a higher priority than it would if you were writing your solution purely from the inside out.
So when you find yourself writing a test, and discovering that the client implementation is ugly/clumsy/unsuitable, it sets off a warning for you before you've even started to implement anything. Likewise, if you find yourself writing a lot of setup code in your tests, it tells you that you haven't really got your concerns well separated. When you find yourself saying "wow, that test was easy to write", then you've probably got an interface that's easy to use.
It's very hard to reach this when you are using implementation oriented examples (like writing a test for a container). What you need is a well bounded toy problem, independent of implementation.
For a trivial example, you might consider an authentication manager - pass in an identifier and a secret, and find out whether the secret matches the identifier. So you should be able to write three quick tests right off the top: verify that the correct secret allows access, verify that an incorrect secret forbids access, verify that when a secret is changed, only the new version allows access.
So you perhaps write some simple tests with usernames and passwords. And as you do so, you realize that secrets shouldn't be limited to strings, but that you should be able to make a secret from anything serializable, and that maybe access isn't universal, but restricted (does that concern the authentication manager? maybe not) and oh you'll want to demonstrate that the secrets are kept safely....
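For illustration only, those first three tests might look something like this. (All the names here - IAuthenticationManager, Authenticate, ChangeSecret, CreateManagerWithUser - are invented for the example; the helper deliberately throws until you wire in the implementation you're test-driving, so these start out red.)

public interface IAuthenticationManager
{
    bool Authenticate(string identifier, string secret);
    void ChangeSecret(string identifier, string oldSecret, string newSecret);
}

[TestFixture]
public class AuthenticationManagerTests
{
    [Test]
    public void Correct_Secret_Allows_Access()
    {
        var auth = CreateManagerWithUser("alice", "p4ssword");
        Assert.IsTrue(auth.Authenticate("alice", "p4ssword"));
    }

    [Test]
    public void Incorrect_Secret_Forbids_Access()
    {
        var auth = CreateManagerWithUser("alice", "p4ssword");
        Assert.IsFalse(auth.Authenticate("alice", "wrong"));
    }

    [Test]
    public void Only_New_Secret_Allows_Access_After_Change()
    {
        var auth = CreateManagerWithUser("alice", "p4ssword");
        auth.ChangeSecret("alice", "p4ssword", "n3w-secret");
        Assert.IsFalse(auth.Authenticate("alice", "p4ssword"));
        Assert.IsTrue(auth.Authenticate("alice", "n3w-secret"));
    }

    // Factory kept separate so the tests don't care how the manager is built
    private static IAuthenticationManager CreateManagerWithUser(string id, string secret)
    {
        throw new NotImplementedException("construct the implementation under test here");
    }
}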
You can, of course, take this same approach for containers. But I think you'll find it easier to "get it" if you start from a user/business problem, rather than an implementation problem.
Unit tests that verify a specific implementation ("Do we have a fence post error here?") have value. The process for creating those is much more like "guess a bug, write a test to check for the bug, react if the test fails". These tests tend not to contribute to your design, though - you're much more likely to be cloning a code block and changing some inputs. Often, when unit tests follow implementation in this way, they are difficult to write and have large startup costs ("why do I need to load three libraries and start a remote web server to test a fencepost error in my for loop?").
Recommended reading: Freeman & Pryce, Growing Object-Oriented Software, Guided by Tests.
Since you're using C#, I'll assume that NUnit is your test framework. In that case, you have a range of Assert[..] statements at your disposal.
With respect to the specifics of your code: I wouldn't reassign the IDs, or change the make-up of the remaining Fruit objects in any way, when manipulating the list. If you need the ID to keep track of the object's position in the list, use .IndexOf() instead.
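For instance (a sketch; fruitList and fruit stand for whatever list and element your manager holds):

// Derive an object's position from the list instead of storing it in Id:
int position = fruitList.IndexOf(fruit); // -1 if the fruit is no longer present

// Deleting then needs no reordering pass at all:
fruitList.Remove(fruit);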
With TDD, I find that writing the test first is often kind of hard to do -- I end up writing the code first (code, or a string of hacks, that is). A good trick then is to take that "code" and use it as the test. Then write your actual code again, slightly differently. This way you will have two different pieces of code which accomplish the same thing -- less chance of making the same mistake in production and test code. Also, having to come up with a second solution to the same problem may show you weaknesses in your original approach and lead to better code.
You will never be certain that your unit tests cover all eventualities, so it's more or less your personal judgement how extensively you test, and what exactly. Your unit tests should at least cover the border cases, which you're not doing there. What happens when you try to delete an Apple with an invalid ID? What happens if you have an empty list? What if you delete the first/last item? Etc.
In general, I don't see much point in testing a single special case as you do above. Instead I always try to run a bunch of tests, which in your example suggests a slightly different approach:
First, write a checker method (a sketch follows below). You can do this as soon as you know that you will have a list of fruits in which all fruits have successive IDs (it's like testing whether a list is sorted). No deletion code has to be written for that, plus you can later reuse it, e.g. when unit-testing insertion code.
Then, create a bunch of different (maybe random) test lists (empty, average-sized, large). This also requires no prior deletion code.
Finally, run specific deletions against each of the test lists (delete with an invalid ID, delete ID 1, delete the last ID, delete a random ID) and check the result with your checker method. At this point you should at least know the interface of your deletion method, but it does not need to have been written already.
@Update with respect to comment: The checker method is more of a consistency check on the data structure. In your example, all fruits in the list have successive IDs, so that's what is checked. If you had a DAG structure, you might want to check its acyclicity, etc.
Testing whether deletion of ID x worked depends on whether it was present in the list at all, and on whether your application distinguishes a failed deletion due to an invalid ID from a successful one (either way there is no such ID left in the end). Clearly, you also want to verify that a deleted ID is no longer present in the list (though that is not part of what I meant by the checker method; I thought it obvious enough to omit).
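To make the idea concrete, here's a minimal sketch of such a checker for this example, assuming the successive-ID invariant described above (IDs run 1..n with no gaps):

// Consistency check: true when the fruits carry successive IDs 1..n.
// This is the invariant the deletion (and later insertion) code must preserve.
private static bool HasSuccessiveIds(IList<Fruit> fruits)
{
    for (var i = 0; i < fruits.Count; i++)
    {
        if (fruits[i].Id != i + 1)
            return false;
    }
    return true; // vacuously true for an empty list
}

Each deletion test then reduces to Assert.IsTrue(HasSuccessiveIds(list)) plus a check that the deleted fruit itself is gone.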
[Test]
public void DeleteFruit()
{
    var fruitList = CreateFruitList();
    var fm = new FruitManager(fruitList);
    var resultList = fm.DeleteFruit(2);
    // After deleting ID 2, the fruit that was third (fruitList[2])
    // should now be found under ID 2
    Assert.AreEqual(fruitList[2], fm.Find(2));
}

private static List<Fruit> CreateFruitList()
{
    // Build test data (values taken from the example table)
    var f01 = new Fruit { Name = "Apple", Id = 1, Color = "Red", Cost = 1.2 };
    var f02 = new Fruit { Name = "Apple", Id = 2, Color = "Green", Cost = 1.4 };
    var f03 = new Fruit { Name = "Apple", Id = 3, Color = "HalfHalf", Cost = 1.5 };
    return new List<Fruit> { f01, f02, f03 };
}
You might try some dependency injection of the fruit list. The FruitManager object is a CRUD store, so if you have a delete operation you need a retrieve operation.
Concerning the reordering: do you want it to happen automatically, or do you want a re-sort operation? "Automatically" can also mean either as soon as the delete operation occurs, or lazily, only when retrieving. That is an implementation detail. There is a lot more that can be said about this; a good start on getting a handle on this specific example would be to use Design by Contract.
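As a rough illustration of that contract style (this sketch picks the eager variant purely for the example, uses Debug.Assert to stand in for a real contracts library, and assumes a fruits field holding the manager's List<Fruit>):

// requires: using System.Diagnostics; using System.Linq;
public void Delete(int id)
{
    // Precondition: the id must refer to a fruit currently in the list
    Debug.Assert(fruits.Any(f => f.Id == id), "Delete: unknown fruit id");

    fruits.RemoveAll(f => f.Id == id);

    // Eager variant: restore the successive-ID invariant immediately
    for (var i = 0; i < fruits.Count; i++)
    {
        fruits[i].Id = i + 1;
    }

    // Postcondition: IDs run 1..n again
    Debug.Assert(fruits.Select((f, i) => f.Id == i + 1).All(ok => ok),
        "Delete: ids are not successive afterwards");
}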
[Edit 1a]
Also you might want to consider why you're testing for specific implementations of Fruit. FruitManager should be managing an abstract concept called Fruit. You need to watch out for premature implementation details unless you're looking to go the route of using DTOs, but the problem with this is that Fruit eventually might change from an object with getters to an object with actual behavior. Now not only will your tests for Fruit fail, but FruitManager will fail!
Start with the interface, have a skeleton concrete implementation. For each method / property / event / constructor, there is expected behaviour. Start with a specification for the first behaviour, and complete it:
[Specification] is the same as [TestFixture]; [It] is the same as [Test].
[Specification]
public class When_fruit_manager_has_delete_called_with_existing_fruit : FruitManagerSpecification
{
    private IList<IFruit> _fruits;
    private IFruit _fruitToDelete;

    [It]
    public void Should_remove_the_expected_fruit()
    {
        Assert.Inconclusive("Please implement");
    }

    [It]
    public void Should_not_remove_any_other_fruit()
    {
        Assert.Inconclusive("Please implement");
    }

    [It]
    public void Should_reorder_the_ids_of_the_remaining_fruit()
    {
        Assert.Inconclusive("Please implement");
    }

    /// <summary>
    /// Set up the SUT before creation
    /// </summary>
    public override void GivenThat()
    {
        _fruits = new List<IFruit>();
        3.Times(() => _fruits.Add(Mock<IFruit>()));
        _fruitToDelete = _fruits[1];

        // This fruit list is injected into the SUT
        Dep<IEnumerable<IFruit>>()
            .Stub(f => ((IEnumerable)f).GetEnumerator())
            .Return(_fruits.GetEnumerator())
            .WhenCalled(mi => mi.ReturnValue = _fruits.GetEnumerator());
    }

    /// <summary>
    /// Delete a fruit
    /// </summary>
    public override void WhenIRun()
    {
        Sut.Delete(_fruitToDelete);
    }
}
The above specification is just ad hoc and INCOMPLETE, but this is a nice behaviour-driven TDD way of approaching each unit / specification.
Here would be part of the unimplemented SUT when you first start working on it:
public interface IFruitManager
{
    IEnumerable<IFruit> Fruits { get; }
    void Delete(IFruit fruit);
}

public class FruitManager : IFruitManager
{
    public FruitManager(IEnumerable<IFruit> fruits)
    {
        // not implemented
    }

    public IEnumerable<IFruit> Fruits { get; private set; }

    public void Delete(IFruit fruit)
    {
        // not implemented
    }
}
So as you can see, no real code is written. If you want to complete that first "When_..." specification, you actually first have to write a [ConstructorSpecification] When_fruit_manager_is_injected_with_fruit(), because the injected fruits are not yet being assigned to the Fruits property.
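That constructor specification might look something like this (same invented Dep/Sut helpers as the ad hoc specification above, so again just a sketch):

[ConstructorSpecification]
public class When_fruit_manager_is_injected_with_fruit : FruitManagerSpecification
{
    [It]
    public void Should_expose_the_injected_fruit_via_the_Fruits_property()
    {
        // The fruits handed to the constructor should come back out unchanged
        Assert.AreSame(Dep<IEnumerable<IFruit>>(), Sut.Fruits);
    }
}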
So voila, no REAL code is necessary to implement at first... the only thing needed now is discipline.
One thing I love about this is that if you need additional classes during implementation of the current SUT, you don't have to implement those before you implement the FruitManager, because you can just use mocks, for example an ISomeDependencyNeeded... and when you complete FruitManager you can then go and work on the SomeDependencyNeeded class. Pretty wicked.