Splitting a test into a set of smaller tests
I want to be able to split a big test into a set of smaller tests, such that when the smaller tests pass they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, require less effort, and are less fragile. I would like to know if there are test design patterns or verification tools that can help me achieve this test splitting in a robust way.

I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test.

An example of what I am aiming at:

//Class under test
class A {

  public void setB(B b){ this.b = b; }

  public Output process(Input i){
    return b.process(doMyProcessing(i));
  }

  private InputFromA doMyProcessing(Input i){ ..  }

  ..

}

//Another class under test
class B {

   public Output process(InputFromA i){ .. }

  ..

}

//The Big Test
@Test
public void theBigTest(){
 A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive

 Input i = createInput();

 Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive

 assertEquals(expectedOutput(), o);
}

//The split tests

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest1(){
  // this method is a bit too long but it's just an example..
  Input i = createInput();
  InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
  Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow

  B b = mock(B.class);
  when(b.process(x)).thenReturn(expected);

  A classUnderTest = createInstanceOfClassA();
  classUnderTest.setB(b);

  Output o = classUnderTest.process(i);

  assertEquals(expected, o);
  verify(b).process(x);
  verifyNoMoreInteractions(b);
}

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest2(){
  InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
  Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow

  B classUnderTest = createInstanceOfClassB();

  Output o = classUnderTest.process(x);

  assertEquals(expected, o);
}
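
To illustrate: the @PartlyDefines annotation above is purely hypothetical (I don't know of a tool that provides it). If one existed, the annotation itself might be declared like this:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

//Hypothetical marker linking a small test to the big test it partly covers;
//RUNTIME retention so a verification tool could discover the link reflectively
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface PartlyDefines {
  String value(); // the name of the big test, e.g. "theBigTest"
}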


The first suggestion I'll make is to refactor your tests on red (failing): temporarily break your production code so that the big test fails, refactor it into the smaller tests, and check that they fail too. This way, you know the tests are still valid.

One common pattern is to use a separate test fixture per collection of "big" tests. You don't have to stick to the "all tests for one class in one test class" pattern. If a set of tests are related to each other, but unrelated to another set of tests, then put them in their own class.

The biggest advantage of using a separate class to hold the individual small tests for the big test is that you can share setup and tear-down methods. In your case, I would move the lines you have commented with:

// this should be the same in both tests and it should be ensured somehow

to the setup method (in JUnit, a method annotated with @Before). If you have some unusually expensive setup that needs to be done, most xUnit testing frameworks have a way to define a setup method that runs once before all of the tests. In JUnit, this is a public static void method that has the @BeforeClass annotation.
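
As a sketch, reusing the names from the example below (note that JUnit 4 requires the @BeforeClass method to be public static, and any fields it initializes must be static too):

//Runs once before all tests in the class; use when setup is very expensive
private static InputFromA x;
private static Output expected;

@BeforeClass
public static void setUpOnce() throws Exception {
  x = expectedInputFromA();
  expected = expectedOutput();
}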

If the test data is immutable, I tend to define the variables as constants.
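
For example (assuming the helper methods can be made static):

//Immutable test data shared as constants across all tests in the class
private static final InputFromA X = expectedInputFromA();
private static final Output EXPECTED = expectedOutput();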

Putting all this together, you might have something like:

public class TheBigTest {

    // If InputFromA is immutable, it could be declared as a constant
    private InputFromA x;
    // If Output is immutable, it could be declared as a constant
    private Output expected;

    // You could use
    // @BeforeClass public static void setupExpectations()
    // instead if it is very expensive to set up the data
    @Before
    public void setUpExpectations() throws Exception {
      x = expectedInputFromA();
      expected = expectedOutput();
    }

    @Test
    public void smallerTest1(){
      // this method is a bit too long but it's just an example..
      Input i = createInput();

      B b = mock(B.class);
      when(b.process(x)).thenReturn(expected);

      A classUnderTest = createInstanceOfClassA();
      classUnderTest.setB(b);

      Output o = classUnderTest.process(i);

      assertEquals(expected, o);
      verify(b).process(x);
      verifyNoMoreInteractions(b);
    }

    @Test
    public void smallerTest2(){
      B classUnderTest = createInstanceOfClassB();

      Output o = classUnderTest.process(x);

      assertEquals(expected, o);
    }

}


All I can suggest is the book xUnit Test Patterns. If there is a solution, it should be in there.


theBigTest is missing the dependency on B. Also, smallerTest1 mocks the B dependency. In smallerTest2 you should mock InputFromA.
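
For example, smallerTest2 might stub the input instead of building a real one. This is only a sketch: it assumes Mockito as in the question, and the accessors B reads from InputFromA are unknown here, so the stubbing line is left indicative:

InputFromA x = mock(InputFromA.class);
//when(x.someAccessor()).thenReturn(..); // stub whatever B actually reads
B classUnderTest = createInstanceOfClassB();
Output o = classUnderTest.process(x);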

Why did you create a dependency graph like you did?

A takes a B; then, when A::process receives an Input, the resulting InputFromA is post-processed in B.

Keep the big test and refactor A and B to change the dependency mapping.
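
One way to read that advice (my sketch, not necessarily what the answerer had in mind) is to pull the composition out of A, so a thin composer owns the dependency mapping and each class can be tested alone while theBigTest exercises the composer:

//A no longer knows about B
class A {
  public InputFromA process(Input i){ return doMyProcessing(i); }
  private InputFromA doMyProcessing(Input i){ .. }
}

//The composer owns the A-to-B mapping
class Pipeline {
  private final A a;
  private final B b;
  public Pipeline(A a, B b){ this.a = a; this.b = b; }
  public Output process(Input i){ return b.process(a.process(i)); }
}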

[EDIT] in response to remarks.

@mkorpela, my point is that looking at the code and its dependencies is how you start to get an idea of how to create the smaller tests. A has a dependency on B. In order for it to complete its process() it must use B's process(). Because of this (B consumes the InputFromA that A produces), B in turn has a dependency on A.
