<100% Test coverage - best practices in selecting test areas


Suppose you're working on a project and the time/money budget does not allow 100% coverage of all code/paths.

It then follows that some critical subset of your code needs to be tested. Clearly a "gut-check" approach can be used, where intuition and manual analysis produce test coverage that feels "ok".

However, I'm presuming there are best practices/approaches/processes that identify critical elements up to some threshold, so you can focus your testing effort on those blocks.

For example, one popular process for identifying failures in manufacturing is Failure Mode and Effects Analysis (FMEA). I'm looking for an analogous process (or processes) for identifying critical blocks to test in software.


100% code coverage is not a desirable goal. See this blog for some reasons.

My best practice is to derive test cases from use cases. Create concrete traceability (I use a UML tool, but a spreadsheet will do as well) between the use cases your system is supposed to implement and the test cases that prove it works.

Explicitly identify the most critical use cases. Now look at the test cases they trace to. Do you have many test cases for the critical use cases? Do they cover all aspects of the use case? Do they cover negative and exception cases?
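
As a minimal sketch of that audit (the use-case names, test IDs, and the three-test threshold are all made up; in practice the data would be exported from your UML tool or spreadsheet):

using System;
using System.Collections.Generic;
using System.Linq;

class TraceabilityAudit
{
    static void Main()
    {
        // Use case -> test cases that trace to it.
        var traceability = new Dictionary<string, List<string>>
        {
            ["UC-01 Transfer funds [critical]"] = new List<string> { "TC-101", "TC-102", "TC-103" },
            ["UC-02 View statement [critical]"] = new List<string> { "TC-201" },
            ["UC-03 Change avatar"]             = new List<string>(),
        };

        // Critical use cases backed by fewer than three test cases get flagged
        // for review: are the negative and exception paths really covered?
        var thin = traceability.Where(kv => kv.Key.Contains("[critical]") && kv.Value.Count < 3);
        foreach (var kv in thin)
            Console.WriteLine($"{kv.Key}: only {kv.Value.Count} test case(s)");
    }
}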

I have found that to be the best formula (and best use of the team's time) for ensuring good coverage.

EDIT:

Simple, contrived example of why 100% code coverage does not guarantee that you test 100% of the cases. Say CriticalProcess() is supposed to append text to a file but, due to a bug, overwrites it instead:

using System.IO;
using NUnit.Framework;

[Test]
public void Cover100Percent()
{
    CriticalProcess(true, false);
    Assert.AreEqual("A is true", File.ReadAllText("TestFile.txt"));

    CriticalProcess(false, true);
    Assert.AreEqual("B is true", File.ReadAllText("TestFile.txt"));

    // You could leave out this case, still have 100% code coverage,
    // and never learn that the app is broken.
    CriticalProcess(true, true);
    Assert.AreEqual("A is trueB is true", File.ReadAllText("TestFile.txt"));
}

void CriticalProcess(bool a, bool b)
{
    if (a)
    {
        // Bug: this should be File.AppendAllText, but File.WriteAllText
        // overwrites, so the third test case above fails.
        File.WriteAllText("TestFile.txt", "A is true");
    }

    if (b)
    {
        File.WriteAllText("TestFile.txt", "B is true");
    }
}


Unless you're doing greenfield development using TDD, you are unlikely to get (or want) 100% test coverage. Code coverage is more of a guideline, something to ask "what haven't I tested?"

You may want to look at other metrics, such as cyclomatic complexity. Find the complex areas of your code and test those (then refactor to simplify).
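
To make that metric concrete (this illustration is an assumption added here, not part of the original answer): cyclomatic complexity is roughly the number of decision points plus one, and it gives a lower bound on the number of test cases needed to exercise every independent path. For example:

// Cyclomatic complexity 4: three decision points (while, if, &&) plus one.
// At least four test cases are needed to cover all independent paths,
// which is why high-complexity code deserves the testing budget first.
int SumOfPositives(int[] items, bool strict)
{
    int total = 0;
    int i = 0;
    while (i < items.Length)         // decision point 1
    {
        if (items[i] > 0 && strict)  // decision points 2 and 3
        {
            total += items[i];
        }
        i++;
    }
    return total;
}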


There are three main factors you should be aware of:

  • Important features - you should know which parts are more critical. Ask yourself: "How screwed would I (or my customer) be if there's a bug in this component/code snippet?" Your customer can probably help you determine these priorities. Things that deal directly with money tend to fall into this category.
  • Frequently used features - the most common use cases should be as bug-free as possible. Nobody cares if there's a bug in a part of the system no one uses.
  • Most complex features - the developers usually have a good idea of which parts of the code are more likely to contain bugs. Give those special attention.

If you have this information, it probably won't be hard to choose how to distribute your testing resources.
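
As a rough sketch of how those three factors could be combined into a single ordering (the component names, the 1-5 scales, and the multiplicative score are all assumptions made for this example):

using System;
using System.Linq;

class TestBudgetPlanner
{
    record Component(string Name, int Importance, int Usage, int Complexity);

    static void Main()
    {
        var components = new[]
        {
            new Component("Payment processing", 5, 4, 4),
            new Component("Login",              4, 5, 2),
            new Component("Report export",      2, 2, 3),
        };

        // Spend testing resources from the top of this list downwards.
        foreach (var c in components.OrderByDescending(x => x.Importance * x.Usage * x.Complexity))
            Console.WriteLine($"{c.Name}: score {c.Importance * c.Usage * c.Complexity}");
    }
}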


False sense of security: Always be aware that test coverage can create a false sense of security. A great article about this can be found on the disco blog. Relying on "green" indicators alone can let untested paths slip past you.

Good indicator for untested paths: On the other hand, missing test coverage (usually displayed in red) is a great indicator of paths that are not covered. These are easy to spot, so check them first and decide whether you want to add coverage there or not.

Code-centric approach to identifying critical elements: There is great tooling support available to help you find the mess and possible gotchas in your code. Have a look at the IntelliJ IDE and its code analysis features, or at FindBugs, Checkstyle and PMD. Sonar is a great free tool that combines these static code analyzers.

Feature-centric approach to identifying critical elements: Evaluate your software and break it down into features. Ask yourself questions like: "Which features are most important and should be most reliable? Where do we have to take care of the correctness of results? Where would a bug or failure be most destructive to the software?"


Maybe the best hint that a module is insufficiently covered is bug reports against it. Any module you're editing time and again should be well-covered. But cyclomatic complexity correlates pretty well with bug frequency, too - and you can measure that before the bugs show up!


If you have a legacy code-base, a good place to start is:

  • Add a unit test for every bug that you find and fix. The unit test should reproduce the bug; then you fix the code, use the unit test to verify that it is fixed, and keep it to be sure it doesn't break again in future for any reason (see the sketch below).

  • Where possible, add tests to major high-level components so that many low-level breakages will still cause a unit test failure (e.g. instead of testing every database access routine independently, add one test that creates a database, adds 100 users, deletes 50 of them, verifies the result, and drops the database). You won't easily see where the failure is (you'll have to debug to work out why it failed), but at least you know you have a test that exercises the overall database system and will warn you quickly if anything major goes wrong in that area of the code. Once you have the higher-level areas covered, you can worry about delving deeper.

  • Add unit tests for your new code, or when you modify any code.

Over time, this in itself will help you build up coverage in the more important places.
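
For the first point in the list above, a minimal sketch of what such a bug-pinning test can look like (NUnit-style; the bug number, InvoiceParser, and its API are hypothetical):

using NUnit.Framework;

[TestFixture]
public class InvoiceParserRegressionTests
{
    // Reproduces bug #1234: the total came out wrong for invoices with no
    // line items. This test failed before the fix and now guards against
    // the bug ever silently returning.
    [Test]
    public void Total_IsZero_WhenInvoiceHasNoLineItems()
    {
        var invoice = InvoiceParser.Parse("{ \"lines\": [] }");
        Assert.AreEqual(0m, invoice.Total);
    }
}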

(Bear in mind that if your codebase is working code that has been working for years, then for the most part you don't "need" unit tests to prove that it works. If you just add unit tests to everything, they will pretty much all pass and therefore won't tell you much. Of course, over time, as your coverage grows, you may start to detect regressions from those tests, and you will find bugs through the process of adding unit tests for previously untested code. But if you just slog through the code blindly adding unit tests for everything, you'll get a very poor cost-per-bug-fixed ratio.)


To all the 90%-coverage testers:

The problem with doing so is that the 10% of code that is hard to test is also the non-trivial code that contains 90% of the bugs! This is the conclusion I have reached empirically after many years of TDD.

And this is a pretty straightforward conclusion after all: that 10% of code is hard to test precisely because it reflects a tricky business problem, a tricky design flaw, or both, which are the exact reasons that often lead to buggy code.

But also:

  • 100% covered code whose coverage later decreases below 100% often pinpoints a bug, or at least a flaw.
  • 100% covered code used in conjunction with contracts is the ultimate weapon for getting close to bug-free code. Code Contracts and automated testing are pretty much the same thing (a sketch follows this list).
  • When a bug is discovered in 100% covered code, it is easier to fix. Since the code responsible for the bug is already covered by tests, it shouldn't be hard to write new tests to cover the bug fix.
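
As a sketch of the contracts point, using .NET's System.Diagnostics.Contracts (note that enforcing these at runtime requires the Code Contracts binary rewriter, and the Account class here is invented for the example):

using System.Diagnostics.Contracts;

public class Account
{
    public decimal Balance { get; private set; }

    public void Withdraw(decimal amount)
    {
        // Precondition: no non-positive amounts, no overdrafts.
        Contract.Requires(amount > 0 && amount <= Balance);
        // Postcondition: the balance shrinks by exactly the amount withdrawn.
        Contract.Ensures(Balance == Contract.OldValue(Balance) - amount);

        Balance -= amount;
    }
}

A test suite that drives this code to 100% coverage also exercises the contracts on every path, which is what is meant above by the two being pretty much the same thing.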


It depends entirely on the type of software you are developing. If it is remotely accessible, then security testing should be the highest priority. For web applications there are automated scanners such as Sitewatch or Wapiti. There are also tools that help generate unit tests for SOAP services.

