Can Python unittest automatically reattempt a failed testcase / suite?

Short Question

Is it possible to re-attempt a unittest upon failure / error N number of times, OR based on a predefined function (like a user's prompt)?

Background

To avoid retyping an entire page of system information, please see my SO questions on passing data to unittests and on auto test discovery for more details on my physical setup.

Regarding the question at hand, I know I can do it by rewriting my test cases to loop until they get the required results (see pseudo code below) and then assert based on that. However, I'd rather not go and rewrite hundreds of test cases.

I know there will be someone who will point out that if a unittest fails, it should just fail and be done. I would agree with this 100% if human error could be removed. This is a physical system that I am connected to, and there are many times when the leads from the digital multimeter are not connected well, so a test can fail because of a loose connection.

Pseudo Workaround

class Suite_VoltageRegulator(unittest.TestCase):
    def test_voltage_5v_regulator(self):
        keep_running = 'y'
        error_detected = False
        test_status = False
        results = None

        print '\n'
        # Display user test configuration
        msg = \
            '1) Connect Front DMM GND(black) to the TP_COM\n' +\
            '2) Connect Front DMM POS(red) to the TP-A\n' +\
            '3) Ensure the DMM terminal button indicates FRONT'

        continue_test = prompt.Prompt_Ok_Cancel('User Action Required!', msg)

        if not continue_test:
            self.fail('User Canceled Test!')

        while keep_running == 'y':
            error_detected = False
            try:
                # Run the test
                results = measure_voltage_from_system()

                # Analyze results
                test_status = pf.value_within_range(results, REGULATOR_5V_LOW, REGULATOR_5V_HIGH)

            except Exception:
                error_detected = True

            # Ask the user whether failed / errored cards should be retested
            if error_detected:
                keep_running = raw_input('Test ERROR: Retry Test? (y/n):')
            elif not test_status:
                keep_running = raw_input('Test FAILED: Retry Test? (y/n):')
            else:
                keep_running = 'n'

        # Inform the user of the test results
        self.assertTrue(test_status, 'FAIL: 5V Regulator (' + str(results) + ') Out of Range!')

EDIT 8/22/11 3:30 PM CST

I do know that I am violating the definition of a unit test in this use case. These issues / comments are also addressed in a few of my other SO questions. One of the design goals we chose was to leverage an existing framework to avoid having to "reinvent the wheel". The fact that we chose Python's unittest was not based on its definition, but on its flexibility and robustness in executing and displaying a series of tests.

Going into this project, I knew there would be some things that would require workarounds because this module was not intended for this use. At this point in time, I still believe that these workarounds have been easier / cheaper than writing my own test runner.

EDIT 8/22/11 5:22 PM CST

I am not dead set on using unittest for future projects in this manner; however, I am fairly set on using an existing framework to avoid duplicating someone else's efforts. A comment below is an example of this: pycopia-QA appears to be a good fit for this project. The only drawback for my current project is that I have written hundreds of unittest test cases; if I were to rewrite them, it would be a very large undertaking (noting that it would also be a non-funded effort).

EDIT 8/24/11 11:00 AM CST

It may make sense for future projects to switch to a more tailored framework for this type of testing. However, I still have projects running with unittest, so a solution using only unittest (or nose + a third-party add-on) is still needed.


4 years after the original question - I hope someone still cares :) Here's my solution for doing this on top of unittest. It's kind of ugly and relies on the implementation of the TestCase base class, but it works.

class MyTest(unittest.TestCase):
    ###
    ### Insert test methods here
    ###

    # Wrapping each test method so that a retry would take place.  
    def run(self, result=None):
        self.origTestMethodName = self._testMethodName
        self._testMethodName = "_testRetryWrapper"
        super(MyTest, self).run(result)
        self._testMethodName = self.origTestMethodName

    def _testRetryWrapper(self):
        testMethod = getattr(self, self.origTestMethodName)
        retryAttemptsLeft = settings.testRetryCount

        while True:
            try:
                testMethod()
                break
            except:
                if retryAttemptsLeft == 0:
                    raise
                else:
                    retryAttemptsLeft = retryAttemptsLeft - 1


The Python unittest module is intended for writing Python unit tests. ;-) It's not so well suited for other kinds of testing. The nose package is also a unit test framework.

I have written several testing frameworks in Python that are designed to test systems. The systems can be distributed, and automated with various interfaces. Two are open-source.

The Pycopia project is a collection of Python modules that runs on Linux. It is provided as a collection of namespace subpackages, one of which is the QA package that is a testing framework.

A subset-fork of this is named powerdroid, and it is intended to control instrumentation for taking physical measurements (such as voltage, current, etc.) via RS-232, IEEE-488, etc. It provides an alternative Python interface to the linux-gpib project.

So you may start with these, rather than "reinvent the wheel", if you want. You may not have to throw away existing tests, since the framework can invoke any subprocess, so you can start your existing tests with it. This also runs on Linux.


I have improved Shlomi Király's answer slightly so that it doesn't conflict with the unittest framework and skipping test cases still works:

class MyTest(unittest.TestCase):

    # Enable retries if specified in the configuration file by attribute testRetryCount
    def run(self, result=None):
        self.origTestMethodName = self._testMethodName
        retryAttemptsLeft = configuration.testRetryCount

        failuresBefore = len(result.failures)  # how many failures were recorded before this test started
        errorsBefore = len(result.errors)      # how many errors were recorded before this test started

        super(MyTest, self).run(result)
        if failuresBefore < len(result.failures):  # The last run failed
            while True:
                if retryAttemptsLeft == 0:
                    self.logger.error("Test failed after " + str(configuration.testRetryCount + 1) + " attempts")
                    break
                else:
                    result.failures.pop(-1)  # Remove the last failure result
                    self.logger.error("Test failed - retryAttemptsLeft: " + str(retryAttemptsLeft))
                    retryAttemptsLeft = retryAttemptsLeft - 1

                    super(MyTest, self).run(result)
                    if failuresBefore == len(result.failures):  # The retry passed; stop retrying
                        break

        elif errorsBefore < len(result.errors):  # The last run ended with an error
            while True:
                if retryAttemptsLeft == 0:
                    self.logger.error("Test error after " + str(configuration.testRetryCount + 1) + " attempts")
                    break
                else:
                    result.errors.pop(-1)  # Remove the last error result
                    self.logger.error("Test error - retryAttemptsLeft: " + str(retryAttemptsLeft))
                    retryAttemptsLeft = retryAttemptsLeft - 1

                    super(MyTest, self).run(result)
                    if errorsBefore == len(result.errors):  # The retry passed; stop retrying
                        break


I think that you should write your own specialised framework - you might as well model it on Python's unittest, but this is clearly not a unit test. You will need to make some changes, e.g., to allow each test to be individually skipped, retried, or reverified - and then, likely, to have an option to revisit and retest failed tests, with statistics at the end showing how many tests passed the first time, how long they took, which test took the longest, how many retries they needed, and so on.
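
A minimal sketch of what such a runner could look like on top of unittest follows. This is not part of any existing framework: run_with_retries, iter_tests, and the max_attempts default are illustrative names chosen here, and the discovery call assumes the existing test modules are importable from the current directory.

import time
import unittest


def iter_tests(suite):
    # Yield individual TestCase instances from a (possibly nested) TestSuite
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for test in iter_tests(item):
                yield test
        else:
            yield item


def run_with_retries(suite, max_attempts=3):
    # Run each test, retrying failed ones, and print simple statistics at the end
    stats = []
    for test in iter_tests(suite):
        start = time.time()
        attempts = 0
        passed = False
        while attempts < max_attempts and not passed:
            attempts += 1
            result = unittest.TestResult()
            test.run(result)  # setUp/tearDown run on every attempt
            passed = result.wasSuccessful()
        stats.append((test.id(), passed, attempts, time.time() - start))

    # End-of-run report: pass/fail, attempts used, and duration per test
    first_time = sum(1 for _, p, a, _ in stats if p and a == 1)
    for test_id, passed, attempts, seconds in stats:
        print("%-60s %s  attempts=%d  %.2fs" % (test_id, "PASS" if passed else "FAIL", attempts, seconds))
    print("%d of %d tests passed on the first attempt" % (first_time, len(stats)))


if __name__ == "__main__":
    run_with_retries(unittest.TestLoader().discover("."))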


Here is a slightly dirty, yet very simple solution. Use unittest to run the test along with other tests and to detect a potential crash. But instead of using the built-in assertions, write your own checks with if and raise Exception. Then retry with tenacity. For example:

import unittest

from tenacity import retry


class MyTestCase(unittest.TestCase):
    @retry
    def test_with_retry(self):
        # a and b stand in for whatever values the test actually compares
        if not a == b:
            raise Exception("The test has failed")


It turns out there is a very simple (yet private) wrapper for calling the test method, and it's quite easy to override yourself and easy to understand.

But if you don't mind adding a decorator to all test methods, I would rather go with a decorator as shown in the other answer, because it doesn't involve overriding private methods.

Better than retrying tests until they work would, of course, be to have a really good tearDown method that makes sure tests are not influencing each other, proper tests that are deterministic, and code that doesn't suffer from a million race conditions.

class Test(unittest.TestCase):
    # _callTestMethod is a private hook of unittest.TestCase (available since Python 3.8)
    def _callTestMethod(self, method):
        attempts = 0
        max_attempts = 2
        while True:
            attempts += 1
            try:
                method()
                break
            except Exception as e:
                if attempts == max_attempts:
                    raise e

            # try again with a fresh fixture
            self.tearDown()
            self.setUp()

    def test_foo(self):
        # your test goes here, it will be repeated automatically if it fails
        pass


It is useful to distinguish between two cases:

  1. The test is flaky. There is a way to write a robust test, but we have not done it (yet), and we may not exactly understand why the test is flaky; therefore we retry (the entire test).
  2. We test an action that needs to be awaited, and there is no way to be reliably informed when it has completed; therefore we retry (a single check in the test, as sketched at the end of this answer).

For the first case, I am using

import functools
import logging


def flakey(issue: str, repeats: int = 3):
    """Decorator that marks a test as flakey (flaky).
    If applied, executes the test up to `repeats` times, marking it as failed only if it fails each time.
    Note that use of this decorator is generally discouraged - tests should pass reliably when their assertions are upheld.
    Example usage:
        @flakey(issue='#123')
        def test_feature_in_fragile_manner():
            self.assertTrue(...)
    """
    del issue  # unused; kept only so callers must reference a tracking issue

    def decorator(f):
        @functools.wraps(f)
        def inner(*args, **kwargs):
            # Python 3 clears the exception upon leaving the `except` block
            # https://docs.python.org/3/reference/compound_stmts.html#the-try-statement
            preserved_exc = None
            for _ in range(repeats):
                try:
                    return f(*args, **kwargs)
                except AssertionError as exc:
                    preserved_exc = exc
                    logging.info("Test marked as flaky has failed: %s", exc)
            raise AssertionError("Flaky test has failed too many times") from preserved_exc

        return inner

    return decorator

The main features of this solution are:

  1. a comment that discourages programmers from using this (borrowed from https://bazel.build/reference/be/common-definitions#test.flaky)
  2. a mandatory parameter for entering the issue ID (the task to fix the flaky test)
  3. no unnecessary sleeps, exponential retries, or jitter
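
For the second case, a minimal sketch of the kind of single-check retry described above might look like the following. The wait_until name, the timeout and interval defaults, and device.is_ready() in the usage comment are illustrative assumptions, not an existing API.

import time


def wait_until(check, timeout=10.0, interval=0.5):
    # Repeatedly call `check` until it stops raising AssertionError,
    # re-raising the last AssertionError once `timeout` seconds have elapsed.
    # This retries a single check inside a test rather than the whole test.
    deadline = time.monotonic() + timeout
    while True:
        try:
            return check()
        except AssertionError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)


# Usage inside a test, after triggering the awaited action:
#     wait_until(lambda: self.assertTrue(device.is_ready()))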