
I had a test case with unintended side-effects and it ruined my test results

tl;dr: I had some cache side-effects in my tests that made my tests fail depending on test order. Remember to call function.cache_clear() when you use @lru_cache on a function, even when that function is an instance method.

The problem

So like the good programmer that I am, I was recently writing tests for a Django project. One of my models has a method that calculates something that takes a while and I wanted to make sure that computation is only done once for each instance. Let's say our model looks like this:

class MyModel(models.Model):

    number = models.BigIntegerField()

    def compute(self):
        result = 1
        for i in range(1, self.number):
            result *= i
        return result

That computation is going to be the same every time we call it for a loaded instance, so let's cache it. There are several ways to do that: we could roll our own cache, make it a @cached_property, or use the built-in functools.lru_cache. I used the last one, because I wanted this to stay a regular method on the model instance.
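To see what @lru_cache actually does on a method, here is a minimal, Django-free sketch. FakeModel is a hypothetical stand-in for the model above; the class-level counter is only there to show that the expensive body runs once:

```python
from functools import lru_cache


class FakeModel:
    """Plain-Python stand-in for the Django model (hypothetical)."""

    calls = 0  # counts how often the expensive body actually runs

    def __init__(self, number):
        self.number = number

    @lru_cache(maxsize=None)
    def compute(self):
        FakeModel.calls += 1
        result = 1
        for i in range(1, self.number):
            result *= i
        return result


m = FakeModel(10)
assert m.compute() == m.compute()   # same result both times
assert FakeModel.calls == 1         # the body only ran once
```

One caveat worth knowing: the cache is attached to the function, not to the instance, and it holds a reference to self for every cache entry, so instances it has seen are kept alive for the lifetime of the cache.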

Afterwards I wrote tests using pytest and its fixture mechanism. Something along these lines:

@pytest.fixture
def instance():
    random_number = random.randrange(987654321, 98765432123456789)
    return MyModel.objects.create(number=random_number)


def test_max(instance):
    assert instance.id == instance.compute()    # <-- remember this line!


def test_min(instance):
    assert instance.compute() > 0

I executed the tests and voilà, it works. Nice!

A little while later, I worked some more on the tests and changed their order. I ran them again and now test_max failed: for some reason, instance.id != instance.compute() any more, even though the code clearly states that instance.compute() always returns the instance ID.

(Side note: of course, I didn't actually have this simplified example. My test was checking the output of a view function, so I had to wade through the view, the template and the model to find what was going wrong. My test output was worse than this, too: it showed data that was impossible to produce from the database, and the queries I ran while debugging confirmed that the data in the database was correct. It was just the output of one function that had changed.)


Finding the bug

So, I did what every good programmer does and inserted some print statements. These confirmed that my instance was correct, that all my connected foreign-key objects were correct, that my fixtures were correct and, finally, that the fault was in MyModel.compute, which returned wrong data when the tests were executed in a different order.

It took me a while to see what was happening here, and the culprit is @lru_cache, which contains state that is not reset after each test. What @lru_cache does is create a result cache for the given function that looks up results by the input arguments. I had assumed that, since this is an instance method, it would cache results for each instance, and it sort of does: it caches results keyed by the value of self. Each test gets a new self here, so we should be fine, right? Sadly, no. After each test, the Django test database is reset, including the ID sequences. So each test generates a new MyModel instance that has the same ID, and is thus identified as the same object by the lru_cache. I had a leak in my test isolation and saw the results of previous test cases whenever I called that method.
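The collision is easy to reproduce without Django. A hypothetical Record class that, like Django's models, compares and hashes by primary key shows how a fresh object with a recycled ID gets the old object's cached result:

```python
from functools import lru_cache


class Record:
    """Hypothetical stand-in: like Django models, it compares and hashes by pk."""

    def __init__(self, pk, number):
        self.pk = pk
        self.number = number

    def __eq__(self, other):
        return isinstance(other, Record) and self.pk == other.pk

    def __hash__(self):
        return hash(self.pk)

    @lru_cache(maxsize=None)
    def compute(self):
        # stand-in for the expensive computation
        return self.number * 2


first = Record(pk=1, number=10)
assert first.compute() == 20

# The next test run resets the database, so a brand-new object
# reuses pk=1 -- and therefore looks "equal" to the old cache key.
second = Record(pk=1, number=99)
assert second.compute() == 20    # stale result from `first`, not 198
```

Calling Record.compute.cache_clear() between the two objects would make the second call compute its own (correct) value.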

The solution

Once we know what the problem is, it is easy to solve. We can either switch to @cached_property, which does what I want (i.e. calculate the value once per instance and then never again), or reset the lru_cache after each test, which, luckily, is possible and very simple. I changed my fixture to this:

@pytest.fixture
def instance():
    random_number = random.randrange(987654321, 98765432123456789)
    instance = MyModel.objects.create(number=random_number)
    yield instance
    MyModel.compute.cache_clear()

And that solves our problem by resetting the cache after each test.
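For comparison, the @cached_property route mentioned above avoids the clean-up fixture entirely, because the cached value is stored on the instance itself rather than in a cache shared across instances. A rough sketch, again with a hypothetical pk-based class instead of a real Django model:

```python
from functools import cached_property


class Record:
    """Hypothetical stand-in that compares and hashes by pk, like a Django model."""

    def __init__(self, pk, number):
        self.pk = pk
        self.number = number

    def __eq__(self, other):
        return isinstance(other, Record) and self.pk == other.pk

    def __hash__(self):
        return hash(self.pk)

    @cached_property
    def compute(self):
        # the value lives in this instance's __dict__, not in a shared cache
        return self.number * 2


first = Record(pk=1, number=10)
assert first.compute == 20

second = Record(pk=1, number=99)   # same pk, but a fresh instance
assert second.compute == 198       # no stale value this time
```

Note that @cached_property turns the method into an attribute, so callers write instance.compute instead of instance.compute(), which is why I preferred keeping a regular method in this case.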

The lesson

So what did I learn?

There's state hiding everywhere, especially if I don't really think about what I'm doing. When I found this problem, I was annoyed at first because I thought there was a problem with my tests. But the tests actually showed me a problem with my program, so they worked brilliantly here. By now, I have eliminated that state (which could have impacted the actual program in subtle and serious ways), so I'm now much more confident in my program.

So, testing helps! Do it!