9.2. Verify Related Behaviors in TestCase Subclasses
The canonical way to write tests in Python is to use the unittest
built-in module. For example, say I have the following utility function
defined in utils.py that I would like to verify works correctly across a
variety of inputs:
# utils.py
def to_str(data):
    if isinstance(data, str):
        return data
    elif isinstance(data, bytes):
        return data.decode('utf-8')
    else:
        raise TypeError('Must supply str or bytes, '
                        'found: %r' % data)
To define tests, I create a second file named test_utils.py or
utils_test.py—the naming scheme you prefer is a style choice—that
contains tests for each behavior that I expect:
# utils_test.py
from unittest import TestCase, main
from utils import to_str

class UtilsTestCase(TestCase):
    def test_to_str_bytes(self):
        self.assertEqual('hello', to_str(b'hello'))

    def test_to_str_str(self):
        self.assertEqual('hello', to_str('hello'))

    def test_failing(self):
        self.assertEqual('incorrect', to_str('hello'))

if __name__ == '__main__':
    main()
Then, I run the test file using the Python command line. In this case,
two of the test methods pass and one fails, with a helpful error message
about what went wrong:
$ python3 utils_test.py
F..
===============================================================
FAIL: test_failing (__main__.UtilsTestCase)
---------------------------------------------------------------
Traceback (most recent call last):
- File "utils_test.py", line 15, in test_failing
self.assertEqual('incorrect', to_str('hello'))
AssertionError: 'incorrect' != 'hello'
- incorrect
+ hello
FAILED (failures=1)
Tests are organized into TestCase subclasses. Each test case is a method
beginning with the word test. If a test method runs without raising any
kind of Exception (including AssertionError from assert statements), the
test is considered to have passed successfully. If one test fails, the
TestCase subclass continues running the other test methods so you can
get a full picture of how all your tests are doing instead of stopping
at the first sign of trouble.
If you want to iterate quickly to fix or improve a specific test, you
can run only that test method by specifying its path within the test
module on the command line:
$ python3 utils_test.py UtilsTestCase.test_to_str_bytes
.
---------------------------------------------------------------
Ran 1 test in 0.000s
OK
You can also invoke the debugger from directly within test methods at
specific breakpoints in order to dig more deeply into the cause of
failures (see Item 80: “Consider Interactive Debugging with pdb” for how
to do that).
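For instance, here is a minimal, hypothetical sketch (not one of the listings above) of pausing a test with the breakpoint() built-in, which enters pdb by default:
# debug_example_test.py - hypothetical sketch; breakpoint() enters pdb
from unittest import TestCase, main
from utils import to_str

class DebugExampleTestCase(TestCase):
    def test_to_str_bytes(self):
        value = b'hello'
        breakpoint()  # Pauses here so I can inspect `value` interactively
        self.assertEqual('hello', to_str(value))

if __name__ == '__main__':
    main()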
The TestCase class provides helper methods for making assertions in your
tests, such as assertEqual for verifying equality, assertTrue for
verifying Boolean expressions, and many more (see help(TestCase) for the
full list). These are better than the built-in assert statement because
they print out all of the inputs and outputs to help you understand the
exact reason the test is failing. For example, here I have the same test
case written with and without using a helper assertion method:
# assert_test.py
from unittest import TestCase, main
from utils import to_str

class AssertTestCase(TestCase):
    def test_assert_helper(self):
        expected = 12
        found = 2 * 5
        self.assertEqual(expected, found)

    def test_assert_statement(self):
        expected = 12
        found = 2 * 5
        assert expected == found

if __name__ == '__main__':
    main()
Which of these failure messages seems more helpful to you?
$ python3 assert_test.py
FF
===============================================================
FAIL: test_assert_helper (__main__.AssertTestCase)
---------------------------------------------------------------
Traceback (most recent call last):
- File "assert_test.py", line 16, in test_assert_helper
self.assertEqual(expected, found)
AssertionError: 12 != 10

===============================================================
FAIL: test_assert_statement (__main__.AssertTestCase)
---------------------------------------------------------------
Traceback (most recent call last):
  File "assert_test.py", line 11, in test_assert_statement
    assert expected == found
AssertionError
FAILED (failures=2)
There’s also an assertRaises helper method for verifying exceptions that
can be used as a context manager in with statements (see Item 66:
“Consider contextlib and with Statements for Reusable try/finally
Behavior” for how that works). This appears similar to a try/except
statement and makes it abundantly clear where the exception is expected
to be raised:
# utils_error_test.py
from unittest import TestCase, main
from utils import to_str

class UtilsErrorTestCase(TestCase):
    def test_to_str_bad(self):
        with self.assertRaises(TypeError):
            to_str(object())

    def test_to_str_bad_encoding(self):
        with self.assertRaises(UnicodeDecodeError):
            to_str(b'\xfa\xfa')

if __name__ == '__main__':
    main()
You can define your own helper methods with complex logic in TestCase
subclasses to make your tests more readable. Just ensure that your
method names don’t begin with the word test, or they’ll be run as if
they’re test cases. In addition to calling TestCase assertion methods,
these custom test helpers often use the fail method to clarify which
assumption or invariant wasn’t met. For example, here I define a custom
test helper method for verifying the behavior of a generator:
# helper_test.py
from unittest import TestCase, main

def sum_squares(values):
    cumulative = 0
    for value in values:
        cumulative += value ** 2
        yield cumulative

class HelperTestCase(TestCase):
    def verify_complex_case(self, values, expected):
        expect_it = iter(expected)
        found_it = iter(sum_squares(values))
        test_it = zip(expect_it, found_it)
        for i, (expect, found) in enumerate(test_it):
            self.assertEqual(
                expect,
                found,
                f'Index {i} is wrong')

        # Verify both generators are exhausted
        try:
            next(expect_it)
        except StopIteration:
            pass
        else:
            self.fail('Expected longer than found')

        try:
            next(found_it)
        except StopIteration:
            pass
        else:
            self.fail('Found longer than expected')

    def test_wrong_lengths(self):
        values = [1.1, 2.2, 3.3]
        expected = [
            1.1**2,
        ]
        self.verify_complex_case(values, expected)

    def test_wrong_results(self):
        values = [1.1, 2.2, 3.3]
        expected = [
            1.1**2,
            1.1**2 + 2.2**2,
            1.1**2 + 2.2**2 + 3.3**2 + 4.4**2,
        ]
        self.verify_complex_case(values, expected)

if __name__ == '__main__':
    main()
The helper method makes the test cases short and readable, and the error messages that get printed are easy to understand:
$ python3 helper_test.py
FF
===============================================================
FAIL: test_wrong_lengths (__main__.HelperTestCase)
---------------------------------------------------------------
Traceback (most recent call last):
- File "helper_test.py", line 43, in test_wrong_lengths
self.verify_complex_case(values, expected)
- File "helper_test.py", line 34, in verify_complex_case
self.fail('Found longer than expected')
9.3. AssertionError: Found longer than expected
9.3.1. FAIL: test_wrong_results (__main__.HelperTestCase)
- Traceback (most recent call last):
- File "helper_test.py", line 52, in test_wrong_results
self.verify_complex_case(values, expected)
- File "helper_test.py", line 24, in verify_complex_case
f'Index {i} is wrong')
AssertionError: 36.3 != 16.939999999999998 : Index 2 is wrong
FAILED (failures=2)
I usually define one TestCase subclass for each set of related tests.
Sometimes, I have one TestCase subclass for each function that has many
edge cases. Other times, a TestCase subclass spans all functions in a
single module. I often create one TestCase subclass for testing each
basic class and all of its methods.
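As a rough illustration of that kind of grouping (the class and method names here are hypothetical, and the bodies are elided):
# organization_sketch.py - hypothetical names, for illustration only
from unittest import TestCase, main

class ToStrTestCase(TestCase):
    # One subclass for a single function with many edge cases
    def test_str(self): ...
    def test_bytes(self): ...
    def test_bad_type(self): ...

class MyContainerTestCase(TestCase):
    # One subclass for a basic class and all of its methods
    def test_add(self): ...
    def test_remove(self): ...
    def test_length(self): ...

if __name__ == '__main__':
    main()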
The TestCase class also provides a subTest helper method that enables
you to avoid boilerplate by defining multiple tests within a single test
method. This is especially helpful for writing data-driven tests, and it
allows the test method to continue testing other cases even after one of
them fails (similar to the behavior of TestCase with its contained test
methods). To show this, here I define an example data-driven test:
# data_driven_test.py
from unittest import TestCase, main
from utils import to_str

class DataDrivenTestCase(TestCase):
    def test_good(self):
        good_cases = [
            (b'my bytes', 'my bytes'),
            ('no error', b'no error'),  # This one will fail
            ('other str', 'other str'),
            ...
        ]
        for value, expected in good_cases:
            with self.subTest(value):
                self.assertEqual(expected, to_str(value))

    def test_bad(self):
        bad_cases = [
            (object(), TypeError),
            (b'\xfa\xfa', UnicodeDecodeError),
            ...
        ]
        for value, exception in bad_cases:
            with self.subTest(value):
                with self.assertRaises(exception):
                    to_str(value)

if __name__ == '__main__':
    main()
The ‘no error’ test case fails, printing a helpful error message, but
all of the other cases are still tested and confirmed to pass:
$ python3 data_driven_test.py
.
===============================================================
FAIL: test_good (__main__.DataDrivenTestCase) [no error]
---------------------------------------------------------------
Traceback (most recent call last):
- File "testing/data_driven_test.py", line 18, in test_good
self.assertEqual(expected, to_str(value))
AssertionError: b'no error' != 'no error'
FAILED (failures=1)
Note
Depending on your project’s complexity and testing requirements, the
pytest (https://pytest.org) open source package and its large number of
community plug-ins can be especially useful.
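For a sense of the difference, here is a rough sketch (assuming pytest is installed) of how a couple of the checks above might look in pytest's function-based style, where plain assert statements get detailed failure reports and pytest.raises takes the place of assertRaises:
# test_utils_pytest.py - a rough sketch assuming pytest is installed
import pytest
from utils import to_str

def test_to_str_bytes():
    assert to_str(b'hello') == 'hello'

def test_to_str_bad():
    with pytest.raises(TypeError):
        to_str(object())
You would run this with the pytest command line (e.g., pytest test_utils_pytest.py) instead of python3.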
9.2.1. Things to Remember
✦ You can create tests by subclassing the TestCase class from the
unittest built-in module and defining one method per behavior you’d like
to test. Test methods on TestCase classes must start with the word test.
✦ Use the various helper methods defined by the TestCase class, such as
assertEqual, to confirm expected behaviors in your tests instead of
using the built-in assert statement.
✦ Consider writing data-driven tests using the subTest helper method in
order to reduce boilerplate.