This talk on GitHub:
pganssle-talks/2024-pycon-us-pytest
pytest for unittesters
✨ pytest is way too magical ✨
... but in practice this doesn't matter
unittest: assert statements

def test_bad_assert(self):
    a = 1
    self.assertEqual(a, 2, "Custom error message")
$ python -m unittest
F
======================================================================
FAIL: test_bad_assert (test_bad_assert.Tests.test_bad_assert)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".../test_bad_assert.py", line 10, in test_bad_assert
    self.assertEqual(a, 2, "Custom error message")
AssertionError: 1 != 2 : Custom error message
----------------------------------------------------------------------
def test_bad_assert(self):
    a = 1
    assert a == 2, "Custom error message"
$ python -m unittest
F
======================================================================
FAIL: test_bad_assert (test_bad_assert.Tests.test_bad_assert)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".../test_bad_assert.py", line 7, in test_bad_assert
    assert a == 2
AssertionError: Custom error message
----------------------------------------------------------------------
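The difference is plain Python behavior: a bare assert reports only its message, with no hint of the compared values. A minimal sketch (the run() helper is illustrative, not from the talk):

```python
# Without pytest's assertion rewriting, the AssertionError carries only
# the message we passed -- the values 1 and 2 appear nowhere in the error.
def run():
    a = 1
    try:
        assert a == 2, "Custom error message"
    except AssertionError as exc:
        return str(exc)

print(run())  # Custom error message
```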
pytest: assert statements

def test_bad_assert():
    a = 1
    assert a == 2, "Custom error message"
$ pytest test_bad_assert.py
F                                                                        [100%]
================================== FAILURES ==================================
______________________________ test_bad_assert _______________________________

    def test_bad_assert():
        a = 1
>       assert a == 2, "Custom error message"
E       AssertionError: Custom error message
E       assert 1 == 2

test_bad_assert.py:3: AssertionError
========================== short test summary info ===========================
FAILED test_bad_assert.py::test_bad_assert - AssertionError: Custom error message
============================= 1 failed in 0.10s ==============================
Caveat: unlike unittest's assert methods, plain assert statements are stripped when running under python -O

def test_bad_assert():
    a = 1
    assert (
        a == 2,
        "My very long error message doesn't fit on one line, gotta break it up"
    )
def test_bad_assert():
    a = 1
    assert a == 2, \
        "My very long error message doesn't fit on one line, gotta break it up"
def test_bad_assert():
    a = 1
    assert tuple((
        a == 2,
        "My very long error message doesn't fit on one line, gotta break it up"
    ))  # a non-empty tuple is always truthy, so this assert never fails
$ pytest test_bad_assert.py
============================= test session starts ==============================
test_bad_assert.py .                                                     [100%]

=============================== warnings summary ===============================
test_bad_assert.py:7
  .../test_bad_assert.py:7: PytestAssertRewriteWarning: assertion is always true, perhaps remove parentheses?
    assert (

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
========================= 1 passed, 1 warning in 1.13s =========================
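The warning exists because of plain Python semantics: the parenthesized form builds a 2-tuple, and any non-empty tuple is truthy, so the assert can never fail. A quick demonstration (the check() helper is illustrative):

```python
# (condition, message) is a 2-tuple, and non-empty tuples are truthy.
always_true = (1 == 2, "some message")
print(bool(always_true))  # True

# Parenthesizing just the message is safe: implicit string concatenation
# spans multiple source lines without forming a tuple.
def check(a):
    assert a == 2, ("My very long error message doesn't fit on one line, "
                    "gotta break it up")

check(2)  # passes; check(1) would raise AssertionError
```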
Special assert methods

def test_special_asserts():
    a = (1, 2, 3)
    assert a is not None   # self.assertIsNot(a, None)
    assert a < (2, 3, 4)   # self.assertLess(a, (2, 3, 4))
    assert len(a) == 4     # self.assertLen(a, 4) - absltest extension
_____________________________ test_special_asserts _____________________________

    def test_special_asserts():
        a = (1, 2, 3)
        assert a is not None  # self.assertIsNot(a, None)
        assert a < (2, 3, 4)  # self.assertLess(a, (2, 3, 4))
>       assert len(a) == 4  # self.assertLen(a, 4) - absltest extension
E       assert 3 == 4
E        +  where 3 = len((1, 2, 3))

test_special_assert.py:5: AssertionError
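The `a < (2, 3, 4)` comparison works with a plain assert because tuples compare lexicographically in Python, so no special assert method is needed:

```python
# Tuples compare element by element, left to right (lexicographic order).
a = (1, 2, 3)
print(a < (2, 3, 4))   # True: 1 < 2 decides immediately
print(a < (1, 2, 4))   # True: first two elements tie, then 3 < 4 decides
print(a < (1, 2))      # False: (1, 2) is a strict prefix of a, so it sorts first
```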
import pytest

def test_float_bad():
    a = 0.1 + 0.2
    assert a == 0.3

def test_float_good():
    a = 0.1 + 0.2
    assert a == pytest.approx(0.3)
________________________________ test_float_bad ________________________________

    def test_float_bad():
        a = 0.1 + 0.2
>       assert a == 0.3
E       assert 0.30000000000000004 == 0.3

test_floats.py:5: AssertionError
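pytest.approx defaults to a relative tolerance of 1e-6; outside pytest, the closest stdlib analogue is math.isclose (default relative tolerance 1e-9). A sketch of the same comparison:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)              # False: binary floats accumulate rounding error
print(math.isclose(a, 0.3))  # True: compares within a relative tolerance

# An absolute tolerance matters when comparing against zero, where a
# relative tolerance alone can never be satisfied:
print(math.isclose(1e-12, 0.0))                # False
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))  # True
```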
class Tests(unittest.TestCase):
    def test_timestamp(self):
        for dt_1, dt_2 in get_datetimes():
            ts2 = dt_2.timestamp()
            dt_rt = dt_1 + (dt_2 - dt_1)
            self.assertEqual(ts2, dt_rt.timestamp())

class Tests(unittest.TestCase):
    def test_timestamp(self):
        for dt_1, dt_2 in get_datetimes():
            ts2 = dt_2.timestamp()
            dt_rt = dt_1 + (dt_2 - dt_1)
            assert ts2 == dt_rt.timestamp()
$ python -m unittest test_error_message.py
F
======================================================================
FAIL: test_timestamp (test_error_message.Tests.test_timestamp)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".../test_error_message.py", line 35, in test_timestamp
    self.assertEqual(ts2, dt_rt.timestamp())
AssertionError: 1715822040.0 != 1715818440.0
----------------------------------------------------------------------
Ran 1 test in 0.003s
FAILED (failures=1)
============================= test session starts ==============================
collected 1 item

test_error_message.py F                                                  [100%]

=================================== FAILURES ===================================
________________________________ test_timestamp ________________________________

    def test_timestamp():
        for dt_1, dt_2 in get_datetimes():
            ts2 = dt_2.timestamp()
            dt_rt = dt_1 + (dt_2 - dt_1)
>           assert ts2 == dt_rt.timestamp()
E           AssertionError: assert 1715822040.0 == 1715818440.0
E            +  where 1715818440.0 = <built-in method timestamp of datetime object ...>()
E            +  where <built-in method timestamp of datetime object...> = datetime(2024, 5, 15, 20, 14, tzinfo=ZoneInfo(key='America/New_York')).timestamp

test_error_message.py:33: AssertionError
=========================== short test summary info ============================
FAILED test_error_message.py::test_timestamp - AssertionError: assert 1715822040.0 == 1715818440.0
============================== 1 failed in 0.12s ===============================
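The one-hour discrepancy in that failure is a real DST bug: adding a timedelta to an aware datetime does wall-clock arithmetic, so crossing a DST boundary shifts the absolute time. A sketch of the failure (assumes the system tz database includes America/New_York):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

NYC = ZoneInfo("America/New_York")
dt_1 = datetime(1970, 1, 1, tzinfo=NYC)                   # EST, UTC-5
dt_2 = datetime(2024, 5, 16, 1, 14, tzinfo=timezone.utc)  # during EDT, UTC-4

# Addition keeps dt_1's tzinfo and adds to the *wall clock*; the result
# lands in EDT, so its absolute time is one hour earlier than dt_2.
dt_rt = dt_1 + (dt_2 - dt_1)
print(dt_2.timestamp() - dt_rt.timestamp())  # 3600.0
```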
pytest is compatible with unittest
$ python -m unittest
F
======================================================================
FAIL: test_special_asserts (test_special_methods.Tests.test_special_asserts)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".../test_special_methods.py", line 8, in test_special_asserts
    self.assertEqual(len(a), 4)
AssertionError: 3 != 4
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
$ pytest test_special_methods.py
============================= test session starts ==============================
collected 1 item

test_special_methods.py F                                                [100%]

=================================== FAILURES ===================================
__________________________ Tests.test_special_asserts __________________________

self = <test_special_methods.Tests testMethod=test_special_asserts>

    def test_special_asserts(self):
        a = (1, 2, 3)
        self.assertIsNot(a, None)
        self.assertLess(a, (2, 3, 4))
>       self.assertEqual(len(a), 4)
E       AssertionError: 3 != 4

test_special_methods.py:8: AssertionError
=========================== short test summary info ============================
FAILED test_special_methods.py::Tests::test_special_asserts - AssertionError: 3 != 4
============================== 1 failed in 1.04s ===============================
$ pytest test_error_message.py
============================= test session starts ==============================
collected 1 item

test_error_message.py F                                                  [100%]

=================================== FAILURES ===================================
_____________________________ Tests.test_timestamp _____________________________

    def test_timestamp(self):
        for dt_1, dt_2 in get_datetimes():
            ts2 = dt_2.timestamp()
            dt_rt = dt_1 + (dt_2 - dt_1)
>           self.assertEqual(ts2, dt_rt.timestamp())
E           AssertionError: 1715822040.0 != 1715818440.0

test_error_message.py:35: AssertionError
=========================== short test summary info ============================
FAILED test_error_message.py::Tests::test_timestamp - AssertionError: 1715822040.0 != 1715818440.0
============================== 1 failed in 0.93s ===============================
$ pytest test_error_message.py --showlocals
============================= test session starts ==============================
collected 1 item

test_error_message.py F                                                  [100%]

=================================== FAILURES ===================================
_____________________________ Tests.test_timestamp _____________________________

    def test_timestamp(self):
        for dt_1, dt_2 in get_datetimes():
            ts2 = dt_2.timestamp()
            dt_rt = dt_1 + (dt_2 - dt_1)
>           self.assertEqual(ts2, dt_rt.timestamp())
E           AssertionError: 1715822040.0 != 1715818440.0

dt_1       = datetime.datetime(1970, 1, 1, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='America/New_York'))
dt_2       = datetime.datetime(2024, 5, 16, 1, 14, tzinfo=datetime.timezone.utc)
dt_rt      = datetime.datetime(2024, 5, 15, 20, 14, tzinfo=zoneinfo.ZoneInfo(key='America/New_York'))
self       = <test_error_message.Tests testMethod=test_timestamp>
ts2        = 1715822040.0

test_error_message.py:35: AssertionError
=========================== short test summary info ============================
FAILED test_error_message.py::Tests::test_timestamp - AssertionError: 1715822040.0 != 1715818440.0
============================== 1 failed in 0.92s ===============================
pytest as a test runner: flags

-x: Exit on first failure
--maxfail: Exit after the first num failures or errors
--sw / --stepwise: Exit on test failure, then continue from last failing test
--nf / --new-first: Run tests ordered by last modified time of the file
--ff / --failed-first: Start with tests that failed last time
--lf / --last-failed: Only run tests that failed last time
--pdb: Drop into debugger on failure

pytest has many options related to configuring test output, see the documentation here
pytest
class ExampleTest(unittest.TestCase):
    def test_basic(self):
        a = 4
        b = 4
        self.assertEqual(a, b)

def test_basic():
    a = 4
    b = 4
    assert a == b
pytest discovers tests in files matching test_*.py or *_test.py, in functions matching test_* and classes matching Test*.

pytest also supports using classes:
class TestClass:
    @classmethod
    def setup_class(cls):
        """Run when class is initialized."""
        cls.EXPENSIVE_GLOBAL = generate_expensive_global()

    @classmethod
    def teardown_class(cls):
        """Run when class is destroyed."""
        cls.EXPENSIVE_GLOBAL.free_resources()

    def setup_method(self, method):
        """Run before every test method execution."""

    def teardown_method(self, method):
        """Run after every test method execution."""

    def test_method(self):
        """This is a test that is actually run."""
        assert 1 == 1
{setup,teardown}_module(module) also available
{setup,teardown}_function(func) works with bare test functions

"A unittest user explains the inheritance hierarchy of their abstract test classes"
@pytest.fixture           # Decorator to make a function a fixture
def fixture_name():
    do_some_setup()       # This code is executed before each test that uses
                          # the fixture is called
    yield fixture_payload # This is passed to the test function
    do_some_teardown()    # This code is executed after the test function
                          # completes
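The setup/yield/teardown shape is the same pattern as a generator-based context manager; a stdlib sketch of what a yield fixture does around each test (the names and events list are illustrative, not pytest API):

```python
from contextlib import contextmanager

events = []

@contextmanager
def fixture_name():
    events.append("setup")      # runs before the "test"
    yield {"option": "value"}   # the payload handed to the test
    events.append("teardown")   # runs after the "test" completes

# pytest drives the generator around every test that requests the fixture:
with fixture_name() as payload:
    events.append(f"test ran with {payload['option']}")

print(events)  # ['setup', 'test ran with value', 'teardown']
```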
@pytest.fixture
def config_dict():
    yield {"option": "value"}

def test_config(config_dict):  # config_dict() executed
    my_module.run_function(config=config_dict)

def test_modifying_config(config_dict):  # A new dict is created here
    config_dict["option"] = "value2"
    my_module.run_function(config=config_dict)

def test_hard_drive_deleted():  # config_dict() not executed
    my_module.delete_user_hard_drive()
    assert not any(pathlib.Path("/").iterdir())
@pytest.fixture
def random_user():
    username = my_module.create_random_user()
    yield username
    my_module.delete_user(username)

def test_func(random_user):
    my_module.some_func(random_user)

def test_func_with_config(random_user, config_dict):
    my_module.some_func(random_user, config=config_dict)
@pytest.fixture
def random_user():
    username = my_module.create_random_user()
    yield username
    my_module.delete_user(username)

@pytest.fixture
def random_user_with_home(random_user, tmp_path):
    home_dir = tmp_path / random_user.username
    home_dir.mkdir()  # Path.mkdir() returns None, so keep the path separately
    random_user.set_home_dir(home_dir)
    yield random_user

def test_get_home_dir(random_user_with_home):
    user_homedir = random_user_with_home.get_homedir()
    assert user_homedir.name == random_user_with_home.username
@pytest.fixture
def random_user_indirect(request) -> str:
    username_base = request.param
    user = User(username_base)
    user.create_user()
    yield user.username
    user.delete_user()

# Pass value via indirect parameterization
@pytest.mark.parametrize("random_user_indirect", ["josé"], indirect=True)
def test_users_with_accents_indirect(random_user_indirect):
    assert get_user(random_user_indirect).base == "josé"
@pytest.fixture
def username_base() -> str | None:
    return None

@pytest.fixture
def random_user(username_base: str | None) -> str:
    user = User(username_base)
    user.create_user()
    yield user.username
    user.delete_user()

# Pass value via direct parameterization (note username_base != random_user)
@pytest.mark.parametrize("username_base", ["josé"])
def test_users_with_accents(random_user):
    random_user.do_something()
More on direct parameterization and indirect parameterization in the pytest
documentation
@pytest.mark.parametrize("dt_str", [
    "2025-01-01T01+00:00",
    "2025-01-01T01:00+00:00",
    "2025-01-01T01:00:00+00:00",
    pytest.param("2025-01-01T01:00:00Z",
                 marks=pytest.mark.xfail(sys.version_info < (3, 11),
                                         reason="Z is not supported")),
])
def test_fromisoformat(dt_str: str) -> None:
    expected_datetime = datetime(2025, 1, 1, 1, tzinfo=UTC)
    assert datetime.fromisoformat(dt_str) == expected_datetime
$ pytest -v
============================= test session starts ==============================
collected 4 items

...::test_fromisoformat[2025-01-01T01+00:00] PASSED                      [ 25%]
...::test_fromisoformat[2025-01-01T01:00+00:00] PASSED                   [ 50%]
...::test_fromisoformat[2025-01-01T01:00:00+00:00] PASSED                [ 75%]
...::test_fromisoformat[2025-01-01T01:00:00Z] XFAIL                      [100%]
=========================== short test summary info ============================
XFAIL ...::test_fromisoformat[2025-01-01T01:00:00Z] - Z is not supported
========================= 3 passed, 1 xfailed in 0.10s =========================
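The xfail condition can be checked directly: support for the "Z" suffix in datetime.fromisoformat was added in Python 3.11 (the helper name here is illustrative):

```python
import sys
from datetime import datetime

# On Python < 3.11 a trailing "Z" raises ValueError; 3.11+ parses it as UTC.
def supports_z_suffix() -> bool:
    try:
        datetime.fromisoformat("2025-01-01T01:00:00Z")
    except ValueError:
        return False
    return True

print(supports_z_suffix() == (sys.version_info >= (3, 11)))  # True
```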
unittest_parametrize
import unittest_parametrize
from unittest_parametrize import parametrize, param

class Tests(unittest_parametrize.ParametrizedTestCase):
    @parametrize("dt_str", [
        ("2025-01-01T01+00:00",),
        ("2025-01-01T01:00+00:00",),
        ("2025-01-01T01:00:00+00:00",),
    ])
    def test_fromisoformat(self, dt_str):
        expected_datetime = datetime(2025, 1, 1, 1, tzinfo=UTC)
        self.assertEqual(datetime.fromisoformat(dt_str), expected_datetime)
$ python -m unittest -v parameterize
test_fromisoformat_0 (parameterize.Tests.test_fromisoformat_0) ... ok
test_fromisoformat_1 (parameterize.Tests.test_fromisoformat_1) ... ok
test_fromisoformat_2 (parameterize.Tests.test_fromisoformat_2) ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.000s
OK
@pytest.mark.parametrize("x", [4, 5, 6])
@pytest.mark.parametrize("y", [3, 2, 1])
def test_multiply(x, y):
    z = x * y
    assert z > x and z > y
$ pytest --tb=short
============================= test session starts ==============================
collected 9 items

.........FFF                                                             [100%]

=================================== FAILURES ===================================
______________________________ test_multiply[1-4] ______________________________
...:7: in test_multiply
    assert z > x and z > y
E   assert (4 > 4)
______________________________ test_multiply[1-5] ______________________________
...:7: in test_multiply
    assert z > x and z > y
E   assert (5 > 5)
______________________________ test_multiply[1-6] ______________________________
...:7: in test_multiply
    assert z > x and z > y
E   assert (6 > 6)
=========================== short test summary info ============================
FAILED ...::test_multiply[1-4] - assert (4 > 4)
FAILED ...::test_multiply[1-5] - assert (5 > 5)
FAILED ...::test_multiply[1-6] - assert (6 > 6)
========================= 3 failed, 6 passed in 0.23s ==========================
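Stacked parametrize decorators produce the cross product of their values; the nine (y, x) cases and the three that fail can be reproduced with itertools.product (the iteration order here is illustrative):

```python
from itertools import product

xs, ys = [4, 5, 6], [3, 2, 1]

# Stacking @parametrize("x", xs) and @parametrize("y", ys) yields
# len(xs) * len(ys) test cases.
cases = list(product(ys, xs))
print(len(cases))  # 9

# The assert z > x and z > y fails exactly when y == 1: then z == x.
failing = [(y, x) for y, x in cases if not (x * y > x and x * y > y)]
print(failing)  # [(1, 4), (1, 5), (1, 6)]
```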
pytest-randomly: Run tests in a random order
pytest-xdist: Run tests in parallel
pytest-subtests: Adds the subtests fixture
pytest-cov: Orchestrates coverage with pytest
pytest-memray: Memory profiling
pytest-leaks: Memory leak detection
pytest-benchmark / pytest-speed: Speed benchmarks

$ pytest --memray tests/test_easter.py
============================= test session starts ==============================
Using --randomly-seed=609276657
collected 163 items

tests/test_easter.py ................................................... [ 31%]
........................................................................ [ 75%]
........................................                                 [100%]

================================ MEMRAY REPORT =================================
Allocation results for tests/test_easter.py::test_easter_orthodox[easter_date9]
at the high watermark

📦 Total memory allocated: 1.4KiB
📏 Total allocations: 2
📊 Histogram of allocation sizes: |█ |
🥇 Biggest allocating functions:
- wrapper:pytest_memray/plugin.py:192 -> 844.0B
- test_easter_orthodox:tests/test_easter.py:83 -> 614.0B
...
pygments: Code highlighting in tracebacks
pytest-hammertime: Turns '.' into '🔨'
pytest-pumpkin-spice: Adds "pumpkin spice" flavor to your output
pytest-sugar: Changes look and feel of pytest
$ pytest test_bad_assert.py
============================= test session starts ==============================
collected 1 item

test_bad_assert.py F                                                     [100%]

=================================== FAILURES ===================================
_______________________________ test_bad_assert ________________________________

    def test_bad_assert():
        a = 1
>       assert a == 2, "Custom error message"
E       AssertionError: Custom error message
E       assert 1 == 2

test_bad_assert.py:3: AssertionError
=========================== short test summary info ============================
FAILED test_bad_assert.py::test_bad_assert - AssertionError: Custom error message
$ pytest --pumpkin-spice pytest
================================================ test session starts ================================================
collected 33 items / 1 skipped

pytest/test_bad_assert.py ❄️                                              [  3%]
pytest/test_bad_asserts_tuple.py 🎃                                       [  6%]
pytest/test_classes.py 🥧                                                 [  9%]
pytest/test_confusing_error_message.py 🥧                                 [ 12%]
pytest/test_error_message.py ❄️                                           [ 15%]
pytest/test_fixture_parameters.py 🎃 🎃 🎃                                [ 24%]
pytest/test_floats.py ❄️ 🎃                                               [ 30%]
pytest/test_marks.py 🎃 ❄️                                                [ 36%]
pytest/test_parametrize_ids.py 🎃 🎃 🎃 🎃                                [ 60%]
pytest/test_special_assert.py ❄️                                          [ 63%]
pytest/test_stacked_parameterization.py 🎃 🎃 🎃 🎃 🎃 🎃 ❄️ ❄️ ❄️        [ 90%]
pytest/test_xfail.py 🎃 🍂 ❄️                                             [100%]

====================================================== ERRORS =======================================================
...
FAILED ❄️ pytest/test_xfail.py::test_xpass
ERROR 🥧 pytest/test_classes.py::TestClass::test_method - NameError: name 'generate_expensive_global' is not defined
$ pytest
Test session starts (platform: linux, Python 3.11.8, pytest 8.2.0, pytest-sugar 1.0.0)
plugins: cov-5.0.0, pumpkin-spice-0.1.0, subtests-0.12.1, sugar-1.0.0, hypothesis-6.100.2
collected 2096 items / 1278 deselected / 818 selected

tests/test_easter.py        ✓✓✓✓✓ ...   20% ██
tests/test_isoparser.py     ✓✓✓x✓ ...   89% ████████▉
tests/test_relativedelta.py ✓✓✓✓✓ ...  100% ██████████

Results (1.32s): 817 passed 1 xfailed 1278 deselected
Takeaways:
pytest can be adopted incrementally
You can mix pytest- and unittest-style code
pytest is the standard way to do testing
pytest is extremely feature rich (markers, custom plugins, etc.)

Further reading:
Subtests in Python (https://blog.ganssle.io/articles/2020/04/subtests-in-python.html)
PyTexas 2022: xfail and skip: What to do with tests you know will fail (https://ganssle.io/talks)