
pytest Cheat Sheet — Write Better Python Tests



🚀 Running Tests

pytest — run tests
# Run all tests
pytest

Run specific file

pytest tests/test_users.py

Run specific test

pytest tests/test_users.py::test_create_user

Run specific class

pytest tests/test_users.py::TestUserModel

Run tests matching a keyword

pytest -k "login"
pytest -k "login and not admin"

Verbose output

pytest -v

Stop on first failure

pytest -x

Show print() output

pytest -s

Run last failed tests only

pytest --lf

Run failed first, then rest

pytest --ff

Test discovery rules
pytest automatically finds tests that match these patterns:
# Files: test_*.py or *_test.py
test_users.py     ✅
users_test.py     ✅
users.py          ❌

Functions: test_*

def test_login():    ✅
def login_test():    ❌

Classes: Test*

class TestUser:      ✅  (no __init__ method!)
class UserTest:      ❌

Directory structure

tests/
    test_users.py
    test_auth.py
    conftest.py    # Shared fixtures

✅ Assertions

assert — plain Python assertions
pytest uses plain assert statements — no special methods needed.
def test_basics():
    assert 1 + 1 == 2
    assert "hello" in "hello world"
    assert len([1, 2, 3]) == 3
    assert user.is_active
    assert not user.is_banned
    assert result is None
    assert result is not None
    assert isinstance(result, dict)

Approximate comparison (floats)

assert 0.1 + 0.2 == pytest.approx(0.3)

Compare dicts/lists (pytest shows nice diffs)

assert actual == {"name": "Alice", "role": "admin"}

pytest rewrites assert to show detailed failure messages automatically.
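pytest.approx also accepts explicit tolerances — a minimal sketch (rel and abs are real pytest.approx parameters; the values are illustrative):

```python
import pytest

# rel= is a relative tolerance, abs= an absolute one
assert 2.0 == pytest.approx(2.002, rel=1e-2)     # within 1% of 2.002
assert 0.0 == pytest.approx(0.0005, abs=1e-3)    # near zero, prefer abs
```

If only abs is given, the relative tolerance is not considered at all.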

pytest.raises — test exceptions
import pytest

def test_division_by_zero():
    with pytest.raises(ZeroDivisionError):
        1 / 0

def test_error_message():
    with pytest.raises(ValueError, match="invalid literal"):
        int("not_a_number")

def test_custom_exception():
    with pytest.raises(PermissionError) as exc_info:
        delete_admin_user()
    assert "not allowed" in str(exc_info.value)
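pytest.raises is an ordinary context manager, so the ExceptionInfo object it yields can be inspected after the block; .type and .value are real attributes. A minimal sketch:

```python
import pytest

# Works outside the test runner too; exc_info wraps the raised exception
with pytest.raises(ValueError) as exc_info:
    int("not_a_number")

assert exc_info.type is ValueError
assert "not_a_number" in str(exc_info.value)
```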

🔧 Fixtures

@pytest.fixture
Fixtures provide test data and setup/teardown.
import pytest

@pytest.fixture
def user():
    return {"name": "Alice", "email": "alice@example.com"}

def test_user_name(user):
    assert user["name"] == "Alice"

Fixture with setup AND teardown

@pytest.fixture
def db_connection():
    conn = create_connection()
    yield conn       # Test runs here
    conn.close()     # Teardown after test

Fixture scopes

@pytest.fixture(scope="function")   # Default: new for each test
@pytest.fixture(scope="class")      # Once per test class
@pytest.fixture(scope="module")     # Once per file
@pytest.fixture(scope="session")    # Once per entire test run

Auto-use (applies to all tests automatically)

@pytest.fixture(autouse=True)
def reset_db():
    db.reset()
    yield
    db.reset()

conftest.py — shared fixtures
# tests/conftest.py — available to ALL tests in this directory
import pytest

@pytest.fixture
def api_client():
    from myapp import create_app
    app = create_app(testing=True)
    return app.test_client()

@pytest.fixture
def auth_headers():
    return {"Authorization": "Bearer test-token"}

# tests/test_api.py — just use the fixture name as a parameter

def test_get_users(api_client, auth_headers):
    response = api_client.get("/users", headers=auth_headers)
    assert response.status_code == 200

No imports needed — pytest discovers conftest.py automatically.

tmp_path — built-in temp directory fixture
# tmp_path is a built-in fixture — no setup needed
def test_write_file(tmp_path):
    file = tmp_path / "test.txt"
    file.write_text("hello")
    assert file.read_text() == "hello"
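tmp_path is a pathlib.Path, so every Path method works on it. Outside a test you can mimic the behaviour with the standard library — a sketch (the directory and file names are illustrative):

```python
import tempfile
from pathlib import Path

# Rough stand-in for what tmp_path provides inside a test
with tempfile.TemporaryDirectory() as d:
    tmp_path = Path(d)
    sub = tmp_path / "reports"
    sub.mkdir()                                   # subdirectories work too
    (sub / "out.json").write_text('{"ok": true}')
    assert (sub / "out.json").read_text() == '{"ok": true}'
```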

tmp_path_factory for session-scoped temp dirs

@pytest.fixture(scope="session")
def data_dir(tmp_path_factory):
    return tmp_path_factory.mktemp("data")

🔄 Parametrize

@pytest.mark.parametrize
Run the same test with different inputs.
import pytest

@pytest.mark.parametrize("input,expected", [
    ("hello", 5),
    ("", 0),
    ("world", 5),
])
def test_string_length(input, expected):
    assert len(input) == expected

Multiple parameters

@pytest.mark.parametrize("a,b,result", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_add(a, b, result):
    assert a + b == result

With IDs for readable output

@pytest.mark.parametrize("email,valid", [
    ("user@example.com", True),
    ("invalid", False),
    ("", False),
], ids=["valid_email", "no_at_sign", "empty"])
def test_validate_email(email, valid):
    assert is_valid_email(email) == valid
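Stacking parametrize decorators runs the cross-product of all combinations. A sketch that executes a generated test file in-process with pytest.main (assumes pytest is importable; the file and test names are made up):

```python
import os
import tempfile
import textwrap

import pytest

test_src = textwrap.dedent("""
    import pytest

    @pytest.mark.parametrize("x", [0, 1])
    @pytest.mark.parametrize("y", ["a", "b"])
    def test_combo(x, y):          # 2 x 2 = 4 test cases
        assert isinstance(x, int) and isinstance(y, str)
""")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test_combo.py")
    with open(path, "w") as f:
        f.write(test_src)
    exit_code = pytest.main(["-q", path])   # 0 means all tests passed

assert exit_code == 0
```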

🏷️ Markers

@pytest.mark — tag and filter tests
# Mark tests
@pytest.mark.slow
def test_full_pipeline():
    ...

@pytest.mark.integration
def test_database_connection():
    ...

Skip

@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    ...

Skip conditionally

import sys

@pytest.mark.skipif(sys.platform == "win32", reason="Unix only")
def test_unix_permissions():
    ...

Expected failure

@pytest.mark.xfail(reason="Known bug #123")
def test_known_bug():
    ...

Run by marker

pytest -m slow

pytest -m "not slow"

pytest -m "integration and not slow"

Register markers in pyproject.toml to avoid warnings

[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
]

🎭 Mocking

monkeypatch and unittest.mock
# monkeypatch (built-in fixture)
import os

def test_with_env_var(monkeypatch):
    monkeypatch.setenv("API_KEY", "test-key")
    assert os.environ["API_KEY"] == "test-key"

def test_mock_function(monkeypatch):
    monkeypatch.setattr("myapp.service.send_email", lambda *a: None)
    result = create_user("alice@example.com")  # Won't actually send email
    assert result.email == "alice@example.com"
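The same patching is available outside the fixture via pytest.MonkeyPatch (public API since pytest 6.2) — a sketch using a throwaway variable name:

```python
import os

import pytest

mp = pytest.MonkeyPatch()
mp.setenv("CHEATSHEET_DEMO_KEY", "test-key")
assert os.environ["CHEATSHEET_DEMO_KEY"] == "test-key"

mp.undo()                        # restores the original environment
assert "CHEATSHEET_DEMO_KEY" not in os.environ
```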

unittest.mock (more powerful)

from unittest.mock import patch, MagicMock

@patch("myapp.service.requests.get")
def test_api_call(mock_get):
    mock_get.return_value.json.return_value = {"name": "Alice"}
    mock_get.return_value.status_code = 200

    result = fetch_user(1)
    assert result["name"] == "Alice"
    mock_get.assert_called_once_with("https://api.example.com/users/1")

Mock as context manager

def test_with_mock():
    with patch("myapp.db.save") as mock_save:
        mock_save.return_value = True
        assert create_user("alice") is True
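Alongside return_value, a mock's side_effect can raise an exception or return successive values per call — a sketch with a made-up mock_fetch:

```python
from unittest.mock import MagicMock

# First call raises, second call returns a value
mock_fetch = MagicMock(side_effect=[TimeoutError("slow"), {"name": "Alice"}])

try:
    mock_fetch(1)
except TimeoutError:
    result = mock_fetch(1)       # retry succeeds

assert result == {"name": "Alice"}
assert mock_fetch.call_count == 2
```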

⚙️ Configuration

pyproject.toml / pytest.ini
# pyproject.toml (recommended)
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
addopts = "-v --tb=short"
markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
]

Useful addopts

addopts = """
    -v
    --tb=short
    --strict-markers
    -x
    --cov=myapp
    --cov-report=term-missing
"""

Coverage config
# Install
pip install pytest-cov

Run with coverage

pytest --cov=myapp
pytest --cov=myapp --cov-report=html            # HTML report
pytest --cov=myapp --cov-report=term-missing    # Show missing lines

.coveragerc or pyproject.toml

[tool.coverage.run]
source = ["myapp"]
omit = ["tests/*", "*/migrations/*"]

[tool.coverage.report]
fail_under = 80