Make sure that you install:
!pip install pytest pytest-regtest pytest-cov mock
# create fresh folders
!rm -rf tests
!rm -rf project
!mkdir -p tests
!mkdir -p project
# folders are python packages
!touch tests/__init__.py
!touch project/__init__.py
# to support "import project" from within the "tests" folder we manipulate the
# lookup path list. the correct way to do this is to set up a proper python package, see
# https://siscourses.ethz.ch/python_packaging_hands_on/hands_on_python_packaging.html#1
import sys, os
sys.path.append(os.path.abspath("."))
import project
%%file project/algorithm.py
def add(a, b):
    return a + b

def sub(a, b):
    return a - b
%%file tests/test_simple.py
from project.algorithm import add

def test_one():
    assert add(1, 1) == 2

def test_two():
    """ this test will fail !"""
    a = [1, 2, 3]
    b = [3, 4]
    assert add(a, b) == [1, 2, 3, 4]
The flag -v lists every test function on a separate line:
!py.test -v tests
Comment: You see that the red lines provide a lot of context information about why this check failed.
Let's fix our test:
%%file tests/test_simple.py
from project.algorithm import add

def test_one():
    assert add(1, 1) == 2

def test_two():
    """ this test passes now """
    a = [1, 2, 3]
    b = [3, 4]
    assert add(a, b) == [1, 2, 3, 3, 4]
!pytest -v tests
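The fixed expectation follows from how + behaves on lists: add simply applies +, and on lists + concatenates. A quick sketch:

```python
def add(a, b):
    # same one-liner as in project/algorithm.py
    return a + b

# on lists, + concatenates instead of adding element-wise
result = add([1, 2, 3], [3, 4])
print(result)  # [1, 2, 3, 3, 4]
```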
For implementing test fixtures, pytest offers an approach based on function arguments, as an alternative to "classic" setup and tear-down methods. Here we use the fixture tmpdir, which is part of pytest:
%%file tests/test_fixtures.py
def test_files(tmpdir):
    print()
    print()
    print("tmpdir is", type(tmpdir))
    print("temp folder for this test is", tmpdir.strpath)
    print()
The "-s" flag shows the output from the print statements:
!py.test -vs tests
We now create our own fixture to mimic setup / teardown:
%%file tests/test_fixtures.py
import pytest
import os

@pytest.fixture
def temp_text_file(tmpdir):
    # setup
    path = tmpdir.join("test_text_file.txt").strpath
    with open(path, "w") as fh:
        print("line 1", file=fh)
        print("line 2", file=fh)
    # pass it to the test function
    yield path
    # now test function is done, do some cleanup:
    os.remove(path)

def test_files(temp_text_file):
    with open(temp_text_file, "r") as fh:
        lines = fh.readlines()
    assert lines == ["line 1\n", "line 2\n"]
!py.test -v tests/
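The yield in the fixture above works like Python's contextlib.contextmanager: everything before the yield is setup, everything after it is teardown. A standalone sketch of the same pattern using only the standard library (no pytest involved):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def temp_text_file():
    # setup: create a temporary text file with two lines
    fd, path = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(fd, "w") as fh:
        print("line 1", file=fh)
        print("line 2", file=fh)
    yield path          # hand the path to the with-block
    os.remove(path)     # teardown: runs after the with-block ends

with temp_text_file() as path:
    with open(path) as fh:
        lines = fh.readlines()
```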
Regression tests do not test for correct results, but check whether known and acknowledged results change. The pytest-regtest plug-in for pytest supports this by offering a regtest fixture (it is not a fixture in the strict sense, but is used the same way as pytest implements fixtures). This "fixture" works like a file handle:
%%file tests/test_for_regressions.py
from project.algorithm import add

def test_1(regtest):
    result = add("12345", "6789")
    print(result, file=regtest)
    result = add([1, 2, 3], [4, 5, 6])
    print(result, file=regtest)
The first time we run the test it will fail, because we have implemented how to record the results, but have not approved any results yet:
!pytest -v tests
We check this output (lines starting with -). If we regard it as correct, we approve it as follows:
!pytest -v --regtest-reset tests/
And if we run the tests again everything is fine now:
!pytest -v tests/
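Conceptually the plug-in just stores the text written to regtest on the reset run and compares against it on later runs; a simplified standalone sketch of that idea (not the actual pytest-regtest implementation):

```python
import io

# output approved earlier with --regtest-reset
recorded = "123456789\n[1, 2, 3, 4, 5, 6]\n"

# what the current test run writes to its "file handle"
out = io.StringIO()
print("12345" + "6789", file=out)
print([1, 2, 3] + [4, 5, 6], file=out)

# the regression check: the current output must match the recording
assert out.getvalue() == recorded
```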
Let's break the regression test by modifying the test file:
%%file tests/test_for_regressions.py
from project.algorithm import add

def test_1(regtest):
    result = add("12345", "678910")
    print(result, file=regtest)
    result = add([1, 2, 3], [4, 5, 6])
    print(result, file=regtest)
!pytest -v tests/
The line starting with - shows the current result, the line starting with + the recorded (expected) result.
The recorded outputs of the regtest fixture are stored in the folder tests/_regtest_outputs, so don't forget to add them to your version control system.
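The -/+ markers are standard unified-diff notation; a quick illustration with Python's difflib, using strings similar to (but not taken verbatim from) the output above:

```python
import difflib

current  = ["123456678910\n", "[1, 2, 3, 4, 5, 6]\n"]  # what the test produced
recorded = ["123456789\n",    "[1, 2, 3, 4, 5, 6]\n"]  # the approved recording

diff = list(difflib.unified_diff(current, recorded))
# "-" lines come from the current output, "+" lines from the recording
print("".join(diff))
```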
!pytest -v --regtest-reset tests/
Finally, the mock package lets us replace objects with mock objects during a test:
%%file tests/test_with_mock.py
import os

import mock

def test_with_mock():
    with mock.patch("os.getcwd") as my_mock:
        my_mock.return_value = "here"
        assert os.getcwd() == "here"
!pytest -v tests
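Mock objects also record how they were called, which lets a test verify interactions and not just return values. A minimal standalone sketch (using unittest.mock from the standard library, which the mock package mirrors):

```python
import os
from unittest import mock

with mock.patch("os.getcwd") as my_mock:
    my_mock.return_value = "here"
    cwd = os.getcwd()          # calls the mock, not the real os.getcwd

assert cwd == "here"
my_mock.assert_called_once_with()  # the patched function was called exactly once
```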
The pytest-cov plug-in reports how much of the project code the tests exercise:
!py.test --cov project --cov-report html --cov-report term-missing tests/
# This also wrote a nice HTML report to the `htmlcov` folder:
!ls -l htmlcov/index.html