With the increasing complexity of the data structures and business logic of our main development project at work, we are slowly arriving at a point where the current way of testing is simply no longer tenable: the whole test suite of close to 2000 tests now takes about three hours to run. Bad habits are starting to creep in, such as committing your changes without running the full test suite on your local machine and relying on your build system to tell you what you have broken with your last commit. Yes, really bad habits.
There are a number of approaches to improve this desolate situation (cf. also this blog post):
- Speed up data access by staging all testing data in an in-memory database prior to the test run (see the sketch after this list);
- Introduce test tiers, with the simple (and, presumably, faster) model-level tests always enabled while the more complex functional tests only run overnight;
- Avoid repeated test object construction through careful analysis of which test objects can be reused for which tests.
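As a rough illustration of the first approach, here is a minimal sketch of a session-scoped fixture that stages test data once in an in-memory SQLite database. It assumes a SQLAlchemy-backed application; the toy schema and the staged rows are entirely hypothetical:

```python
from pytest import fixture
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine)

# Toy schema standing in for the application's real model (hypothetical).
metadata = MetaData()
foos = Table('foo', metadata,
             Column('id', Integer, primary_key=True),
             Column('name', String))


@fixture(scope='session')
def connection():
    # 'sqlite://' creates an in-memory SQLite database; the test data is
    # staged once per session instead of once per test.
    engine = create_engine('sqlite://')
    metadata.create_all(engine)
    with engine.begin() as conn:
        conn.execute(foos.insert(), [{'name': 'spam'}, {'name': 'eggs'}])
        yield conn
```

Tests would then receive the staged `connection` simply by naming it as a parameter.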
Ultimately, the solution will probably require a combination of all the approaches listed above, but I decided to start with the last one and to review our current testing framework along the way.
Our group is currently using a “classical” `unittest.TestCase` testing infrastructure with `nose` as the test driver. Carefully crafted `set_up` and `tear_down` methods in our test base classes ensure that the framework for our application (a REST application based on everest) is initialized and shut down properly for each test. Data shared between tests are kept in the instance namespace or in the class namespace of the test class itself or any of its super classes.
After an in-depth review of our complex hierarchy of test classes, I realized that it would be difficult to implement the desired flexibility in reusing test objects across tests, because the `unittest` framework offers very limited facilities for separating the creation of test objects from the tests themselves. Looking for alternative frameworks, I quickly came across `pytest`, which promises to be more modular through rigorous use of dependency injection, i.e., passing each test function exactly the test objects (or “fixtures”) it needs to perform the test. I decided to give `pytest` a shot, and the remainder of this post reports on my experiences over the course of this experiment.
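To make the dependency injection idea concrete before diving in, here is a toy example (all names are made up for illustration):

```python
from pytest import fixture


@fixture
def greeting():
    # pytest creates this test object on demand ...
    return 'hello'


def test_greeting(greeting):
    # ... and injects it into any test that names it as a parameter.
    assert greeting == 'hello'
```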
For easier porting of the existing tests, I started out replicating the functionality of the current testing base classes derived from `unittest.TestCase` with `pytest` fixtures. As it turned out, this also made it easy to understand the different philosophies behind the two testing environments. For example, the base class for tests requiring a Pyramid `Configurator` looked like this in the `unittest` framework:
```python
class TestCaseWithConfiguration(TestCaseWithIni):
    """
    Base class for test cases that access an initialized (but not
    configured) registry.

    :ivar config: The registry configurator. This is set in the set_up
      method.
    """
    #: The name of a package containing a configuration file to load.
    #: Defaults to `None` indicating that no configuration applies.
    package_name = None
    #: The name of a ZCML configuration file to use.
    config_file_name = 'configure.zcml'
    #: The section name in the ini file to look for settings. Override as
    #: needed in derived classes.
    ini_section_name = None

    def set_up(self):
        super(TestCaseWithConfiguration, self).set_up()
        # Create and configure a new testing registry.
        reg = Registry('testing')
        self.config = Configurator(registry=reg,
                                   package=self.package_name)
        if self.ini_section_name is not None:
            settings = self.ini.get_settings(self.ini_section_name)
        else:
            try:
                settings = self.ini.get_settings('DEFAULT')
            except configparser.NoSectionError:
                settings = None
        self.config.setup_registry(settings=settings)
        self.config.begin()
        if self.package_name is not None:
            if settings is not None:
                cfg_zcml = settings.get('configure_zcml',
                                        self.config_file_name)
            else:
                cfg_zcml = self.config_file_name
            self.config.load_zcml(cfg_zcml)

    def tear_down(self):
        super(TestCaseWithConfiguration, self).tear_down()
        tear_down_registry(self.config.registry)
        self.config.end()
```
`TestCaseWithConfiguration` inherits from `TestCaseWithIni`, which provides ini file parsing functionality. The Pyramid `Configurator` instance is created in the `set_up` method using parameters that are defined in the class namespace, and is stored in the test case instance namespace. To avoid cross-talk between tests, the `tear_down` method deconstructs the configurator’s registry.
This test base class is used along the lines of the following contrived example:
```python
from everest.testing import TestCaseWithConfiguration


class FooTestCase(TestCaseWithConfiguration):
    package_name = 'foopackage.tests.fooapp'

    def test_foo(self):
        self.assert_not_none(self.config)
```
Now, the equivalent `pytest` fixture and test module look like this:
```python
import configparser

from pyramid.config import Configurator
from pyramid.registry import Registry
from pytest import fixture
# tear_down_registry is everest's registry tear-down helper (import not
# shown here).


@fixture(scope='class')
def configurator(request, ini): # redefining ini pylint: disable=W0621
    """
    Fixture for all tests that set up a Pyramid configurator.
    """
    ini_section_name = getattr(request.cls, 'ini_section_name', None)
    pkg_name = getattr(request.cls, 'package_name', None)
    if pkg_name is not None:
        def_cfg_zcml = getattr(request.cls, 'config_file_name',
                               'configure.zcml')
    else:
        def_cfg_zcml = None
    if ini_section_name is not None:
        settings = ini.get_settings(ini_section_name)
    else:
        try:
            settings = ini.get_settings('DEFAULT')
        except configparser.NoSectionError:
            settings = None
    if settings is not None:
        cfg_zcml = settings.get('configure_zcml', def_cfg_zcml)
    else:
        cfg_zcml = def_cfg_zcml
    reg = Registry('testing')
    conf = Configurator(registry=reg, package=pkg_name)
    conf.setup_registry(settings=settings)
    conf.begin()
    if pkg_name is not None:
        conf.load_zcml(cfg_zcml)
    def tear_down():
        tear_down_registry(reg)
        conf.end()
    request.addfinalizer(tear_down)
    # Hand the fully initialized configurator to the requesting test.
    return conf
```
```python
class TestFoo:
    package_name = 'foopackage.tests.fooapp'

    def test_foo(self, configurator):
        assert configurator is not None
```
While the mechanics of the `pytest` fixture have not changed very much (and are perhaps a tad harder to read because of the inline `tear_down` function), the test module has become a lot simpler: there is no need to derive from a base class, and the properly initialized `Configurator` test object is passed automatically to the test function by the `pytest` framework. Moreover, while `pytest` fixtures can depend on each other (in the example, the `configurator` fixture depends on the `ini` fixture, which is in turn passed in automatically by `pytest`), they are much more modular than the `TestCase` classes, and you can pull in whichever fixtures you need in a given test function.
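For illustration, here is a minimal sketch of what such an `ini` fixture could look like; the small wrapper class is hypothetical, and the actual everest fixture may well differ (it relies on the `--app-ini-file` option introduced further below):

```python
import configparser

from pytest import fixture


@fixture(scope='session')
def ini(request):
    """Hypothetical sketch: parses the application ini file given on the
    command line and exposes it through a wrapper with the `get_settings`
    method the `configurator` fixture expects."""
    class _Ini(object):
        def __init__(self, file_name):
            self._parser = configparser.ConfigParser()
            if file_name is not None:
                self._parser.read(file_name)

        def get_settings(self, section):
            # Raises configparser.NoSectionError for missing sections.
            return dict(self._parser.items(section))

    return _Ini(request.config.getoption('--app-ini-file'))
```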
Since `pytest` comes with unit test integration, most of our old tests ran right out of the box. However, there was no support for the `__test__` attribute that `nose` offers for manually excluding base classes from testing; also, the unittest plugin of `pytest` does not automatically exclude classes with names starting with an underscore from testing, as `nose` does. Fortunately, fixing both problems was trivial using one of the many, many `pytest` hooks:
```python
from unittest import TestCase


def pytest_collection_modifyitems(session, config, items): # pylint: disable=W0613
    """
    Called by pytest after all tests have been collected.

    For compatibility with the existing test suite, we remove all tests
    that were collected from abstract base test classes or from classes
    that have a `__test__` attribute set to `False` (nose feature).
    """
    removed = 0
    for idx, item in enumerate(items[:]):
        cls = item.parent.cls
        if cls is not None and issubclass(cls, TestCase) \
           and (cls.__name__.startswith('_')
                or getattr(cls, '__test__', None) is False):
            items.pop(idx - removed)
            removed += 1
```
To make the basic test fixtures for `everest` applications usable from other projects, I bundled them, together with the test collection hook above, into a `pytest` plugin, which I published using the `setuptools` entry point mechanics, alongside the old nose plugin entry point, in the everest `setup` module like this:
```python
entry_points="""\
[nose.plugins.0.10]
everest = everest.ini:EverestNosePlugin
[pytest11]
everest = everest.tests.fixtures
"""
```
Next, I needed to add support for the custom `app-ini-file` option that is used to pass configuration options, particularly the configuration of the logging system, to everest applications. This was also straightforward using the `pytest_addoption` and `pytest_configure` hooks:
```python
def pytest_addoption(parser):
    """
    This adds the `--app-ini-file` option for configuring test runs with
    an `ini` file.

    Just like when configuring pyramid with an ini file, you can not only
    define WSGI application settings in your ini file, but also set up the
    logging system if you supply a "loggers" section.
    """
    parser.addoption("--app-ini-file", action="store", default=None,
                     help="everest application ini file.")


def pytest_configure(config):
    """
    Called by pytest after all options have been collected and all
    plugins have been initialized.

    This sets up the logging system from the "loggers" section in your
    application ini file, if configured.
    """
    app_ini_file = config.getoption('--app-ini-file')
    if app_ini_file is not None:
        setup_logging(app_ini_file)
```
If you now configure a console handler that uses a `logging.StreamHandler` to direct output to `sys.stderr` in an ini file and pass this file to the `pytest` driver with the `--app-ini-file` option, the logging output from failed tests will be reported by `pytest` in the “Captured stderr” output section.
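For reference, a minimal ini file along these lines, written in the standard `logging.config.fileConfig` format, might look like the following; the logger names, levels, and format string are just examples:

```ini
[loggers]
keys = root

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = INFO
handlers = console

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s] %(message)s
```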
Finally, I needed to get code coverage working with `pytest`. The simplest way to achieve this seemed to be the `pytest-cov` plugin for `pytest`. However, I could not get correct coverage results for the `everest` sources when using this plugin, presumably because quite a few of the `everest` modules are already loaded when the `everest` plugin for `pytest` is initialized, so I decided to run `coverage` separately from the command line, which is not much of an inconvenience and perhaps the right thing to do anyway.
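Running coverage separately could then look like this, assuming the standard `coverage` command line tool (the paths are placeholders):

```sh
coverage run -m pytest everest/tests
coverage report --include="everest/*"
```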
With this setup, we can now use the `pytest` test driver to run the existing test suite for our REST application while we gradually replace our complex test case class hierarchy with more modular `pytest` test fixtures that will ultimately help us cut down our test run time.
Of course, every piece of magic has its downsides. If your IDE pampers you with a feature to transport you to the definition of an identifier at the push of a button (F3 in PyDev, for instance), then you might find it disconcerting that this will not work with the test fixtures that are magically passed in by `pytest`. However, in most cases it should be very straightforward to find the definition of a particular fixture, since there are only a couple of places where you would sensibly put it (inside the test module if it is not shared across modules, or in a `conftest.py` module if it is).
In summary, I think that the benefits of using `pytest` far outweigh its downsides, and I encourage everyone to give it a try.