[tmt] Ideas for test fixtures support

Hello tmt users,

There have been various requests to support test-fixture-like capabilities in tmt, i.e. the ability to run a test or script right before and/or after another test, with the exact side effects still to be discussed. We would like to move forward with some of these ideas, and we would like to hear from users which designs are preferred so that we can prioritize their implementation. Most of these are not mutually exclusive, and we may implement multiple formats if there is enough demand. Without further ado, here are the proposed designs we have so far:

Status quo

We already have some fixture-like support for tests in the form of the prepare and finish steps.

  • pytest equivalent: N/A
  • Behaves like a fixture for all tests, i.e. if the prepare step fails, none of the tests are run
  • Currently Testing Farm does not display this nicely when one of these steps fails, but there are plans to improve that reporting
  • propose: Support equivalent test frameworks in the prepare/finish steps, e.g. executing beakerlib tests as prepare tasks

Example:

/plan:
  prepare:
    how: shell
    script: mkdir $TMT_PLAN_DATA/workdir
  finish:
    how: shell
    script: rm -rf $TMT_PLAN_DATA/workdir
/test:
  test: cd $TMT_PLAN_DATA/workdir
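
The proposed framework support could, for instance, allow running a beakerlib script as a prepare task. A hypothetical sketch (the framework key under prepare/finish does not exist today and is only an illustration of the idea):

/plan:
  prepare:
    how: shell
    framework: beakerlib    # hypothetical key, not current tmt syntax
    script: ./setup.sh
  finish:
    how: shell
    framework: beakerlib    # hypothetical key
    script: ./cleanup.sh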

(New) Tests as test fixture

Some projects already use some tests as setup steps for others. The plan here is to formalize the expected relationship and behavior between such tests:

  • pytest equivalent: session scope
  • The test fixtures in this case are run only once for the whole plan even if multiple tests use them, and their effects remain relatively contained within the area the fixture covers
  • The fixtures have the full metadata and capability of a test such as specific framework, tags, requires, etc.
  • propose: Filters such as tmt run tests ... would automatically pick up the setup/cleanup fixtures that a test requires (recursively, if those fixtures require fixtures of their own) and guarantee the ordering between the tests and their fixtures
  • propose: If a setup test fails or is skipped, the tests associated with it are not run, and instead the test results are marked as skipped (or maybe failed?)
  • propose: The ordering of the setup/cleanup fixtures will be relatively loose (tests that do not request a given fixture may still run between its setup and cleanup) just to keep the implementation simple
  • propose: The scope could be made configurable so that a fixture is re-run for each test that requires it

Example:

/test:
  path: /
  /setup:
    test: mkdir workdir
  /actual:
    fixture:
      setup: /test/setup
      cleanup: /test/cleanup
    test: cd workdir
  /cleanup:
    test: rm -rf workdir
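
If the scope became configurable as proposed, it might be expressed with an extra key in the fixture mapping. A hypothetical sketch (the scope key is not existing syntax and its values are only illustrative):

/test:
  /actual:
    fixture:
      setup: /test/setup
      cleanup: /test/cleanup
      scope: test    # hypothetical: re-run for every test; 'plan' would keep the run-once behavior
    test: cd workdir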

(New) Checks as test fixture

Another type of test fixture we have been asked about is one that runs every time a test requests it:

  • pytest equivalent: function scope
  • The test fixtures in this case are run immediately before and after each test, even if other
    tests reuse the same definition
  • propose: The setup/cleanup entries of the check must reference other test entries

Example:

/test:
  path: /
  /setup:
    test: mkdir workdir
  /actual:
    check:
      how: fixture
      setup: /test/setup
      cleanup: /test/cleanup
    test: cd workdir
  /cleanup:
    test: rm -rf workdir

(New) Tests as requires for other tests

This is basically the same as Tests as test fixture and/or Checks as test fixture, but with a different interface.

  • pytest equivalent: function or session scope

Example:

/test:
  path: /
  /setup:
    test: mkdir workdir
  /actual:
    requires:
      - test: /test/setup
        type: setup
      - test: /test/cleanup
        type: cleanup
    test: cd workdir
  /cleanup:
    test: rm -rf workdir

Out of scope ideas

Some ideas that would be out of scope for us:

  • Reusable fixtures like in pytest.
    This would complicate the fmf files and the tmt code too much. Instead, fmf inheritance and YAML anchors should ease the need for this.
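
To illustrate how YAML anchors could already cover simple reuse, a fixture definition (using the fixture key proposed above) could be shared between tests within a single fmf file:

/test:
  /first:
    fixture: &workdir-fixture
      setup: /test/setup
      cleanup: /test/cleanup
    test: ./first.sh
  /second:
    fixture: *workdir-fixture
    test: ./second.sh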