Running stress_test without arguments runs the entire test suite 100
times (serially) and stops at the first failure.
Passing test names as arguments (stress_test bug.123.t ...) runs only
those tests.
If 100 iterations proves too few or too many, pass --repeat <integer>.
Run 'stress_test -h' for more information.
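For example, to run a single test 500 times:

    stress_test --repeat 500 bug.123.t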
README
======
This is the task.git/test/README file, and contains notes about the Taskwarrior
test suite.
Running Tests
-------------
TL;DR:

    cd test && make && ./run_all && ./problems
All unit tests produce TAP (Test Anything Protocol) output, and are run by the
'run_all' test harness.
The 'run_all' script produces an 'all.log' file, which is the accumulated
output of all tests. Before executing 'run_all' you need to compile the C++
unit tests, by running 'make' in the 'test' directory.
The script 'problems' will list all the tests that fail, with a count of the
failing tests.
Any TAP harness may be used.
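For example, Perl's 'prove' harness can run a single executable test directly:

    prove ./template.t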
Note that adding the '--serial' option to ./run_all causes all tests to be
executed serially. The default runs Python and C++ tests in parallel, alongside
the Perl tests, which run serially (due to isolation limitations). Using
'--serial' will make for a slower test run.
Architecture
------------
There are three varieties of tests:
* Perl unit tests that use Test::More and the JSON module. We are phasing
these out, and will accept no new Perl tests. These tests are high level
and exercise Taskwarrior at the command line level.
* C++ unit tests that test low-level object interfaces. These are typically
very fast tests, and are exhaustive in nature.
* Python unit tests that are at the highest level, exercising the command
line, hooks and syncing. There is an example, 'template.t', that shows how
to perform various high-level tests; a minimal sketch in the same style
appears after this list.
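As a sketch of that style (assuming the 'basetest' helpers and the 'simpletap'
TAP runner that the suite's Python tests use, per 'template.t'):

    import sys
    import os
    import unittest
    # Make the 'basetest' helpers importable when run from the test directory.
    sys.path.append(os.path.dirname(os.path.abspath(__file__)))
    from basetest import Task, TestCase

    class TestExample(TestCase):
        def setUp(self):
            """Executed before each test in the class"""
            self.t = Task()

        def test_add_then_list(self):
            """Added task shows up in the list report"""
            self.t("add foo")
            code, out, err = self.t("list")
            self.assertIn("foo", out)

    if __name__ == "__main__":
        from simpletap import TAPTestRunner
        unittest.main(testRunner=TAPTestRunner())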
All tests are named with the pattern '*.t'; files of any other form are not run
by the test harness. Additionally, a test must be set executable (chmod +x) for
the harness to run it. Perl and Python tests can still be run manually, by
launching them with 'perl test.t' or 'python test.t'. The executable bit also
allows us to keep tests submitted for bugs that are not scheduled to be fixed
in the upcoming release, without those failing tests preventing us from seeing
a 100% pass rate for the bugs we *have* fixed.
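For example, with a Python test:

    chmod +x test.t     # executable: 'run_all' will pick it up
    python test.t       # a manual run works with or without the bit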
Goals
-----
The test suite is evolving, and becoming a better tool for determining whether
code is ready for release. There are goals that shape these changes, and they
are:
* Migrate the test suite to Python and C++, eliminating all Perl. The Python
test suite is more expressive and higher level, and migrating reduces
dependencies.
* Increase test coverage by testing more features, more thoroughly.
* Write fewer bug regression tests. Over time, bug regression tests are less
useful than feature tests, and more likely to contain overlapping coverage.
* The Python test suite provides test isolation, such that each test is run
in a separate directory. This allows parallelization, which will improve
as the Perl tests are eliminated.
* Eliminate obsolete tests, which are tests that have overlapping coverage.
This means migrating bug-specific tests to feature tests.
* Categorize the tests, restructure the directories.
What Makes a Good Test
----------------------
A good test ensures that a feature is functioning as expected, and contains
both positive and negative aspects; in other words, it looks for expected
behavior as well as for the absence of unexpected behavior.
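For instance, a report test can assert both sides; a sketch using the same
'basetest' conventions as above (imports and TAP runner boilerplate omitted):

    class TestNextReport(TestCase):
        def setUp(self):
            self.t = Task()

        def test_next_hides_completed(self):
            """next report lists pending tasks and omits completed ones"""
            self.t("add pending_task")
            self.t("add done_task")
            self.t("2 done")
            code, out, err = self.t("next")
            self.assertIn("pending_task", out)    # positive: expected behavior
            self.assertNotIn("done_task", out)    # negative: nothing unexpected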
Conventions for writing a test
------------------------------
If you wish to contribute tests, please consider the following guidelines:
* Tests created after bugs or feature requests should (ideally) have an entry
on https://bug.tasktools.org/ and should include the issue ID in a
docstring or comment.
* Tests should be added to the file that best matches the "thing" being
tested. For instance, a test on filters should live in filter.t.
* Class and method names should be descriptive of what they are testing.
Example: TestFilterOnReports
* Docstrings on Python tests are mandatory. The first line is used as the
title of the test.
* Extra information and details should go into multi-line docstrings or
comments.
* Python tests for bugs or features not yet fixed/implemented should be
decorated with: @unittest.skip("WaitingFor TW-xxxx")
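Putting these conventions together, a test for a not-yet-fixed issue might look
like this (TW-1234 is a hypothetical issue ID; imports and TAP runner
boilerplate as in the earlier sketch):

    class TestFilterParentheses(TestCase):
        def setUp(self):
            self.t = Task()

        @unittest.skip("WaitingFor TW-1234")
        def test_parenthesized_filter(self):
            """TW-1234: parenthesized filters are honored by reports

            Extra detail and background belong here, in the body of the
            docstring, below the one-line title.
            """
            self.t("add foo")
            code, out, err = self.t("( status:pending ) list")
            self.assertIn("foo", out)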
How to Submit a Test Change/Addition
------------------------------------
Mail it to support@taskwarrior.org, or attach it to an open bug.
TODO
----
For anyone looking for test-related tasks to take on, here are some suggestions:
* Take tw-285.t and improve it to test more (if not all) virtual tags, then
rename it to virtual-tags.t.
* Select a bug.*.t Perl test and convert it to Python using the template.
* Look at the latest todo.txt file format spec, and make sure that
import.todo.sh.t is thoroughly testing the current format.
* Select a feature.*.t Perl test, convert it to Python using the template,
then rename it to <feature>.t.
* Find and eliminate individual tests that do the same thing.