README
======

This is the task.git/test/README file, and contains notes about the Taskwarrior
test suite.

Running Tests
-------------

TL;DR: cd test && make && ./run_all && ./problems

All unit tests produce TAP (Test Anything Protocol) output, and are run by the
'run_all' test harness.

The 'run_all' script produces an 'all.log' file, which is the accumulated
output of all tests. Before executing 'run_all' you need to compile the C++
unit tests, by running 'make' in the 'test' directory.

The 'problems' script lists all the tests that fail, with a count of the
failing tests.

Any TAP harness may be used.

Note that adding the '--serial' option to ./run_all causes all tests to be
executed serially. By default, Python and C++ tests run in parallel, alongside
the Perl tests, which run serially (due to isolation limitations). Using
'--serial' will make for a slower test run.
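As a rough illustration of how failures are tallied from TAP output (this is a
hypothetical sketch, not the actual 'problems' script; the handling of '# TODO'
directives is an assumption based on the TAP format):

```python
import re

def count_tap_failures(tap_text):
    """Count 'not ok' result lines in TAP output, keyed by description.

    TAP result lines look like 'ok 1 - description' or
    'not ok 2 - description'.  Lines carrying a '# TODO' directive are
    expected failures, so they are not counted here.
    """
    failures = {}
    for line in tap_text.splitlines():
        match = re.match(r'not ok\s+\d*\s*-?\s*(.*)', line)
        if match and '# TODO' not in line:
            name = match.group(1) or '(unnamed)'
            failures[name] = failures.get(name, 0) + 1
    return failures

sample = """\
ok 1 - task add works
not ok 2 - task delete works
not ok 3 - known bug # TODO TW-1234
"""
print(count_tap_failures(sample))  # -> {'task delete works': 1}
```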
Architecture
------------

There are three varieties of tests:

* Perl unit tests that use Test::More and the JSON module. We are phasing
  these out, and will accept no new Perl tests. These tests are high level,
  and exercise Taskwarrior at the command line level.

* C++ unit tests that test low-level object interfaces. These are typically
  very fast tests, and are exhaustive in nature.

* Python unit tests that are at the highest level, exercising the command
  line, hooks and syncing. There is an example, 'template.t', that shows how
  to perform various high level tests.

All tests are named with the pattern '*.t', and any other forms are not run by
the test harness. Additionally, a test must be set executable (chmod +x) for
it to be run. Perl and Python tests that are not executable can still be run
manually, by launching them with 'perl test.t' or 'python test.t'. This also
allows us to keep tests submitted for bugs that are not scheduled to be fixed
in the upcoming release, without those failing tests preventing us from seeing
a 100% pass rate for the bugs we *have* fixed.
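The selection rule described above (files matching '*.t' that carry the
executable bit) can be sketched as follows; this is an illustration of the
rule, not the actual 'run_all' implementation:

```python
import os
import tempfile

def discover_tests(directory):
    """Return the tests the harness would run: regular files matching
    '*.t' that have the executable bit set."""
    tests = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if (name.endswith('.t')
                and os.path.isfile(path)
                and os.access(path, os.X_OK)):
            tests.append(name)
    return tests

# Demonstrate: only the executable '*.t' file is picked up.
with tempfile.TemporaryDirectory() as d:
    for name, executable in [('filter.t', True),
                             ('wip.t', False),
                             ('notes.txt', True)]:
        path = os.path.join(d, name)
        open(path, 'w').close()
        if executable:
            os.chmod(path, 0o755)
    print(discover_tests(d))  # -> ['filter.t']
```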
Goals
-----

The test suite is evolving, and becoming a better tool for determining whether
code is ready for release. These changes are shaped by the following goals:

* Migrate the test suite to Python and C++, eliminating all Perl. The Python
  test suite is more expressive and higher level. Migrating reduces
  dependencies.

* Increase test coverage by testing more features, more thoroughly.

* Write fewer bug regression tests. Over time, bug regression tests are less
  useful than feature tests, and more likely to contain overlapping coverage.

* The Python test suite provides test isolation, such that each test is run
  in a separate directory. This allows parallelization, which will improve
  as the Perl tests are eliminated.

* Eliminate obsolete tests, which are tests that have overlapping coverage.
  This means migrating bug-specific tests to feature tests.

* Categorize the tests, and restructure the directories.
What Makes a Good Test
----------------------

A good test ensures that a feature is functioning as expected, and contains
both positive and negative aspects; in other words, it looks for expected
behavior as well as the absence of unexpected behavior.
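As a schematic Python illustration (the helper function and data here are made
up for the example; a real test would exercise Taskwarrior itself), a test
covering both aspects might look like:

```python
import unittest

def filter_pending(tasks):
    # Stand-in for a feature under test: return only the tasks whose
    # status is 'pending'.  (Hypothetical helper, for illustration.)
    return [t for t in tasks if t['status'] == 'pending']

class TestFilterPending(unittest.TestCase):
    """Filtering returns pending tasks and nothing else"""

    def test_positive_and_negative(self):
        tasks = [{'id': 1, 'status': 'pending'},
                 {'id': 2, 'status': 'completed'}]
        result = [t['id'] for t in filter_pending(tasks)]
        # Positive aspect: the expected behavior is present.
        self.assertIn(1, result)
        # Negative aspect: the unexpected behavior is absent.
        self.assertNotIn(2, result)
```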
Conventions for writing a test
------------------------------

If you wish to contribute tests, please consider the following guidelines:

* Tests created for bugs or feature requests should (ideally) have an entry
  on https://bug.tasktools.org/ and should include the issue ID in a
  docstring or comment.

* Tests should be added to the file that best matches the "thing" being
  tested. For instance, a test on filters should live in filter.t.

* Class and method names should be descriptive of what they are testing.
  Example: TestFilterOnReports

* Docstrings on Python tests are mandatory. The first line is used as the
  title of the test.

* Extra information and details should go into multi-line docstrings or
  comments.

* Python tests for bugs or features not yet fixed/implemented should be
  decorated with: @unittest.skip("WaitingFor TW-xxxx")
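Putting these conventions together, a sketch of a conforming test might look
like this (the class name, issue ID TW-0000, and assertions are made up for
illustration):

```python
import unittest

class TestFilterOnReports(unittest.TestCase):
    """Filters applied to reports select the expected tasks"""

    def test_filter_selects_pending(self):
        """Filter on status:pending shows only pending tasks"""
        # See https://bug.tasktools.org/ TW-0000 (hypothetical issue ID).
        self.assertTrue(True)  # Placeholder for the real check.

    @unittest.skip("WaitingFor TW-0000")
    def test_not_yet_fixed(self):
        """Behavior awaiting a fix is skipped, not failed"""
        self.fail("exercises behavior that is not yet implemented")
```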
How to Submit a Test Change/Addition
------------------------------------

Mail it to support@taskwarrior.org, or attach it to an open bug.

TODO
----

For anyone looking for test-related tasks to take on, here are some
suggestions:

* Take tw-285.t and improve it to test more (if not all) virtual tags, then
  rename it to virtual-tags.t.

* Select a bug.*.t Perl test and convert it to Python using the template.

* Look at the latest todo.txt file format spec, and make sure that
  import.todo.sh.t is thoroughly testing the current format.

* Select a feature.*.t Perl test, convert it to Python using the template,
  then rename it to <feature>.t.

* Find and eliminate individual tests that do the same thing.

* Using /pattern/ with rc.regex:off still uses a regex.

* Import JSON validates no attributes at all. Create tests with bad data in
  all fields.

* DOM references are not validated. Make this painfully obvious with tests.

* Crazy dateformat values are not tested.

* All add-on scripts should work.

* Invalid UTF8 is not tested.

* Read-only data files are not tested.

* All the attribute modifiers need to be tested; only a few are.

* Aliases are not well tested, and fragile.