Documentation: Updated for 2.4.5/2.5.0

Paul Beckingham 2015-07-24 18:49:21 -04:00
parent 564a84d603
commit 61a9eec512
3 changed files with 25 additions and 75 deletions


@@ -10,7 +10,7 @@ How to Build Taskwarrior
Obtain and build code:
$ git clone https://git.tasktools.org/scm/tm/task.git task.git
$ cd task.git
-$ git checkout 2.4.5 # Latest dev branch
+$ git checkout 2.4.5 # Latest dev branch
$ cmake -DCMAKE_BUILD_TYPE=debug . # debug or release. Default: neither.
$ make VERBOSE=1 # Shows details
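For example, given the two build types mentioned above, an optimized build
would be configured with:
$ cmake -DCMAKE_BUILD_TYPE=release . # then make, as before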
@@ -18,7 +18,6 @@ How to Build Taskwarrior
$ cd tests
$ make VERBOSE=1 # Shows details
$ ./run_all # Runs all tests silently > all.log
-# Install 'vramsteg' for blinkenlights
$ ./problems # Enumerate test failures in all.log
Note that any development should be performed using a git clone, and the
@@ -72,8 +71,7 @@ General Statement
all, because they not only improve the quality of the code, but prevent
future regressions, therefore maintaining quality of subsequent releases.
Plus, broken tests are a great motivator for us to fix the causal defect.
-You'll need Python skills, as we are migrating from Perl to Python for our
-test suite.
+You'll need Python skills.
- Add a feature. Well, let's be very clear about this: adding a feature is
not usually well-received, and if you add a feature and send a patch, it
@@ -102,21 +100,9 @@ General Statement
Next are some specific areas that need attention.
-Deprecated Code
-This is code that is going to be phased out soon, and therefore is not worth
-fixing or documenting. Don't waste your time.
-- Nag feature.
-- Attribute modifiers.
New Code Needs
This is code that needs to be written.
-- Need export_viz.* script. Any language. This would have value as an example
-  or template script serving as a starting-point for anyone who needed this
-  format.
-- Need new export_xxx.* scripts - the more the better. Any language.
- Need an external script that can locate and correct duplicate UUIDs in the
data file, as found by 'task diag'. This should check to see if there is
a suitable UUID generator installed. This should also be careful to
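As a hedged starting point for the detection half of that duplicate-UUID
task, the Python sketch below scans the data files for repeated uuid fields.
The ~/.task location and the uuid:"..." field syntax are assumptions about
the default setup; the correction step, and the check for a suitable UUID
generator, are left to the real script:

  #!/usr/bin/env python
  # Illustrative sketch only: find duplicate task UUIDs in the data files.
  # Assumes the default data location (~/.task) and that each task line
  # carries a field of the form uuid:"xxxxxxxx-xxxx-...".
  import os
  import re
  from collections import Counter

  DATA_DIR = os.path.expanduser("~/.task")   # rc.data.location by default
  UUID_RE = re.compile(r'uuid:"([0-9a-fA-F-]{36})"')

  counts = Counter()
  for name in ("pending.data", "completed.data"):
      path = os.path.join(DATA_DIR, name)
      if not os.path.exists(path):
          continue
      with open(path) as fh:
          for line in fh:
              match = UUID_RE.search(line)
              if match:
                  counts[match.group(1)] += 1

  # Correcting a duplicate means generating a fresh UUID, which the real
  # script should only attempt after verifying a generator is available.
  for uuid, count in sorted(counts.items()):
      if count > 1:
          print("duplicate uuid %s occurs %d times" % (uuid, count))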
@@ -148,28 +134,17 @@ Unit Tests Needed
these kinds of tests be extensive and thorough, because the software depends
on this code the most.
-The tests are mainly written in Perl, and all use TAP. We are replacing these
-with Python equivalents, so we are now only accepting new tests that use the
-Python-test framework.
+The tests are written in Python, and all use TAP.
Tests needed:
- Take a look at the bug database (https://bug.tasktools.org) and notice that
many issues, open and closed, have the "needsTest" label. These are things
that we would like to see in the test suite, as regression tests.
-- The basic.t unit tests are a misnomer, and should be either removed or
-  renamed. We have long talked of 'basic functionality' that includes add,
-  delete, done, and list commands. We need unit tests that prove that basic
-  functionality is working, and the file containing them should be called
-  basic.t.
- Test propagation of modifications to recurring tasks.
- Test regex support.
-- Need unit tests for each bug in the issue list that is marked with the
-  'needsTest' label.
-Note that running the unit tests currently requires the Perl JSON module to
-be installed. This will change soon.
Note that all new unit tests should follow the test/template.t standard.
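For orientation, here is a minimal sketch of the shape such a Python TAP test
takes. The 'basetest' and 'simpletap' helper names are assumptions based on
the framework described in this file; treat test/template.t as the
authoritative reference:

  #!/usr/bin/env python
  # Minimal sketch of a Python TAP test, in the shape of test/template.t.
  import sys
  import os
  import unittest

  # Ensure the test framework helpers beside this file are importable.
  sys.path.append(os.path.dirname(os.path.abspath(__file__)))
  from basetest import Task, TestCase


  class TestExample(TestCase):
      def setUp(self):
          self.t = Task()  # isolated Taskwarrior instance per test

      def test_add_then_list(self):
          """Added task shows up in the list report"""
          self.t("add foo")
          code, out, err = self.t("list")
          self.assertIn("foo", out)


  if __name__ == "__main__":
      from simpletap import TAPTestRunner
      unittest.main(testRunner=TAPTestRunner())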
Work in Progress
@@ -190,4 +165,4 @@ Current Codebase Condition
---
-2015-07-11 Updated for 2.4.5
+2015-07-24 Updated for 2.4.5


@@ -24,7 +24,7 @@ Command Line Parsing
determines whether subsequent arguments are interpreted as part of a filter or
set of modifications.
-The CLI object is fed command line arguments, then through a succession of
+The CLI2 object is fed command line arguments, then through a succession of
calls builds and annotates a parse tree. To help with this, the Lexer is
used to break up strings into tokens.
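For illustration only, the following Python sketch mimics the kind of token
classification the Lexer performs. It is not the actual C++ Lexer interface,
and the token categories are invented for the example:

  # Purely illustrative: classify command line arguments as typed tokens.
  # NOT the real Lexer API; the categories below are assumptions.
  import re

  TOKEN_TYPES = [
      ("pair",   re.compile(r"^[A-Za-z_][A-Za-z0-9_.]*:\S*$")),  # due:tomorrow
      ("tag",    re.compile(r"^[+-][A-Za-z_]\w*$")),             # +home, -work
      ("number", re.compile(r"^\d+$")),
      ("word",   re.compile(r"^\S+$")),
  ]

  def lex(args):
      """Classify each command line argument as a (type, text) token."""
      tokens = []
      for arg in args:
          for name, pattern in TOKEN_TYPES:
              if pattern.match(arg):
                  tokens.append((name, arg))
                  break
      return tokens

  # lex(["add", "due:tomorrow", "+home", "Pay", "bills"]) returns:
  # [('word', 'add'), ('pair', 'due:tomorrow'), ('tag', '+home'),
  #  ('word', 'Pay'), ('word', 'bills')]

A parser can then hang such typed tokens onto a parse tree and annotate them,
which is roughly the division of labor described above.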


@@ -15,17 +15,16 @@ All unit tests produce TAP (Test Anything Protocol) output, and are run by the
The 'run_all' script produces an 'all.log' file which is the accumulated output
of all tests. Before executing 'run_all' you need to compile the C++ unit
-tests, by running 'make' on the 'test' directory.
+tests, by running 'make' in the 'test' directory.
The script 'problems' will list all the tests that fail, with a count of the
-failing tests.
+failing tests, once you have run all the tests and produced an 'all.log' file.
Any TAP harness may be used.
Note that by adding the '--serial' option to ./run_all, all tests are executed
serially.
-The default runs Python and C++ tests in parallel, alongside the Perl tests
-that run serially (due to isolation limitations).
-Using '--serial' will make for a slower test run.
+The default runs Python, C++ and Bash tests in parallel. Using '--serial' will
+make for a slower test run.
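Putting the above together, a typical full test run from the 'test' directory
is:
$ make # compile the C++ unit tests
$ ./run_all # accumulated output lands in all.log
$ ./problems # list the failing tests found in all.log
Adding '--serial' to the ./run_all invocation gives the slower, serial variant.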
Architecture
@@ -33,10 +32,6 @@ Architecture
There are three varieties of tests:
-* Perl unit tests that use Test::More module. We are phasing these out, and
-  will accept no new Perl tests. These tests are high level and exercise
-  Taskwarrior at the command line level.
* C++ unit tests that test low-level object interfaces. These are typically
very fast tests, and are exhaustive in nature.
@@ -44,13 +39,16 @@ There are three varieties of tests:
line, hooks and syncing. There is an example, 'template.t', that shows how
to perform various high level tests.
+* Bash unit tests, one test per file, using the bash_tap_tw.sh script. These
+  tests are small, quick tests, not intended to be permanent.
All tests are named with the pattern '*.t', and any other forms are not run by
the test harness. Additionally a test must be set executable (chmod +x) for it
-to be run. In the case of Perl and Python tests one can still run them manually
-by launching them with 'perl/python test.t'. It also allows us to keep tests
-submitted for bugs that are not scheduled to be fixed in the upcoming release,
-and we don't want the failing tests to prevent us from seeing 100% pass rate
-for the bugs we *have* fixed.
+to be run. In the case of Python tests one can still run them manually by
+launching them with 'python test.t' or simply './test.t'. It also allows us to
+keep tests submitted for bugs that are not scheduled to be fixed in the
+upcoming release, and we don't want the failing tests to prevent us from seeing
+100% pass rate for the bugs we *have* fixed.
Goals
@@ -60,24 +58,15 @@ The test suite is evolving, and becoming a better tool for determining whether
code is ready for release. There are goals that shape these changes, and they
are:
-* Migrate test suite to Python and C++, eliminating all Perl. The Python
-  test suite is more expressive and high level. Migrating reduces
-  dependencies.
-* Increase test coverage by testing more features, more thoroughly.
+* Increase test coverage by testing more features, more thoroughly. The test
+  coverage level is (as of 2015-07-24) at 86.5%.
* Write fewer bug regression tests. Over time, bug regression tests are less
useful than feature tests, and more likely to contain overlapping coverage.
-* The Python test suite provides test isolation, such that each test is run
-  in a separate directory. This allows parallelization, which will improve
-  as the Perl tests are eliminated.
* Eliminate obsolete tests, which are tests that have overlapping coverage.
This means migrate bug-specific tests to feature tests.
-* Categorize the tests, restructure the directories.
What Makes a Good Test
----------------------
@@ -109,7 +98,8 @@ If you wish to contribute tests, please consider the following guidelines:
comments.
* Python tests for bugs or features not yet fixed/implemented should be
-  decorated with: @unittest.skip("WaitingFor TW-xxxx")
+  decorated with: @unittest.skip("WaitingFor TW-xxxx"). We would rather have
+  a live test that is skipped, than no test.
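In a test file that decoration looks like the following sketch (the class and
method names are invented for illustration, and 'TW-xxxx' stands in for the
real issue id):

  import unittest

  class TestPendingFeature(unittest.TestCase):
      # Skipped, not deleted: the test stays live in the suite and starts
      # running (and counting) as soon as the skip decoration is removed.
      @unittest.skip("WaitingFor TW-xxxx")
      def test_not_yet_implemented(self):
          self.fail("remove the skip once TW-xxxx is fixed")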
How to Submit a Test Change/Addition
@@ -123,34 +113,19 @@ TODO
For anyone looking for test-related tasks to take on, here are some suggestions:
+* Take tw-285.t and improve it to test more (if not all) virtual tags, then
+  rename it virtual-tags.t.
+* Find and eliminate duplicate tests.
-* Select a bug.*.t Perl test and convert it to Python using the template.
* Using <attribute>.startswith:<value> with rc.regex:off still uses regex.
* Look at the latest todo.txt file format spec, and make sure that
import.todo.sh.t is thoroughly testing the current format.
-* Select a feature.*.t Perl test, convert it to Python using the template,
-  then rename it to <feature>.t
-* Find and eliminate individuals test that do the same thing.
* Using /pattern/ with rc.regex:off still uses regex.
-* Import JSON validates absolutely no attribute. Create tests with bad data
-  in all fields.
+* Import JSON validates absolutely no attributes. Create tests with bad data
+  in all fields, to exercise validation.
-* DOM references are not validated, make this painfully obvious with tests
-* Crazy dateformat values are not tested
-* All add-on scripts should work
-* Invalid UTF8 is not tested
-* read-only data files are not tested
-* all the attribute modifiers need to be tested, only a few are
-* aliases are not well tested, and fragile