README
======

This is the task.git/test/README file, and contains notes about the
Taskwarrior test suite.


Running Tests
-------------

Do this to run all tests:

  $ cd test && make && ./run_all && ./problems

All unit tests produce TAP (Test Anything Protocol) output, and are run by
the 'run_all' test harness. The 'run_all' script produces an 'all.log' file,
which is the accumulated output of all tests. Before executing 'run_all' you
need to compile the C++ unit tests, by running 'make' in the 'test'
directory.

The script 'problems' will list all the tests that fail, with a count of the
failing tests, once you have run all the tests and produced an 'all.log'
file.

Any TAP harness may be used.

Note that when the '--serial' option is passed to ./run_all, all tests are
executed serially. The default runs Python, C++ and Bash tests in parallel.
Using '--serial' will make for a slower test run.


Architecture
------------

There are three varieties of tests:

  * C++ unit tests that test low-level object interfaces. These are
    typically very fast tests, and are exhaustive in nature.

  * Python unit tests that are at the highest level, exercising the command
    line, hooks and syncing. There is an example, 'template.t', that shows
    how to perform various high level tests.

  * Bash unit tests, one test per file, using the bash_tap_tw.sh script.
    These tests are small, quick tests, not intended to be permanent.

All tests are named with the pattern '*.t', and any other forms are not run
by the test harness. Additionally, a test must be set executable (chmod +x)
for it to be run. In the case of Python tests one can still run a
non-executable test manually, by launching it with 'python test.t' or simply
'./test.t'.

Marking a test as non-executable also allows us to keep tests submitted for
bugs that are not scheduled to be fixed in the upcoming release, without the
failing tests preventing us from seeing a 100% pass rate for the bugs we
*have* fixed.


Goals
-----

The test suite is evolving, and becoming a better tool for determining
whether code is ready for release. The goals that shape these changes are:

  * Increase test coverage by testing more features, more thoroughly. The
    test coverage level is (as of 2016-07-24) at 86.5%.

  * Write fewer bug regression tests. Over time, bug regression tests are
    less useful than feature tests, and more likely to contain overlapping
    coverage.

  * Eliminate obsolete tests, which are tests that have overlapping
    coverage. There is simply no point in testing a feature twice, in the
    same manner.


What Makes a Good Test
----------------------

A good test ensures that a feature is functioning as expected, and contains
both positive and negative aspects: it looks for expected behavior as well
as for the absence of unexpected behavior.


Conventions for writing a test
------------------------------

If you wish to contribute tests, please consider the following guidelines:

  * For a new bug, an accompanying test is very helpful. Suppose you write
    up a bug named TW-1234; the accompanying test would be a script named
    tw-1234.t, based on the template.t example (see the sketch after this
    list). Over time, we will migrate the tests in tw-1234.t into a
    feature-specific test script, such as filter.t or export.t, whichever
    is appropriate.

  * Tests created after bugs or feature requests should (ideally) have an
    entry on https://github.com/GothenburgBitFactory/taskwarrior/issues and
    should include the issue ID in a docstring or comment.

  * Class and method names should be descriptive of what they are testing.
    Example: TestFilterOnReports

  * Docstrings on Python tests are mandatory. The first line is used as the
    title of the test. Include the issue ID - there are many examples of
    this.

  * Extra information and details should go into multi-line docstrings or
    comments.

  * Python tests for bugs or features not yet fixed/implemented should be
    decorated with: @unittest.skip("WaitingFor TW-xxxx"). We would rather
    have a live test that is skipped, than no test.
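To make these conventions concrete, here is a minimal sketch of such a
script. It is modeled on template.t; the helper names it uses (the basetest
module with its Task and TestCase classes, and simpletap's TAPTestRunner)
are taken from the existing suite, and the issue ID TW-1234 and the tested
behavior are purely illustrative, so treat template.t as the authoritative
starting point.

  #!/usr/bin/env python
  # Illustrative sketch only - see template.t for the canonical layout.
  import os
  import sys
  import unittest

  # Make sure the 'test' directory is on the path so basetest is importable.
  sys.path.append(os.path.dirname(os.path.abspath(__file__)))
  from basetest import Task, TestCase


  class TestFilterOnReports(TestCase):
      def setUp(self):
          """Executed before each test in the class."""
          self.t = Task()

      def test_list_filters_on_description(self):
          """TW-1234: 'task list <word>' filters tasks by description"""
          self.t("add foo")
          self.t("add bar")
          code, out, err = self.t("list foo")
          # Positive and negative aspects of the same feature.
          self.assertIn("foo", out)
          self.assertNotIn("bar", out)

      @unittest.skip("WaitingFor TW-1234")
      def test_not_yet_implemented(self):
          """TW-1234: placeholder for behavior that is not yet implemented"""


  if __name__ == "__main__":
      from simpletap import TAPTestRunner
      unittest.main(testRunner=TAPTestRunner())

Once the script follows the '*.t' naming pattern and is made executable,
run_all will pick it up; it can also be run directly with './tw-1234.t'.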
How to Submit a Test Change/Addition
------------------------------------

Mail it to support@gothenburgbitfactory.org, or attach it to an open bug.


Wisdom
------

Here are some guidelines that may help:

  * If there are any lexer.t tests failing, ignore all the others and fix
    these first. They are fundamental and affect everything else. One Lexer
    failure can cause 30 symptomatic failures, and addressing any of those
    is wrong.

  * If any of the C++ tests fail, fix them next, for the same reason as
    above.

  * If you are about to fix a bug, and no tests are failing, add tests that
    fail in a script named tw-XXXX.t. Later, someone will incorporate that
    test script into higher-level feature tests.

  * If the command line parser is not working, start by blaming the Lexer.

  * While the lowest level (C++) tests should be exhaustive, higher level
    tests should not do the same by iterating over the entire problem
    space. That is a waste of time.

  * If you find that you are combining two features into one test, you are
    probably doing it wrong.

  * If you add a feature, add a test to prove it works, and another test to
    prove it does not simultaneously generate errors. Furthermore, test
    that appropriate errors are reported when the feature is disabled or
    command line arguments are missing.


TODO
----

For anyone looking for test-related tasks to take on, here are some
suggestions:

  * Find and eliminate duplicate tests.

  * Using <attribute>.startswith:<value> with rc.regex:off still uses regex
    (see the sketch after this list).

  * Crazy dateformat values are not tested.

  * Invalid UTF8 is not tested.

  * All the attribute modifiers need to be tested; only a few are.

  * Aliases are not well tested, and fragile.
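For anyone picking up one of these TODO items, a small Bash test is often
the quickest starting point. The following is only a rough sketch of how the
startswith/rc.regex item might be approached; it assumes the convention used
by the existing Bash tests of sourcing bash_tap_tw.sh near the top, after
which each command's exit status counts as one pass/fail check, so consult a
current Bash *.t script for the authoritative form.

  #!/usr/bin/env bash
  # Rough sketch, not a finished test. Assumes the bash_tap_tw.sh convention
  # used by the existing Bash tests: source it, then every following command
  # is one pass/fail check based on its exit status.
  . bash_tap_tw.sh

  task add project:Home fix the door
  task add project:Work write report

  # Positive aspect: the Home task appears with a .startswith filter.
  task rc.regex:off list project.startswith:Ho | grep 'fix the door'

  # Negative aspect: the Work task does not appear with the same filter.
  ! task rc.regex:off list project.startswith:Ho | grep 'write report'

  # To expose the rc.regex:off problem noted above, the filter value would
  # need to contain regex metacharacters, so that literal and regex matching
  # give different results.

As with the Python tests, such a script must follow the '*.t' naming pattern
and be executable before run_all will run it.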