author    Lukas Fleischer <calcurse@cryptocrack.de>  2012-04-13 14:48:14 +0200
committer Lukas Fleischer <calcurse@cryptocrack.de>  2012-04-17 11:10:11 +0200
commit    45f2b7628555bb8668649da4188174e95b76a737 (patch)
tree      e322a56ab4e07c6b1a91f92dccb95dcef8051515
parent    ec1227607949a556a44d177b66a7b2d50f6cbc4d (diff)
test/: Add a README
Signed-off-by: Lukas Fleischer <calcurse@cryptocrack.de>
Signed-off-by: Frederic Culot <frederic@culot.org>
Signed-off-by: Erik Saule <esaule@bmi.osu.edu>
-rw-r--r--  test/README | 121
1 file changed, 121 insertions(+), 0 deletions(-)
diff --git a/test/README b/test/README
new file mode 100644
index 0000000..f0fd06a
--- /dev/null
+++ b/test/README
@@ -0,0 +1,121 @@
+calcurse Test Suite
+===================
+
+This directory holds a couple of test cases, plus some helpers that simplify
+the task of creating new tests. Test cases are intended to test a specific set
+of behaviors and to avoid reintroducing bugs that have already been fixed. The
+idea is that a new test should be added for each bug report that is received
+and for each bug that is fixed in the development branch.
+
+Running tests
+-------------
+
+The easiest way to run the tests is to invoke `make check`, which prepares and
+compiles all needed components and runs all test cases. A summary is displayed
+when all tests have finished.
+
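+For example, after configuring the build, the whole suite can be run from the
+top-level source directory:
+
+    $ make check
+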
+You can also run tests manually. Test cases are usually shell scripts or
+binaries. To run an individual test, just invoke the corresponding executable.
+Note that some tests require the `CALCURSE` and `DATA_DIR` environment
+variables to be set, where `CALCURSE` should point to a valid calcurse binary
+and `DATA_DIR` should point to a valid data directory. We usually use the data
+directory `data/`, which is contained in the `test/` directory, for test cases:
+
+ $ CALCURSE=../src/calcurse DATA_DIR=data/ ./next-001.sh
+ Running ./next-001.sh... ok
+
+Passing another data directory might cause failures, since many tests are
+tailored to the contents of the `data/` directory shipped with the test suite:
+
+ $ CALCURSE=../src/calcurse DATA_DIR="$HOME/.calcurse/" ./next-001.sh
+ Running ./next-001.sh... FAIL
+
+Writing tests
+-------------
+
+Writing a test case is as simple as creating a new shell script and adding
+some test code. Success and failure are reported through the exit status: an
+exit status of `0` indicates success, while a non-zero value indicates failure
+(matching the usual exit code semantics on POSIX systems).
+
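+As a minimal, purely illustrative sketch, a test script might look like this
+(it merely checks that the `CALCURSE` variable points to an executable):
+
+    #!/bin/sh
+
+    # Fail if the calcurse binary is missing or not executable.
+    [ -x "$CALCURSE" ] || exit 1
+
+    # All checks passed: report success.
+    exit 0
+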
+To enable a test, just add it to the `TESTS` variable in `test/Makefile.am`.
+If your test case is written in a compiled language such as C, you may need to
+add some compilation directives as well. Please note that we only accept
+POSIX-compatible shell scripts and C code in mainline, so please avoid other
+languages if you want your test case to be integrated upstream.
+
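+For illustration, enabling a hypothetical new script in `test/Makefile.am`
+might look like this (the actual variable contents differ):
+
+    TESTS = \
+        next-001.sh \
+        my-new-test.sh
+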
+If your test case invokes the calcurse binary, please also read the following
+sections.
+
+The `run-test` helper
+---------------------
+
+The `run-test` helper is a simple C program that makes writing script-based
+test cases much easier. Tests for the calcurse command line interface usually
+invoke the calcurse binary with some special command line options and compare
+the output with a hardcoded set of expected results. Unfortunately, comparing
+the output of two commands is not exactly easy in POSIX shell: this is where
+the `run-test` helper comes in handy.
+
+The `run-test` helper takes one or more executable files as parameters and
+invokes each of the specified scripts twice: once with `actual` as a command
+line parameter and once with `expected`. It then compares the output of the
+two invocations. If the `actual` and `expected` outputs differ for any of the
+programs passed to `run-test`, it displays `FAIL` and exits with a non-zero
+exit status; otherwise, it returns success.
+
+Here is a simple example of how to use `run-test`:
+
+ #!/bin/sh
+
+    if [ "$1" = 'actual' ]; then
+      echo 'obrocodobro' | sed 's/o/a/g'
+    elif [ "$1" = 'expected' ]; then
+      echo 'abracadabra'
+    else
+      ./run-test "$0"
+    fi
+
+If the script is run without any parameters, it simply invokes `run-test`,
+passing itself as a command line parameter (see the `else` branch). `run-test`
+then reruns the script, passing `actual` as the first parameter, which starts
+the actual test (see the `if` branch). It reruns the script a second time,
+passing `expected` as the first parameter, which makes the script print the
+expected result for this test (see the `elif` branch). Finally, `run-test`
+compares both outputs, prints a message indicating whether they are equal, and
+sets the exit status accordingly. This exit status is passed on to the
+original instance of the test script and returned from there, since
+`./run-test "$0"` is the last command run when the script is invoked without
+any parameters.
+
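+Assuming the script above is saved as `example.sh` (an arbitrary name) in the
+`test/` directory and made executable, a successful run looks like this:
+
+    $ ./example.sh
+    Running ./example.sh... ok
+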
+You should stick to this strategy whenever you want to check the output of a
+non-interactive calcurse instance in a test. Check the following tests for
+some more examples; a sketch of such a calcurse-based test follows the list:
+
+* `todo-001.sh`
+* `todo-002.sh`
+* `todo-003.sh`
+
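+Building on the pattern above, a calcurse-based test might look like the
+following sketch (the options passed to calcurse and the expected-output file
+name are illustrative, not taken from the actual test suite):
+
+    #!/bin/sh
+
+    if [ "$1" = 'actual' ]; then
+      # Print the todo list from the test data directory.
+      "$CALCURSE" --read-only -D "$DATA_DIR" -t
+    elif [ "$1" = 'expected' ]; then
+      # Hypothetical file holding the expected output.
+      cat data/todo-expected.txt
+    else
+      ./run-test "$0"
+    fi
+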
+Using libfaketime
+-----------------
+
+Some tests might require faking the current date and time. We currently use
+libfaketime to achieve this. Check the following files for examples:
+
+* `appointment-001.sh`
+* `next-001.sh`
+* `range-001.sh`
+* `range-002.sh`
+* `range-003.sh`
+
+NOTE: Please do not forget to check for the presence of libfaketime at the
+beginning of your test. Otherwise, your test is likely to fail on systems
+that are not supported by libfaketime.
+
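+One possible way to do this uses the `faketime` wrapper program that ships
+with libfaketime (the fake date and the calcurse options are illustrative):
+
+    #!/bin/sh
+
+    # Skip (report success) if the faketime wrapper is not available.
+    command -v faketime >/dev/null 2>&1 || exit 0
+
+    # Run calcurse on a fixed, fake date and print that day's appointments.
+    faketime '2012-04-17 00:00:00' "$CALCURSE" --read-only -D "$DATA_DIR" -a
+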
+Additional notes
+----------------
+
+Most tests that invoke the calcurse binary pass the `--read-only` parameter
+to make sure calcurse does not modify the data directory, preventing
+unexpected side effects. Please follow this guideline if you plan to submit
+your patch upstream.
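+
+As an illustrative example, a read-only invocation (with the environment
+variables from the "Running tests" section set) might look like this:
+
+    $ "$CALCURSE" --read-only -D "$DATA_DIR" -t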