calcurse Test Suite
===================

This directory holds a couple of test cases and some helpers that simplify the
task of creating new tests. Test cases are intended to test a specific set of
behaviors and to avoid reintroduction of bugs that have already been fixed. The
idea is that a new test should be added for each bug report that is received
and for each bug that is fixed in the development branch.

Running tests
-------------

The easiest way to run the tests is to run `make check`. This prepares and
compiles all needed components and runs all test cases. A summary is displayed
when all tests have finished.

You can also run tests manually. Test cases are usually shell scripts or
binaries. To run an individual test, just invoke the corresponding executable.
Note that some tests require the `CALCURSE` and `DATA_DIR` environment
variables to be set, where `CALCURSE` should point to a valid calcurse binary
and `DATA_DIR` should point to a valid data directory. We usually use the data
directory `data/`, which is contained in the `test/` directory, for test cases:

    $ CALCURSE=../src/calcurse DATA_DIR=data/ ./next-001.sh
    Running ./next-001.sh... ok

Passing another data directory is likely to cause failures, since many tests
rely on the data shipped in the `data/` directory of the test suite:

    $ CALCURSE=../src/calcurse DATA_DIR="$HOME/.calcurse/" ./next-001.sh
    Running ./next-001.sh... FAIL

Writing tests
-------------

Writing test cases is as simple as creating a new shell script and adding some
test code. Success and failure are reported via the exit status: `0` indicates
success, any non-zero value indicates failure (the usual exit code semantics on
POSIX systems).
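
A minimal test case might look like this (a sketch only; the version check is
illustrative and not copied from an existing test):

    #!/bin/sh
    # Minimal illustrative test: succeed if calcurse prints its version
    # banner, fail otherwise. CALCURSE must point to a calcurse binary.
    "$CALCURSE" --version >/dev/null || exit 1
    exit 0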

To enable a test, just add it to the `TESTS` variable in `test/Makefile.am`. If
your test case is written in a compiled language, you may need to add some
build rules as well. Please note that only POSIX-compatible shell scripts and C
are accepted in mainline, so please avoid other languages if you want your test
case to be integrated upstream.
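
The `TESTS` variable is a plain Automake test list, so registering a new script
might look like this (the neighboring entry is illustrative, not the actual
contents of `test/Makefile.am`):

    TESTS = \
      next-001.sh \
      your-test-001.sh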

If your test case invokes the calcurse binary, please also read the following
sections.

The `run-test` helper
---------------------

The `run-test` helper is a simple C program that makes writing script-based
test cases much easier. Tests for the calcurse command line interface usually
invoke the calcurse binary with some special command line options and compare
the output with a hardcoded set of expected results. Unfortunately, comparing
the output of two commands is not exactly easy in POSIX shell: this is where
the `run-test` helper comes in handy.

The `run-test` helper takes one or more executable files as parameters. It
invokes each of the specified programs twice: once passing `actual` as a
command line parameter and once passing `expected`. It then compares the
output of the two invocations. If the `actual` and `expected` outputs differ
for any of the programs passed to `run-test`, it displays `FAIL` and exits
with a non-zero exit status. It returns success otherwise.

Here is a simple example of how to use `run-test`:

    #!/bin/sh

    if [ "$1" = 'actual' ]; then
      echo 'obrocodobro' | sed 's/o/a/g'
    elif [ "$1" = 'expected' ]; then
      echo 'abracadabra'
    else
      ./run-test "$0"
    fi

If the script is run without any parameters, it simply invokes `run-test`,
passing itself as a command line parameter (see the `else` branch). `run-test`
then reruns the script, passing `actual` as the first parameter, which starts
the actual test (see the `if` branch). It reruns the script a second time,
passing `expected` as the first parameter, which makes the script print the
expected result for this test (see the `elif` branch). Finally, `run-test`
compares both outputs, prints a message indicating whether they are equal and
sets the exit status accordingly. This exit status is then propagated by the
original instance of the test script, since `./run-test "$0"` is the last
command that is run when the script is invoked without any parameters.

You should stick to this strategy whenever you want to check the output of a
non-interactive calcurse instance in a test. See the following tests for more
examples:

* `todo-001.sh`
* `todo-002.sh`
* `todo-003.sh`

Using libfaketime
-----------------

Some tests might require faking the current date and time. We currently use
libfaketime to achieve this. See the following files for examples:

* `appointment-001.sh`
* `next-001.sh`
* `range-001.sh`
* `range-002.sh`
* `range-003.sh`

NOTE: Please do not forget to check for the presence of libfaketime at the
beginning of your test. Otherwise, your test is likely to fail on systems that
are not supported by libfaketime. One possible pattern for such a check is
sketched below.
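
The sketch assumes the `faketime` command line wrapper shipped with
libfaketime; it is illustrative and not copied from the tests listed above:

    #!/bin/sh
    # Bail out early (without reporting failure) if the faketime wrapper
    # from libfaketime is not available on this system.
    command -v faketime >/dev/null 2>&1 || exit 0

    # Run calcurse with a fixed date so its output is reproducible.
    faketime '2011-02-25 00:00:00' "$CALCURSE" --read-only -D "$DATA_DIR" -a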

Additional notes
----------------

Most tests that invoke the calcurse binary pass the `--read-only` parameter to
make sure the data directory is not modified by calcurse, preventing unexpected
side effects. Please follow this guideline if you plan to submit your patch
upstream.
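
As an illustration, a test in the `run-test` style might combine `--read-only`
with the environment variables described above (the `-t` todo query and the
expected output are placeholders):

    #!/bin/sh

    if [ "$1" = 'actual' ]; then
      "$CALCURSE" --read-only -D "$DATA_DIR" -t
    elif [ "$1" = 'expected' ]; then
      echo '...'
    else
      ./run-test "$0"
    fi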