Testing
As the smartparens project evolved, we've tried several testing strategies, from the initial "freestyle" tests through some formalization, back to freestyle, and back to formalization (sigh). I will describe the current practices and then add a few remarks about the "old-style" suites which are still present (and run on the CI) in the repository, but adding to them is discouraged.
Tests are organized in the `test` directory. Tests are added to an "appropriate" thematic test file. When picking the right file, use these simple heuristics:
- each broader feature has its own file: parser, insertion, wrapping, commands...
- tests for specific language features go into their own file (`smartparens-<language>-test.el`)
Each test file must end with the suffix `-test.el`. They are regular elisp files. Tests are defined using `ert-deftest`. All `ert` tests start with the prefix `sp-test`. All language-specific tests add the language name afterwards, for example `sp-test-python-`.
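For instance, a hypothetical language-specific test following these conventions (the name and body below are made up for illustration, not taken from the suite) would look like this:

```elisp
;; Illustrative only: the name follows the sp-test-<language>- convention
;; and such a test would live in smartparens-python-test.el.
(ert-deftest sp-test-python-example-naming-convention ()
  "Hypothetical test showing the naming convention; the body is trivial."
  (should t))
```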
We try to write tests with as little magic as possible, only abstracting the most annoying repetitions. So instead of adding endless layers of helpers, try to write the tests as plainly as possible to make it easy to throw them away and update or replace them as the conditions change.
There are two philosophical classes of tests: tests of specific behaviour and data-driven tests. The former are usually employed to test features for specific language support, the latter for the internals such as the parser, navigation commands and so on.
Behavioural tests should test one specific behaviour; for examples, see `smartparens-python-test.el`. The name of the test should reflect what is being tested.
Internals tests are usually data-driven. These include tests for the various parsers, for pair insertion and wrapping.
Most commands are tested using the `sp-test-command` macro in the `smartparens-commands-test.el` file. This macro wraps the common pattern of:
- insert a string,
- execute a command,
- compare with the result string.
These tests are data-driven---we test each command by trying lots of input-output scenarios to make sure it works as expected.
Language setup and overriding of prefix arguments are also supported. It is best to read the existing examples to see how it works.
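As a rough sketch only (the exact argument structure is defined by the macro itself, so copy an existing call from `smartparens-commands-test.el` rather than this illustration), a data-driven entry is essentially a command name followed by groups of input/expected-output string pairs, with `|` marking point:

```elisp
;; A hedged sketch of a data-driven command test; the grouping shape is
;; illustrative, check the real calls in smartparens-commands-test.el.
;; Each pair is (input expected-output), with "|" marking point.
(sp-test-command sp-forward-sexp
  ((nil
    ("|foo bar" "foo| bar")
    ("(|foo bar)" "(foo| bar)"))))
```

The appeal of this style is that adding coverage for a new edge case usually comes down to adding a single input/output line.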
Finer support for specific languages usually gets one test per feature. If we change how the `'` pair behaves in Python, we add a test for each added or changed behaviour. Each test has a representative name (within reason) to help us locate the error.
All behavioural tests are usually best written using the `sp-test-with-temp-buffer` helper macro. It takes two required arguments: `initial`, a string to be inserted into a new temporary buffer, and `initform`, a form executed before the text is inserted. Then follow arbitrary forms.
There is special syntax available for the `initial` string:
- the point is set to the first occurrence of `|`, this character is then removed
- the mark is set to the first occurrence of `M`, this character is then removed
Tests are always run with `case-fold-search` set to `nil`. Before `initform` is executed, the input method is turned off. After `initform` is executed, `smartparens-mode` is turned on, the `initial` text is inserted, and point and mark are set. Then the rest of the code is executed.
Inside the body you can execute any code you wish with at least one assertion (see `should` from `ert`).
A special version `sp-test-with-temp-elisp-buffer` is provided which automatically sets up `emacs-lisp-mode`.
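For example, a minimal behavioural test using the elisp variant might look like the following. This is a sketch, not a test from the suite, and it assumes the elisp variant takes only the `initial` string followed by the body forms; `sp-forward-sexp` is a real smartparens command:

```elisp
;; A minimal sketch of a behavioural test.  The "|" in the initial string
;; marks where point starts and is removed before the body runs; an "M"
;; would set the mark the same way.
(ert-deftest sp-test-example-forward-sexp ()
  "Moving over one sexp from inside a list lands point after \"bar\"."
  (sp-test-with-temp-elisp-buffer "(foo |bar baz)"
    (sp-forward-sexp)
    (should (equal (buffer-substring-no-properties (point-min) (point))
                   "(foo bar"))))
```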
There are some remnants of the old system which contained lots of magic. With time it became clear that such an approach is not ideal for tests: it restricts too much and helps too little. If you find any test which you don't understand right away, chances are you are looking at the "old-style" suites.
Before you start writing your test, read the previous section on "new-style" tests. If things still aren't clear, ask someone before you start implementing your tests to save both your time and ours. We love questions, so don't worry! This project has a lot of cruft in it and you can't blame yourself for "not getting it" :P
We used `ecukes` for testing way back when smartparens was simpler, but writing the support code turned out to cause more problems than the framework solved. As the tests became unmaintainable we dropped them completely. The suites are still in the repository, as we still want to migrate all the tests present there to new test suites. You are very welcome to help!
We do not accept new ecukes test features anymore.