# Spec Tests

*andychu edited this page Dec 10, 2017 · 43 revisions*
Back to Contributing.
Spec Tests are written with the sh_spec.py framework. There are some comments at the top of that file.
```sh
$ ./spec.sh install-shells
$ ./spec.sh smoke   # a single file -- look at the list of functions
$ ./spec.sh all     # all in parallel
```
The idea behind the spec tests is to figure out how OSH should behave (the spec) by taking an automated survey of the behavior of other shells. I follow a test-driven process like this:
1. Write spec tests for a new feature.
2. Make the spec tests pass on every shell except OSH. If shells differ in behavior, this may require annotations on the expected results.
   - A given shell may not implement a feature. For example, `bash` and `zsh` both implement the `dirs` builtin, but `mksh` and `dash` don't.
   - Shells may implement the same feature differently. For example, `pushd` in `bash` prints the stack to stdout, but `pushd` in `zsh` doesn't.
3. Write code in OSH to make the tests pass.
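To make the annotation idea concrete, here is a rough sketch of what a spec test case with a per-shell annotation might look like. The exact marker syntax is defined in `sh_spec.py` (see the comments at the top of that file); the markers below are an approximation for illustration, not copied from the real test suite:

```sh
#### dirs prints the directory stack
cd /tmp
dirs
## stdout: /tmp
## N-I mksh/dash status: 127
```

The idea is that the default expectation (`stdout`) describes the common behavior, while annotations like the `N-I` ("not implemented") line record how specific shells deviate, so those shells can still show green/yellow in the results table.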
After step 2, all columns should be green or yellow, except OSH. After step 3, the OSH column should be green or yellow as well.
- Spec tests don't run in an isolated environment, but they should (issue 42). Right now I run them on Ubuntu 16.04.
- It's OK to check in tests that don't pass on OSH yet. This helps because it specifies the behavior we want to implement. However, spec tests should not be submitted until they are green/yellow on OTHER shells. (They can be disabled if the feature isn't implemented at all in a shell.)
- To prevent failing test runs, adjust `--allowed-failures` in `test/spec.sh`. For example, `--allowed-failures 3` will make the `sh_spec.py` framework exit `0` if there are exactly 3 errors. A failure with exit code `1` blocks the release of OSH.
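The `--allowed-failures` policy can be summarized with a small sketch. This is hypothetical, for illustration only; the real check lives in `sh_spec.py` and `test/spec.sh`, and the function name here is made up:

```python
def spec_exit_code(num_failures, allowed_failures):
    """Hypothetical sketch of the --allowed-failures policy described above.

    The framework exits 0 only when the number of failures matches the
    allowed count exactly; any other count exits 1, which blocks the
    release of OSH.
    """
    return 0 if num_failures == allowed_failures else 1


# With --allowed-failures 3, exactly 3 errors means success:
assert spec_exit_code(3, 3) == 0
# More (or fewer) failures than expected exit 1:
assert spec_exit_code(4, 3) == 1
assert spec_exit_code(0, 3) == 1
```

Requiring an exact match (rather than "at most N failures") means that fixing a test also forces you to lower the allowed count, so known failures can't silently accumulate.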