
Need a testing suite #2

Open
javierguerragiraldez opened this issue Apr 13, 2018 · 4 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@javierguerragiraldez
Member

There's no responsible way to advance development without a testing suite.

A full test framework is great for general development, but here we focus on just acceptance tests: a test either passes or fails, with no need for detailed failure reports.

Some desirable goals:

  • Make it very easy to write tests. Every bugfix should include a failing test. Since Lua is a very readable language, it's sometimes easier to express a problem in code than in English; it should be trivial to turn that code into a test.

  • Minimal requirements. Lua is very portable, and LuaJIT also targets a wide variety of platforms. All of them should be able to run all relevant tests. Every external module, tool, or language will be an issue for somebody somewhere.

  • Make it easy to run one test. This is very important while developing a bugfix. Ideally, a test case could be run directly from the CLI, not only from a bigger environment.

  • Clearly mark tests that are intended for certain platforms or have specific requirements. Try not to confuse users with JIT tests on non-JIT builds.
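To illustrate the "run one test from the CLI" goal, here is a minimal sketch, not part of any existing runner: the `run_test` helper and the `LUAJIT` variable are hypothetical, and the convention assumed is that a test is a plain Lua file which passes iff the interpreter exits with status 0.

```shell
#!/bin/sh
# run_test FILE: minimal acceptance check.
# A test is a plain Lua file; it passes iff the interpreter exits 0.
# LUAJIT selects the interpreter binary; defaults to `luajit` on PATH.
run_test() {
    if "${LUAJIT:-luajit}" "$1" >/dev/null 2>&1; then
        echo "PASS $1"
    else
        echo "FAIL $1"
    fi
}
```

Under this convention, plain `assert(...)` is the whole test API, so turning a bug report's reproduction snippet into a test is just a matter of saving the file.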

Some candidates:

  • an existing test framework, like Telescope, Busted, etc.:
    pros:

    • very complete
    • nice reports

    cons:

    • usually not minimal; might have heavy requirements.
    • probably shouldn't be included in the repo, but marked as required for tests.
    • very different styles; if you like one, you probably won't like another.
  • the LuaJIT-test-cleanup
    pros:

    • already existing
    • while not very deep in testing, it covers a wide area of the language and the included libraries.
    • well known
    • light requirements (really? not sure)
    • nice 'tags' facility

    cons:

    • many warnings about not being "the test suite"
    • might still need cleanup? not sure about this
  • some other test driver (like https://github.com/tarantool/test-run), or one used by another Lua(JIT) fork.
    pros:

    • already exists and has been tried
    • very complete (in most cases)

    cons:

    • extra requirements (Python in the case of test-run)
    • not well known
  • a new, minimalistic tester (I've just hacked such a thing: https://github.com/javierguerragiraldez/luajit-tester):
    pros:

    • no extra dependencies
    • tailored for this specific use
    • very easy to write tests

    cons:

    • untested
    • limited functionality (to be improved with time?)
    • not well known (but minimal learning curve)
@javierguerragiraldez added the "enhancement" and "help wanted" labels on Apr 13, 2018
@Gerold103
Member

Thank you for the investigation. Although I am a Tarantool team member, I cannot recommend using tarantool/test-run as-is: it is a huge, hard-to-maintain test runner, tightly coupled to Tarantool in its code (no direct dependencies, but Tarantool is mentioned very often in the code). On the other hand, it has nice reports, a built-in ability to run a test under lldb/gdb, and many other features. So it might make sense to fork it and simplify it by removing Tarantool specifics, fibers, msgpack, and other non-pure-Lua things.

From another point of view, we must think not only about the test runner, but also about existing tests. LuaJIT has a lot of code, and it is almost impossible to write all the tests from scratch or to convert them from one suite to another. As far as I can see, LuaJIT-test-cleanup has many tests, and it would be nice to adopt them.

Telescope looks neglected: the last commit was pushed five years ago.

As for luajit-tester: my opinion is that we should write our own test runner only as a last resort.

@igelhaus

Hi all,

I may be late to the discussion, but here are my two cents. When we started our own fork of LuaJIT, we chose the following approach:

  • Adopt as many external test suites as possible.
  • Create a runner for each suite (if needed), and a single meta-runner that runs them all. These runners were implemented as lightweight shell scripts requiring no extra dependencies.
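As a sketch of that meta-runner idea (the `meta_run` name and the layout, where each suite ships its own runner script that exits 0 on success, are assumptions for illustration, not the actual scripts described here):

```shell
#!/bin/sh
# meta_run RUNNER...: run each per-suite runner script in turn,
# report per-suite results, and exit non-zero if any suite failed.
meta_run() {
    failed=0
    for runner in "$@"; do
        if sh "$runner" >/dev/null 2>&1; then
            echo "ok   $runner"
        else
            echo "FAIL $runner"
            failed=1
        fi
    done
    return "$failed"
}
```

Keeping the per-suite runners independent means a new suite only costs one more small script plus one line in the meta-runner's invocation.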

This approach did result in a "heterogeneous" test environment, but on the other hand it brought us a considerable amount of Lua code to test against, and it paid off: as far as I remember, adding each new suite helped reveal at least one stability bug.

Just in case, the first suites we added were: the suite for vanilla Lua 5.1, LuaJIT-test-cleanup, and the Lua 5.1 tests shipped with the lua-TestMore module.

So I'd propose to split the issue into two:

  1. From day 0, incorporate several external suites as-is, without reorganising their guts, but with wrappers as simple as possible.
  2. Design the framework for the internal test suite (it could be a custom solution, or we could adopt an existing runner -- I have no strong opinion here) and slowly start growing this suite.

@javierguerragiraldez
Member Author

Great tips!
Those meta-runner scripts are easy to do as shell scripts, and I guess even as .bat files, right? (I haven't done any scripting on Windows since they used year numbers for home editions....)

@igelhaus

> those meta-runner scripts are easy to do as scripts, and I guess even .bat files, right?

We are Linux-only, and yes, those runners are simple bash scripts (with a few little fancy additions like output colouring), totalling 291 lines of code for running 6 suites.
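For the colouring touch mentioned above, here is a sketch of what such an addition might look like (the `report` helper is hypothetical, not taken from the actual scripts): colour PASS/FAIL only when stdout is a terminal, so redirected logs stay clean.

```shell
#!/bin/sh
# report STATUS NAME: print one result line, coloured only when stdout
# is a terminal (green for PASS, red for anything else).
report() {
    if [ -t 1 ]; then
        case "$1" in
            PASS) printf '\033[32m%s\033[0m %s\n' "$1" "$2" ;;
            *)    printf '\033[31m%s\033[0m %s\n' "$1" "$2" ;;
        esac
    else
        printf '%s %s\n' "$1" "$2"
    fi
}
```

The `[ -t 1 ]` check is what keeps CI logs and piped output free of escape sequences.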

I've also abandoned Windows development long ago, but I hope it is possible to implement the same functionality in .bat, too.
