TDD: Document Unit Tests and automated UI Testing #167
Comments
Unit testing (not involving the UI framework) can fairly easily be done using GLib.Test and the meson testing facilities - see the unit tests in Files, for example. Testing the UX is much harder.
@jeremypw agreed. I experimented a bit with GLib.Test and got the minimal infrastructure up and running. Probably still worth documenting, to encourage people to write tests and/or to establish a "standard way" (best practices) of organizing the code? FWIW, here's what I came up with:
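A minimal sketch of such an entry point (`Utils.normalize_name ()` is a hypothetical function standing in for whatever unit is under test):

```vala
// tests/TestRunner.vala - a possible entry point for a test executable.
// Utils.normalize_name () is hypothetical; replace with real units under test.
int main (string[] args) {
    GLib.Test.init (ref args);

    GLib.Test.add_func ("/utils/normalize_name/strips_whitespace", () => {
        assert (Utils.normalize_name ("  foo  ") == "foo");
    });

    GLib.Test.add_func ("/utils/normalize_name/lowercases", () => {
        assert (Utils.normalize_name ("Foo") == "foo");
    });

    // Runs every case registered above and returns a non-zero
    // exit code on failure, which meson's test runner picks up.
    return GLib.Test.run ();
}
```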
Not sure you need the "IS_TEST" vala-arg - I did not use one, but then I was only testing library code. Do you need to compile the whole source for the tests when you are only testing Utils.vala?
It has been a long time since I wrote the Files unit tests, so there is probably a cleaner way of doing it.
There can only be one main method in the executable - that's why I had to "remove" the Application's main method for testing. I probably don't need to compile the whole source code, but I thought that was the easiest way to get started, needing minimal adjustments once the test scope increases. Regarding cleaner code: I'd love to use GLib's TestSuite and TestCase but was unable to figure out how exactly that would work. Having to declare both the function and its name is a bit cumbersome - even though it's quite simple.
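For reference, one way to wire up TestSuite and TestCase is the wrapper pattern from libgee's test harness, which several Vala projects copy; a sketch of it follows (the binding signatures should be double-checked against your glib-2.0 vapi):

```vala
// Sketch of the TestCase wrapper popularized by libgee: it hides the
// GLib.TestSuite / GLib.TestCase plumbing behind a simple add_test () call.
public abstract class TestCase : Object {
    private GLib.TestSuite suite;
    private Adaptor[] adaptors = new Adaptor[0];

    public delegate void TestMethod ();

    protected TestCase (string name) {
        this.suite = new GLib.TestSuite (name);
    }

    public void add_test (string name, owned TestMethod test) {
        var adaptor = new Adaptor (name, (owned) test, this);
        this.adaptors += adaptor;
        this.suite.add (new GLib.TestCase (adaptor.name,
                                           adaptor.set_up,
                                           adaptor.run,
                                           adaptor.tear_down));
    }

    public virtual void set_up () { }
    public virtual void tear_down () { }

    public GLib.TestSuite get_suite () {
        return this.suite;
    }

    // Bridges GLib's fixture-based callbacks to the simple TestMethod above.
    private class Adaptor {
        public string name { get; private set; }
        private TestMethod test;
        private TestCase test_case;

        public Adaptor (string name, owned TestMethod test, TestCase test_case) {
            this.name = name;
            this.test = (owned) test;
            this.test_case = test_case;
        }

        public void set_up (void* fixture) { this.test_case.set_up (); }
        public void run (void* fixture) { this.test (); }
        public void tear_down (void* fixture) { this.test_case.tear_down (); }
    }
}
```

A concrete suite then subclasses TestCase, calls `add_test ()` in its constructor, and `main ()` registers it via `GLib.TestSuite.get_root ().add_suite (my_case.get_suite ())` before calling `GLib.Test.run ()`.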
FWIW I found some more documentation about GLib.Test:
Was able to get it up and running. What's weird is the output of the test report - even though I declared 2 tests (and I verified both are executed), it always claims it ran 1/1 tests, which is obviously wrong. Any idea why?

```console
$ ninja -C build test
ninja: Entering directory `build'
[0/1] Running all tests.
1/1 Tests  OK  0.01 s

Ok:              1
Expected Fail:   0
Fail:            0
Unexpected Pass: 0
Skipped:         0
Timeout:         0
```
Because that count is based on the test () entries defined in your meson build scripts, not on the number of cases the executable registers with GLib.Test. Are there other existing Vala projects where running the tests produces the output you expect?
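In other words, meson reports one test per `test ()` call, no matter how many GLib cases run inside that executable. A minimal sketch (file and target names are illustrative):

```meson
# tests/meson.build - meson's "1/1" count refers to test() entries like the
# one below, not to the GLib cases registered inside the executable.
# Splitting cases across several test() entries changes the count.
utils_tests = executable(
    'utils-tests',
    'TestUtils.vala',
    '../src/Utils.vala',
    dependencies: [dependency('glib-2.0'), dependency('gobject-2.0')]
)

test('utils', utils_tests)
```

Running `meson test -v` in the build directory prints each executable's own output, which makes the individual GLib cases visible.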
@colinkiama just started out experimenting, so I'm fairly new to testing in Vala. Any hints, tips or best practices are appreciated.
Sorry, I haven't spent much time testing in Vala either. The prior art covers what I would have mentioned already.
That's why I queried whether you needed to compile Application.vala into the test executable when you are only testing Utils.vala - it seems to make things more complicated, not easier.
@jeremypw is there an easy way to remove the Application.vala entry from the sources array? The only solution I found is to explicitly list all files needed for testing - and, if we have just one test executable, this list eventually grows to contain "everything" except Application.vala, right? Or is it better to have one test executable for each unit we are testing and to declare its sources explicitly (which seems to add complexity but would probably improve the output)?
@marbetschar in my projects I usually split the source files like this: https://github.com/manexim/home/blob/master/src/meson.build
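The gist of that split is keeping the file that defines `main ()` out of the shared sources list, so a test executable can link everything else. A rough sketch (file names and `app_deps` are placeholders):

```meson
# src/meson.build - shared sources exclude Application.vala, which
# defines main(), so a test executable can reuse this list directly.
sources = files(
    'Utils.vala',
    'MainWindow.vala'
)

# Only the real application compiles Application.vala on top.
executable(
    meson.project_name(),
    sources + files('Application.vala'),
    dependencies: app_deps,
    install: true
)
```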
That is what I did - I explicitly listed the files to be compiled. For a new project, partitioning the code into testable and non-testable files from the start makes sense.
I'm back with some more testing experience 😅. Here's an example that I think would be great to look at: https://github.com/lcallarec/live-chart/blob/master/tests/meson.build

My opinion is that tests should be defined in a separate `tests/meson.build`. Another proposal I have is each project "output" (library, executable etc.) having their own tests. So the library's tests would be unit tests and the app's tests would be mainly for integration testing and UI testing.

Lastly, I think that we should mention that the
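A sketch of what such a separate tests build file might look like, with one test target per project output (all names - `glib_dep`, `gtk_dep`, `my_library`, the `.vala` files - are placeholders, loosely modeled on the live-chart layout):

```meson
# tests/meson.build - pulled in from the root build file via subdir('tests').
# One test executable per project "output": unit tests against the library,
# integration/UI tests against the app.
lib_tests = executable(
    'lib-tests',
    'LibTests.vala',
    dependencies: [glib_dep, gobject_dep],
    link_with: my_library
)

app_tests = executable(
    'app-tests',
    'AppTests.vala',
    dependencies: [glib_dep, gobject_dep, gtk_dep]
)

test('lib', lib_tests)
test('app', app_tests)
```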
Problem
Currently we are not making much use of automated testing - I think this is mainly due to a lack of knowledge. There is probably quite some potential for avoiding regressions, especially if we use a combination of automated UI and unit testing.
Proposal
Document the preferred way of doing UI testing as well as unit testing.
Prior Art (Optional)