
Manual Testing on LTR Releases and Release Schedule #239

Open
SrNetoChan opened this issue Dec 21, 2021 · 5 comments
Comments

SrNetoChan (Member) commented Dec 21, 2021

QGIS Enhancement: Manual Testing on LTR Releases and Release Schedule

Date 2021/12/21

Author Alexandre Neto (@SrNetoChan)

Contact [email protected]

Maintainer @SrNetoChan

Version QGIS 3.22 +

Summary

The PSC decided that, as a project, we should go forward with the work started in #180.
This is a summary of what was settled in a meeting with the following participants:

Release plan and messaging

  • LTR releases will no longer have monthly patch releases; instead, they will be released on a 4-month cycle, coincident with the release of the stable version.
  • There will be no changes to the freeze schedules before the release date.
  • Before each release, the testing team (@SrNetoChan and @gioman) will prepare the test cycle, creating the necessary/possible test plans for each platform.
  • At the scheduled release date, the package manager will prepare and make available new packages and standalone installers based on the OSGeo4W qgis-ltr-dev packages.
  • These packages and installers will be considered Release Candidates and should not be advertised to the general public as final artifacts for production.

Testing period

  • Once the first artifacts are ready, a call-for-testing message should be sent out, aimed at occasional testers, who should test their own workflows, and at volunteer testers who want to participate in the execution of the manual and semi-automated test cases organized by the testing team in the Kiwi TCMS platform (https://qgis.tenant.kiwitcms.org/plan/search/).
  • For the next few releases, this message will be prepared by the testing team and broadcast by the PSC. In later releases, it should be sent directly by the testing team.
  • During the week after the release candidate artifacts are published, the testing team will execute the test plans for each version and available platform. For now, the main objective is to execute all test plans for QGIS LTR on Windows 10. Other platforms and the stable version will be tested depending on the available time and testers.
  • During testing, any issues found will be reported directly on GitHub.
  • If possible, there should be some effort to fix issues found during the testing period.
  • Some issues may be considered blocking. If a blocking issue exists, the final release should be postponed until the bug is fixed. If the testing/bug-triaging team finds a potential blocking issue, it should be raised with the PSC or a panel of developers, who will decide whether or not the issue is indeed blocking.
  • At the end of the testing week, if no blocking issues persist, new artifacts will be cut and released publicly as the final packages/installers.
  • If no issues are found, the release candidate artifacts can be renamed and used as the final artifacts.
  • Non-blocking issues that were not possible to fix in time can be added to a list of known bugs (not sure if we already have that somewhere).

Test plans

  • Until the next LTR release (February 18th), the testing team will expand the number of test cases as much as possible and prepare the test cycles for the next release
  • More tests can be added in the following releases
  • Specific tests can be added to a testing cycle if we are aware of any relevant changes or dependency bumps

Tester plugin

If possible, we will make use of the tester plugin in the following cases:

  1. Automated tests - run automated tests that could not be implemented in CI
  2. Semi-automated tests - prepare the stage for a particular manual test or specific visual verification.
    In this case, the steps used to prepare the stage should already have been covered by a manual test case! (A minimal sketch of such a stage-preparation script follows this list.)
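
To make the semi-automated case concrete, here is a minimal sketch of a stage-preparation script, assuming it is run from the QGIS Python console (so iface is available); the dataset path and layer name are hypothetical, and the actual tester plugin may register such steps differently.

```python
# Minimal sketch of a "stage preparation" step for a semi-automated test.
# Assumes it runs inside the QGIS desktop Python console; the sample dataset
# path below is a hypothetical placeholder.
from qgis.core import QgsProject, QgsVectorLayer
from qgis.utils import iface

SAMPLE_PATH = "/path/to/sample_data/rivers.gpkg|layername=rivers"  # hypothetical

def prepare_stage():
    """Load the sample layer and zoom to it so the tester can start the manual steps."""
    layer = QgsVectorLayer(SAMPLE_PATH, "rivers", "ogr")
    assert layer.isValid(), "Stage preparation failed: could not load the sample layer"
    QgsProject.instance().addMapLayer(layer)
    iface.setActiveLayer(layer)
    iface.zoomToActiveLayer()
    # From here on, the tester follows the manual steps of the test case,
    # e.g. opening the layer styling panel and visually checking the rendering.

prepare_stage()
```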

Further Considerations/Improvements

Not mentioned in the meeting (subject to approval)

  • During the testing period, if packaging issues are found (and fixed), new packages should be cut to allow re-testing. (@jef-n, is this OK for you? We didn't mention this at the meeting, and I am not sure what this implies in terms of work for you.)

Votes

elpaso commented Dec 21, 2021

@SrNetoChan thanks!

Non-blocking issues that were not possible to fix in time can be added to a list of known bugs (not sure if we already have that somewhere)

Just a quick note about that: maybe we could use the GitHub tracker and tag the issues that affect a particular LTR release so that a list can be easily and automatically generated.
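
For illustration, here is a minimal sketch of how such a list could be generated from the tracker, assuming a hypothetical label name such as known-issue-ltr-3.22 (the actual labelling scheme would still need to be agreed on):

```python
# Minimal sketch: list open issues carrying a given label via the GitHub REST API.
# The label name is a hypothetical placeholder.
import requests

def known_issues(label="known-issue-ltr-3.22", repo="qgis/QGIS"):
    """Return open issues with the given label, excluding pull requests."""
    url = f"https://api.github.com/repos/{repo}/issues"
    response = requests.get(url, params={"labels": label, "state": "open", "per_page": 100})
    response.raise_for_status()
    # The issues endpoint also returns pull requests; filter them out.
    return [i for i in response.json() if "pull_request" not in i]

if __name__ == "__main__":
    for issue in known_issues():
        print(f"#{issue['number']} {issue['title']} ({issue['html_url']})")
```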

SrNetoChan (Member, Author) commented:

@SrNetoChan thanks!

Non-blocking issues that were not possible to fix in time can be added to a list of known bugs (not sure if we already have that somewhere)

Just a quick note about that: maybe we could use the GitHub tracker and tag the issues that affect a particular LTR release so that a list can be easily and automatically generated.

Sounds like a good idea to me. Maybe milestones are better? Not sure how they are being used now...

nyalldawson (Contributor) commented:

In general this sounds good to me!

My one objection is:

If possible, we will make use of the tester plugin to make testers' life a bit easier and speed up the testing.
(The tester plugin can help prepare the stage for some manual test or specific verification)

I've some fundamental issues with how this plugin workflow has been designed, which I've described in qcooperative/qgis-core-tests#3. Specifically I think:

  • it's a mistake to try and automate these manual user tests. If a test CAN be automated, it should be done as a standard QGIS unit test so that we catch regressions before they ever land in QGIS.
  • if only part of a test can be automated (e.g. loading a dataset prior to getting a user to click a button in the GUI), then I still don't think this part should be automated. It cuts down the usefulness of the test a lot by doing this. Quoting from the ticket: "To me this needlessly restricts the actual scope of test -- instead of the batch test also covering things like testing that users can search for an algorithm in the toolbox and testing that selecting the batch option correctly opens the batch dialog without issue, we're instead skipping a bunch of these useful tests! So by trying to be helpful, we're ending up with a less valuable test all round..."
  • Trying to automate tests also puts up a barrier to entry for end users who want to make new user tests. Instead of just writing a test plan using plain old text like "load the layer using data source manager, add vector layer, ...", they now need to have experience with PyQGIS. So instead of writing test plans being something which feasibly ANY motivated QGIS user can do, we end up placing the additional burden of writing tests on the same old subset of the community with development/git/... experience.
  • Lastly, writing automated tests just ends up with more code which needs maintenance.

For this reason I'd suggest we drop the tester plugin and instead focus on developing a comprehensive set of manual tests.

SrNetoChan (Member, Author) commented:

@nyalldawson I understand your concerns (we chatted about it once), but I believe we can reach an agreement :) .

Manual testing is very exhausting work, especially in software like QGIS, with so many use cases and multiple approaches to the same task. We will never be able to cover all the functionality with fully manual testing, as we will never have enough manpower, so we tried to find strategies to overcome that.

Nevertheless, for some of the reasons you mentioned (Python tests are "hard" to create and maintain), most of our tests will be completely manual, step-by-step descriptions to test one or more scenarios. So, let's consider the use of the tester plugin as complementary to those manual tests. For each test case (scenario), we should evaluate its usefulness and effort. The preferred way to create tests should always follow this order:

CI Tests > Manual tests > Semi-automated tests

Maybe some of the examples we have created were not clear. My apologies for that. Here are some situations where using the tester plugin can be useful, IMHO:

  1. Automated integration tests. Sometimes tests could be fully automated and integrated in CI, but they require third-party endpoints to be set up, which is hard to do on CI. We can use the manual descriptions to instruct the testers to prepare or configure the endpoints and then use the tester plugin to run the automated tests. Of course, if we discover that we can do it on CI, that's even better!

  2. Semi-automated tests. When the steps to prepare the stage for the actual test case or visual verification are too long or too complicated to be repeated over and over in each test, especially when those steps have already been tested manually in another test case. (We can make this last part a rule.)

    I understand that these stage-preparing steps could also be useful tests, but if we force the testers to repeat the same tasks over and over, we will be repeating ourselves and, in the end, limiting the number of test cases we can cover.

  3. It's also useful for organizations that want to run their own automated tests, specific to their workflows, like running a set of models in a new QGIS version and making sure they still work and return the expected results (see the sketch right after this list).
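
As an illustration of point 3, here is a minimal sketch of an organization-specific workflow check, assuming it runs inside a full QGIS session (Python console or a tester-plugin-style runner); the input path, the chosen algorithm and the expected feature count are hypothetical stand-ins for an organization's own models and expected results.

```python
# Minimal sketch of an organization-specific automated check run inside QGIS.
# The input dataset and the expected feature count are hypothetical placeholders.
import processing
from qgis.core import QgsVectorLayer

INPUT_PATH = "/path/to/org_data/parcels.gpkg|layername=parcels"  # hypothetical
EXPECTED_FEATURE_COUNT = 1024  # hypothetical expected result

def check_buffer_workflow():
    """Run a representative processing step and verify it still returns the expected result."""
    layer = QgsVectorLayer(INPUT_PATH, "parcels", "ogr")
    assert layer.isValid(), "Could not load the organization's input data"
    result = processing.run(
        "native:buffer",
        {
            "INPUT": layer,
            "DISTANCE": 10.0,
            "SEGMENTS": 5,
            "END_CAP_STYLE": 0,
            "JOIN_STYLE": 0,
            "MITER_LIMIT": 2,
            "DISSOLVE": False,
            "OUTPUT": "memory:",
        },
    )
    buffered = result["OUTPUT"]
    assert buffered.featureCount() == EXPECTED_FEATURE_COUNT, (
        f"Unexpected feature count: {buffered.featureCount()}"
    )
    print("Buffer workflow check passed")

check_buffer_workflow()
```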

Moreover, test cases and test plans should be dynamic. Instead of having a final set of test cases that we use forever, we should keep evaluating the usefulness of each test case and whether it fits better as a CI test, a manual test, or a semi-automated test.

We should also evaluate adding or removing test cases for a specific release. Let's say that in a particular release we know we made complex changes to a part of our code, or added a completely new feature, and we want extra testing: we should then create more test cases for that particular feature or workflow. Or, if we know we have bumped a dependency, let's try to add more tests that rely on it. Creating new test cases will be open to everyone, but including them in the test plans for a particular release must be a balanced decision, and we, the "testing team", will definitely need the help of the developers and the package manager to decide.

elpaso commented Dec 22, 2021

@nyalldawson I agree in principle, but manual tests often involve complex workflows with many preparation steps, and this is where the tester plugin can come in handy: it can automate the preparation steps, reducing the manual steps and, ultimately, the total testing execution time.

Another difference between CI tests and what the tester plugin can do is that CI tests are mostly unit tests against a mocked QGIS app, while the tests we are talking about here run in the real QGIS desktop application.
