Testing #182
As an example, browsers use the web-platform-tests project, maintained by the Interop team, as the test suite for the web-platform stack. web-platform-tests.org contains an introduction to the test suite. wpt.fyi is an archive of test results collected from several web browsers on a regular basis.
And here are some examples of tests from non-browser standards:
I've been checking how similar specifications have dealt with tests, and I like how the EPUB group does it. We cannot reuse the web-platform-tests directly because of the different nature of the user agents, so I propose defining something like the EPUB tests. It would be a dedicated GitHub repository (e.g.,
The scripts auto-generate the documentation and the reports in a human-readable format, including information about the test suite. Everything is maintained in the repository. Something important is the clear methodology for contributing (i.e., prerequisites, an issue-based workflow, templates to use, etc.). Of course, we don't require an automatic process (we could just use tables or a spreadsheet), but this could help maintain the tests in the mid-to-long term. Comments? If you like the approach, I can draft a first proposal along these lines.
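For illustration only, here is a minimal sketch of what such a generation script could look like. It assumes a hypothetical layout where each test lives in tests/<test-id>/test.jsonld with dc:identifier, dc:title, and dc:coverage fields; the actual repository may organize things differently.

```python
# Hypothetical report generator: scans tests/<id>/test.jsonld files and emits
# a plain HTML index. The directory layout and field names are assumptions.
import json
from pathlib import Path


def collect_tests(root: Path) -> list[dict]:
    """Load the metadata of every test.jsonld found one level under root."""
    tests = []
    for meta_file in sorted(root.glob("*/test.jsonld")):
        data = json.loads(meta_file.read_text(encoding="utf-8"))
        tests.append({
            "id": data.get("dc:identifier", meta_file.parent.name),
            "title": data.get("dc:title", ""),
            "section": data.get("dc:coverage", "Uncategorized"),
        })
    return tests


def render_table(tests: list[dict]) -> str:
    """Render one HTML table row per test case."""
    rows = "\n".join(
        f"<tr><td>{t['id']}</td><td>{t['title']}</td><td>{t['section']}</td></tr>"
        for t in tests
    )
    return f"<table>\n<tr><th>ID</th><th>Title</th><th>Section</th></tr>\n{rows}\n</table>"


if __name__ == "__main__":
    Path("index.html").write_text(render_table(collect_tests(Path("tests"))), encoding="utf-8")
```

A script like this would run on every push (for example from a CI job), so the published documentation always reflects the tests currently in the repository.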
Thank you for the proposal. Sounds like a good plan to me.
I've worked on a proof of concept to show and explain what this approach would look like. As mentioned in my previous comment, this methodology and system are based on the EPUB tests. The methodology and tooling are open to any contributor, so anyone can create tests for specific parts of the specifications. All the maintenance would happen on GitHub, and the documentation is updated using GitHub CI actions, which are already included in the example repository. The final result is something like this: https://espinr.github.io/miniapp-tests/

How does it work?
Every test case:
For instance, a simple test for the MiniApp Manifest's
The definition of the test (see test.jsonld) would be something like this:

{
"@context": { },
"dcterms:rights": "https://www.w3.org/Consortium/Legal/2015/copyright-software-and-document",
"dcterms:rightsHolder": "https://www.w3.org",
"@type": "earl:TestCase",
"dc:coverage": "Manifest",
"dc:creator": ["Martin Alvarez"],
"dc:date": "2022-05-25",
"dc:title": "Fullscreen enabled in manifest",
"dc:identifier": "mnf-window-fullscreen-true",
"dc:description": "The window's fullscreen member is set to true in the manifest. The app must be shown in fullscreen.",
"dcterms:isReferencedBy": [
"https://www.w3.org/TR/miniapp-manifest/#dfn-process-the-window-s-fullscreen-member"
],
"dcterms:modified": "2022-05-25T00:00:00Z"
}

This definition uses JSON-LD, but we can simplify it. After updating the repository, the GitHub CI action generates the documentation, resulting in something like this: https://espinr.github.io/miniapp-tests/#sec-manifest-data . As you can see, I've only included examples for three sections: packaging, content, and manifest. The documentation organizes the content accordingly. In the generated documentation, each test case is represented as a row, linked to the code itself (including the metadata that describes the use case), the specification feature to be tested, and the results of the tests.

How to perform tests?

Every test should be run on each MiniApp platform, one by one. For instance, testing the miniapp in the previous example and noting whether the result is the expected one. Results could be

The testing results for each platform are specified in a simple JSON file like this:

{
"name": "Mini Program #2",
"ref": "https://example.org/",
"variant" : "Cross Platform",
"tests": {
"cnt-css-scoped-support": true,
"mnf-window-fullscreen-default": true,
"mnf-window-fullscreen-true": true,
"mnf-window-orientation-default": true,
"mnf-window-orientation-landscape": true,
"mnf-window-orientation-portrait": true,
"pkg-pages-same-filenames": false,
"pkg-root-app-css-empty": true
}
}

This sample platform (called Mini Program #2) passes all the tests except one. The results, linked to the documentation, are represented visually in a table. The testing results for two different miniapp vendors (see all the sample reports) are in this document: https://espinr.github.io/miniapp-tests/results.html

I'll be happy to present this idea at the next meeting. If you have suggestions, I'll be glad to update this proposal. Please note that this testing methodology is complementary to a MiniApp validator, as proposed in the previous meeting.

EDIT: I've created an example that shows how to link the tests from the specifications (see the links to the test in this section of the packaging spec).
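As a rough illustration (not part of the proof of concept itself), the per-platform result files in the JSON shape shown above could be summarized with something like the sketch below; the reports/ directory name is an assumption.

```python
# Hypothetical summary script for per-platform result files ("name", "variant",
# and a "tests" map of booleans), as in the sample above.
import json
from pathlib import Path


def summarize(report_path: Path) -> str:
    """Return a one-line pass/fail summary for a single platform report."""
    report = json.loads(report_path.read_text(encoding="utf-8"))
    results = report.get("tests", {})
    passed = sum(1 for ok in results.values() if ok)
    failing = [test_id for test_id, ok in results.items() if not ok]
    line = f"{report['name']} ({report['variant']}): {passed}/{len(results)} passed"
    if failing:
        line += "; failing: " + ", ".join(failing)
    return line


if __name__ == "__main__":
    # The reports/ directory is an assumed location for the result files.
    for path in sorted(Path("reports").glob("*.json")):
        print(summarize(path))
```

Run against the sample report above, this would print something like "Mini Program #2 (Cross Platform): 7/8 passed; failing: pkg-pages-same-filenames", which is the same information the generated results table shows visually.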
This proposal was presented during the last CG and WG meetings. No objections were raised, so I suggest we move forward with it so we can start testing as soon as possible and detect the weakest points in the specs. I think the best way is to organize all the miniapp tests under the same repository. We can use something like
Sounds good to me. Do you want me to create the repo?
Yes, please.
Great, thank you! We can leave this issue open to collect and discuss ideas for the MiniApp validator discussed in previous meetings.
In my discussions with some vendors (e.g., Alibaba, Baidu, Huawei), they said they would prefer to set up a formal open-source project supervised by a professional open-source community. Such a project can coordinate more resources to participate and can facilitate the organization, supervision, and management of testing, especially for developers, who would have relevant test references to guide their practice. Therefore, at the last WG meeting we discussed some proposals to set up this project; the following issues may need further discussion (attaching some food for thought):
1- Mustard.
Others?
We need a cross-vendor test suite for the MiniApp specs. The tests can serve as proof that MiniApp user agents have implemented the W3C specs. As a result, MiniApp developers can also write standards-compliant MiniApps with greater confidence.
If possible, we can design a framework to run tests automatically. If that's not possible, we need to write documentation for running tests manually.
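As one possible starting point for the manual path (purely illustrative; the test IDs below are just examples taken from this thread, and the prompts and output filename are assumptions), a small helper could walk a tester through the suite and write a report in the result format discussed above:

```python
# Hypothetical helper for recording a manual test run as a result JSON file.
# The test IDs and output shape mirror the examples discussed in this issue.
import json
from pathlib import Path

TEST_IDS = [
    "mnf-window-fullscreen-true",
    "mnf-window-fullscreen-default",
    "pkg-pages-same-filenames",
]


def record_run(platform_name: str, variant: str, out_file: Path) -> None:
    """Ask for a pass/fail verdict per test and write the report to out_file."""
    results = {}
    for test_id in TEST_IDS:
        answer = input(f"Did '{test_id}' pass on {platform_name}? [y/n] ")
        results[test_id] = answer.strip().lower().startswith("y")
    report = {"name": platform_name, "variant": variant, "tests": results}
    out_file.write_text(json.dumps(report, indent=2), encoding="utf-8")


if __name__ == "__main__":
    record_run("Example Platform", "Cross Platform", Path("example-platform-results.json"))
```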