Limitation on how many tests can be excluded or included #78

Open
yanj-github opened this issue Apr 14, 2023 · 10 comments

@yanj-github

Ref: #77
As far as I understand, there shouldn't be any limitation on how many tests can be excluded.
However, when I enter a longer list of excluded tests (more than 100), the WMAS server hangs and won't process any further.
Is there a way for a user to configure a test session with a longer excluded or included test list, please?
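
For context, this is roughly how I would like to set a session up programmatically instead of pasting the list into the web UI. The endpoint path and payload shape below are my assumptions, not the documented WMAS API:

```python
# Hypothetical sketch: create a session with a long exclude list instead of
# pasting it into the web UI. The endpoint path and payload shape are
# assumptions, not the documented WMAS API.
import json
import urllib.request

WMAS_URL = "http://localhost:8000"  # assumed local deployment address

with open("excluded_tests.json") as f:
    excluded = json.load(f)  # a flat list of >100 test paths

payload = json.dumps({
    "tests": {"include": [], "exclude": excluded},
}).encode("utf-8")

req = urllib.request.Request(
    f"{WMAS_URL}/_wave/api/sessions/public",  # assumed endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # session details, if the request succeeds
```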

@FritzHeiden
Collaborator

Did you try a local deployment using https://github.com/cta-wave/WMAS-deploy? The hosted instance is intended for demo usage, not for production.

@yanj-github
Author

Thanks @FritzHeiden, I have already tried a local deployment of wmas2020 and it doesn't work for a long list of tests either.
I was trying to run a subset of tests from https://github.com/cta-wave/WMAS-subset/blob/main/subset_of_tests.json, which contains a long list of tests to be included.
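
For reference, a quick way to see the size of that list (this assumes the file is a flat JSON array of test paths, which may not match its actual structure):

```python
# Quick check of the subset size. Assumes subset_of_tests.json is a flat
# JSON array of test paths; the real file's structure may differ.
import json

with open("subset_of_tests.json") as f:
    subset = json.load(f)

print(f"{len(subset)} tests to include")  # far more than the ~100 that hang the server
print(subset[:3])                         # peek at the first few entries
```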

@pshorrock

@FritzHeiden @louaybassbouss could we please get an update on this issue? @yanj-github is seeing a problem when trying to run any subset of more than 100 tests, regardless of where the WMAS TR is installed. Can you please confirm whether you also see this issue, and if so, whether there is a planned approach to resolve it? We are assuming an HbbTV subset will be run using the JSON file output from the HbbTV activities as an input; if not, can you please confirm what approach needs to be taken? How do you currently successfully run a large subset of tests (greater than 100 tests either included or excluded)?

@FritzHeiden
Collaborator

@yanj-github @pshorrock I pushed a fix for this issue to the wmas2021 branch.

> How do you currently successfully run a large subset of tests (greater than 100 tests either added or excluded)?

We are running large subsets by setting them up in the file system using shell scripts. This, however, is only applicable because the desired subset is fixed for a single wmas release. The exclude list was intended for removing problematic tests, which never exceeded 100, so we never came across this performance issue. Thanks for reporting.
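
To give an idea of what that looks like, here is a rough sketch of such a script (illustration only, not our actual tooling; the paths and the subset file format are assumptions):

```python
# Sketch of a file-system-based subset setup (illustration only, not the
# actual scripts mentioned above): copy just the tests named in the subset
# file into a separate tree and point the runner at it. Paths and the
# subset file format are assumptions.
import json
import shutil
from pathlib import Path

SRC = Path("tests-full")     # assumed: full test checkout
DST = Path("tests-subset")   # assumed: pruned tree served by the runner

with open("subset_of_tests.json") as f:
    subset = json.load(f)    # assumed: flat list of test paths

for test in subset:
    src_file = SRC / test.lstrip("/")
    dst_file = DST / test.lstrip("/")
    if src_file.exists():
        dst_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, dst_file)  # helper/resource files not handled here
```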

@yanj-github
Author

Tested and working on a local setup.
Can you help fix it on wmas2021, 2019 and 2018, please?

@yanj-github
Author

@FritzHeiden Is this fixed on all other versions and applied to the cloud-hosted version, please?

@FritzHeiden
Collaborator

I applied the fix to the wmas2021, wmas2020 and wmas2019 code bases and redeployed the new versions. For wmas2018, some more testing is required before the fix can be applied.

@bobcampbell-resillion

Hi, what is the ETA on this ticket being resolved?

HbbTV 2.0.3 references WMAS 2018, and therefore receivers that would be expected to pass a subset defined by HbbTV need to exclude the tests in that list. It won't be a daily variable list, but it will change outside the cycle of WMAS updates (HbbTV itself hasn't yet finally approved/agreed the official subset), and I anticipate it will be updated from time to time.

Same problem for ATSC: they'll use the same subset, but it's conceivable they'll identify APIs/tests that need to be included/excluded independently of HbbTV.

And then I might, as a manufacturer, want to run a longer list, but not all tests in areas I know I don't support.

So for lots of reasons, users of the WMAS test tools need to be able to define an exclude list that isn't tied to the version of WMAS and isn't too complex to apply to their running instance. I guess it's fine if that level of complexity isn't supported on the non-production hosted versions, but it needs to be supported somehow in deployed versions.

Thanks

@FritzHeiden
Collaborator

I created a PR to update the test runner of wmas2018: #85. As wmas2018 is rather old and the PR is intended to bring it up to date with the latest runner, there is quite a diff between the versions. I will have to perform a few test runs to see if everything works as expected.

@FritzHeiden
Collaborator

This is how we will proceed with this issue: The newest version of the test runner will be used to run the 2018 subset of tests. This makes it easy to apply fixes from the other versions.
