Release 4.6.0 - Alpha 1 - Workload benchmarks metrics #18874
Issue update

I found an error in one of the final steps of the pipeline that should be solved for Alpha 2. It did not allow the artifacts to be uploaded, but after modifying it for this issue the following results were obtained:

API performance results
Cluster

No errors were found in the cluster tests.

Performance

The Performance tests: Cluster tasks duration
Reliability

Since the necessary changes to run the tests with Python 3.7 have not been introduced yet (wazuh/wazuh-qa#4478), these tests were run locally:

Reliability tests results
I see some failed tests (reliability), but in the conclusions above you mention there are no errors.
Errors in
We'll need to review whether there is any problem in remoted or in the cluster related to group sync using sendsync, as you already mentioned. Everything else looks good to me.
LGTM!
This issue aims to run all workload benchmarks for the current release candidate, report the results, and open new issues for any encountered errors.

Workload benchmarks metrics information
Test configuration
All tests will be run and workload performance metrics will be obtained for the following clustered environment configurations:
Test report procedure
All individual test checks must be marked as:
All test results must have one of the following statuses:
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause. It must be included in the Fixes section of the current release candidate main issue.

Any expected fail or skipped test must have an issue justifying the reason. All auditors must validate the justification for an expected fail or skipped test.
An extended report of the test results must be attached as a zip or txt. This report can be used by the auditors to dig deeper into any possible failures and details.
Conclusions 🟡
All tests have been executed and the results can be found here and here.
The following already reported defects were found:
API Performance 🟡

GET /manager/logs
fixed in Fix error requesting plain manager logs in the API #17946 and will be introduced for 4.6.0 in Wazuh API is not capable of fetching Wazuh Manager logs #18939.

Cluster 🟡
The cluster tests were run manually since some changes need to be introduced before they can be run in the pipeline (wazuh/wazuh-qa#4298 and wazuh/wazuh-qa#4478).
Reliability
Two failures were found in these tests. Both are already reported and pending fixes:

test_cluster_connection: Unstable connection between master and workers according to cluster.log in Reliability test wazuh-qa#4385
test_cluster_error_logs: Error sending sendsync in Wazuh cluster #15802

Performance
For a detailed conclusion and report on the cluster performance metrics please refer to #18874 (comment).
Auditors validation
The definition of done for this issue is the validation of the conclusions and the test results by all auditors.
All checks below must be accepted in order to close this issue.