new workload router-perf-v3 #406

Open
mukrishn opened this issue May 23, 2022 · 2 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@mukrishn (Collaborator)
We run the router-perf-v2 workload for data plane performance testing, and it needs some enhancement to address a different behavior on managed-service OCP.
The workload creates pods and generates traffic from within the cluster (from a hostNetwork pod). On AWS OpenShift, HTTP traffic from the client is routed out to an external load balancer VIP and then back into the cluster. Other platforms follow a completely different approach: GCP/Azure route client traffic to the corresponding service clusterIP using an iptables DNAT rule, so the client traffic never leaves the cluster before reaching the server.

It is probably worth spending some effort on a new workload, router-perf-v3 (or an add-on to v2), that runs the client from an external source to replicate a real-world scenario. The consistency of the results would be affected by known external variables (load balancer, client resources, client location, cloud variability), but this approach behaves the same way on all platforms and makes it easier to compare results between them. A rough sketch of such an external client is shown below.
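For reference, here is a minimal sketch of what an external client driver could look like, assuming plain Python and placeholder route URLs. The existing workload drives traffic with mb from an in-cluster hostNetwork pod, so none of the names, URLs, or parameters below come from it; they are hypothetical.

```python
# Illustrative sketch only, not part of router-perf-v2/mb: drive HTTP requests
# at OpenShift route hostnames from a host outside the cluster and record
# per-request latency. Route URLs, concurrency and duration are placeholders.
import concurrent.futures
import time
import urllib.request

ROUTES = ["http://app-0.apps.example.com/", "http://app-1.apps.example.com/"]  # hypothetical routes
DURATION_S = 60    # hypothetical test duration
CONCURRENCY = 20   # hypothetical number of client workers

def worker(url):
    samples = []
    deadline = time.monotonic() + DURATION_S
    while time.monotonic() < deadline:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()
                status = resp.status
        except Exception:
            status = None  # keep errors as errors instead of folding them into latency
        samples.append((status, time.monotonic() - start))
    return samples

def main():
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        futures = [pool.submit(worker, ROUTES[i % len(ROUTES)]) for i in range(CONCURRENCY)]
        results = [s for f in futures for s in f.result()]
    ok = sorted(lat for status, lat in results if status == 200)
    errors = len(results) - len(ok)
    if ok:
        print(f"requests={len(results)} errors={errors} "
              f"p99={ok[int(0.99 * (len(ok) - 1))] * 1000:.2f}ms")
    else:
        print(f"requests={len(results)} errors={errors} (no successful samples)")

if __name__ == "__main__":
    main()
```

Running such a client from a fixed external location (e.g. a dedicated VM in the same region) would keep the data path identical across AWS, GCP, and Azure, at the cost of the external variables mentioned above.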

@mukrishn added the enhancement and help wanted labels on May 23, 2022
@mukrishn (Collaborator, Author)
Recently we have been noticing inconsistency in the results when running router-perf-v2 on ROSA/AWS. It is high time to redesign this workload or add a new HTTP benchmark tool. slack convo

@mukrishn (Collaborator, Author) commented Aug 9, 2022

The inconsistency in latency is due to an issue in the mb tool's reporting logic: it records a wrong value when the response is non-200 with socket_read(): connection error. Latency is calculated as the delta between the request and response timestamps, but in this case the delta is computed with a wrong timestamp (0), which skews the P99, P90, and average latency.
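A minimal sketch of the failure mode, assuming the wrong (0) timestamp ends up on one side of the delta; the sample values and the exclusion fix below are illustrative, not mb's actual code:

```python
# Illustrative sketch of the reporting issue described above, not mb's code.
# Latency is taken as (response_ts - request_ts); when a failed request leaves
# one timestamp at 0, the delta is garbage and skews the avg/P90/P99 summary.
import math

def percentile(sorted_vals, p):
    # nearest-rank percentile
    return sorted_vals[max(0, math.ceil(p * len(sorted_vals)) - 1)]

# (status, request_ts, response_ts) -- made-up samples; the last one mimics a
# non-200 response with "socket_read(): connection error" and a 0 timestamp.
samples = [
    (200, 1000.000, 1000.012),
    (200, 1000.001, 1000.015),
    (200, 1000.002, 1000.011),
    (503, 0.0, 1000.020),
]

# Buggy summary: every sample contributes (response_ts - request_ts).
buggy = sorted(resp - req for _, req, resp in samples)
# Fixed summary: failed or partially-timed samples are excluded.
fixed = sorted(resp - req for st, req, resp in samples
               if st == 200 and req > 0 and resp > 0)

print("buggy avg:", sum(buggy) / len(buggy), "p99:", percentile(buggy, 0.99))
print("fixed avg:", sum(fixed) / len(fixed), "p99:", percentile(fixed, 0.99))
```

With the single zero-timestamp sample included, the average and P99 jump by several orders of magnitude; excluding failed samples (or reporting them separately as an error rate) keeps the percentiles meaningful.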
