Find Peak Flow Mods per Second in various scenarios. #193

Open
Ktmi opened this issue Jul 30, 2024 · 4 comments

Comments

Ktmi commented Jul 30, 2024

This is less an issue and more a request for performance measurements. In #187, @viniarck requested that I check how pacing affected the flow mod rate in various scenarios. However, we never tested what the peak flow mod rate would be for flow manager, nor did we test how the maximum flow mod rate would be affected by latency between the switch and the controller.

Ktmi (Author) commented Aug 6, 2024

So for my experiment, I decided to try deploying 10,000 flows in a single request via the HTTP API. Here's the I/O graph of the flow mod packets captured, with a sampling interval of 1 second:

[I/O graph: flow_deployment_max]

This looked to be about 500 flow mods per second, but didn't really explain the peaks and troughs, so I decided to look at a finer sampling interval of 50 ms:

[I/O graph: flow_deployment_max_finer]

What we see here is that Kytos is able to sustain sending between 30 and 40 flow mods every 50 ms (about 600 to 800 flow mods per second), but then has to stop sending flow mods for about half a second.
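The rate conversion above is simple arithmetic; here is a quick sketch of it (the 700/sec burst rate and 1.25 s active window are hypothetical values chosen only to illustrate how half-second pauses pull an otherwise high burst rate down to the ~500/sec average seen on the coarser graph):

```python
# Converting per-bin counts from the 50 ms I/O graph into per-second rates.
bin_width_s = 0.05  # 50 ms sampling interval
rates = {mods: mods / bin_width_s for mods in (30, 40)}  # ~600 and ~800 per sec

# Hypothetical illustration: bursting at 700 flow mods / sec for 1.25 s and
# then pausing for 0.5 s averages out to about 500 flow mods / sec.
burst_rate, active_s, pause_s = 700.0, 1.25, 0.5
avg_rate = burst_rate * active_s / (active_s + pause_s)
print(rates, avg_rate)
```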

To see if the pauses were a Kytos issue, I checked all traffic going through the loopback interface, to see if there was a pause:

[I/O graph: flow_deployment_max_all]

Turns out there was. Maybe this is because I'm running on a VM: when the hypervisor does a context switch on the cores the VM is running on, no packets can be sent.
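For reproducibility, a request like the one in this experiment can be sketched roughly as below. This is a minimal sketch: the flow_manager endpoint path, host, dpid, and match fields are assumptions for illustration, not the exact payload used above.

```python
def build_flow_payload(n_flows, priority=1000):
    # Build a flow_manager-style request body with n_flows distinct entries,
    # varying dl_vlan so each flow lands as a separate flow table entry.
    return {
        "flows": [
            {
                "priority": priority,
                "match": {"in_port": 1, "dl_vlan": (i % 4094) + 1},
                "actions": [{"action_type": "output", "port": 2}],
            }
            for i in range(n_flows)
        ]
    }

payload = build_flow_payload(10_000)

# Sending everything in one HTTP request (hypothetical host and dpid;
# uncomment against a running controller):
# import requests
# requests.post(
#     "http://127.0.0.1:8181/api/kytos/flow_manager/v2/flows/00:00:00:00:00:00:00:01",
#     json=payload, timeout=60,
# )
```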

viniarck (Member) commented Aug 6, 2024

@Ktmi, that seems similar to my local results; good to have those values documented. I was sending 10k flows to 1 dpid in a single request. The flow mods on the wire, on a per-second scale, peaked at 1000 (a bit above what you found; I'm not running on a VM) and held steady at around 600 flow mods / sec. So, yes, around 500 flow mods / sec is a good guideline.

[screenshot: 20240806_141648]

Instrumenting the msg out handler also reflected that the number of ops was around 60000 / 60 = 1000, which matches the peaks at 1000:

[screenshot: 20240806_141507]

At a 50 ms scale, I didn't notice suspensions like you did, but it wasn't steady either:

[screenshot: 20240806_141947]

Let's stick with this 500 flow mods / sec per dpid recommendation. @Ktmi, can you double-check whether sending to more than one dpid still sustains a similar ratio?

Ktmi (Author) commented Aug 7, 2024

@viniarck We already have evidence against being able to maintain the same ratio when sending to multiple dpids. In this post, one of the test cases visualized was sending 1000 flow mods to 9 different switches, and the combined flow mod rate was about the same as what I got here, around 500 per second, with the per-switch rate somewhere around 50 per second. It seems that we have a global cap across all dpids on how fast we can send flow mods.
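Illustrative arithmetic only, to make the global-cap observation concrete (the 500/sec figure is the combined rate measured above):

```python
GLOBAL_CAP = 500  # combined flow mods / sec observed in these experiments

# With a global cap, the per-switch rate shrinks as dpids share the budget:
per_switch = {n_dpids: GLOBAL_CAP / n_dpids for n_dpids in (1, 9)}
print(per_switch)  # 9 dpids get roughly 55/sec each, close to the ~50 observed
```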

viniarck (Member) commented Aug 7, 2024

> @viniarck We already have evidence against being able to maintain the same ratio when sending to multiple dpids. In this post, one of the test cases visualized was sending 1000 flow mods to 9 different switches, and the combined flow mod rate was about the same as what I got here, around 500 per second, with the per switch somewhere around 50 per second. It seems that we have a global cap across all DPIDS for how fast we can send flow mods.

Right. That's a good data point. OK, let's keep this in mind and move on. Let's recommend 500 flow mods / sec, including globally, and when that's not enough we'll see what else can be done. But that's already very helpful for determining certain pacers' ceilings. Good.
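As a sketch of what a pacer honoring this ceiling could look like, here is a minimal token-bucket rate limiter. It assumes nothing about Kytos's actual pacing implementation; the class and parameter names are illustrative only.

```python
import time

class Pacer:
    """Minimal token-bucket pacer: allows up to `rate` sends per second,
    sleeping when the bucket is empty. A sketch, not Kytos's implementation."""

    def __init__(self, rate, capacity=None):
        self.rate = float(rate)
        self.capacity = float(capacity) if capacity is not None else float(rate)
        self.tokens = self.capacity
        self.last = time.monotonic()

    def acquire(self, n=1):
        # Block until n tokens are available, refilling based on elapsed time.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= n:
                self.tokens -= n
                return
            time.sleep((n - self.tokens) / self.rate)

# The 500 flow mods / sec ceiling recommended in this thread:
pacer = Pacer(rate=500)
for _ in range(3):
    pacer.acquire()  # would precede each flow mod send
```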
