Test Syba Dual 2.5 Gigabit Ethernet PCIe NIC SD-PEX24066 #46
Comments
Wow, awesome, thanks for considering this project. If possible, try running OpenWRT and see if you can push 2.5 Gbps of routed traffic from one port to another (LAN-WAN), or rather, see how much bandwidth it can handle. If it can do 2.5 Gbps (not sure where the bottleneck will be), then the Pi can act as a 2.5 Gbps router, something many people would love to use once the OpenWRT build for the Pi 4 is stable. For now you'll have to try snapshots (though in my experience they are already very stable), which AFAIK already include a kmod for the Realtek controller.
Just a heads up: TP-Link recently released the world's first 8-port 2.5 GbE switch. I'm sure other manufacturers will follow suit now, about damn time. So you wouldn't need to re-wire everything; your current wiring will 100% support 2.5 GbE around the house. You don't need to convert everything to 10 GbE, since SFP+ to RJ45 10 GbE transceivers are expensive, just the main stuff like your main computer/NAS, etc.
@vegedb - Yeah; and I've noticed a few motherboard manufacturers have slowly been introducing built-in 2.5G ports. I'm hopeful that in a matter of 3-5 years we'll see most 'low-end' gear go 2.5 Gbps so people can start getting better-than-1 Gbps performance on existing networks. It seems like the chipsets are not that expensive, and most of the time they consume the same 1x PCIe lane, so it's not a huge burden to switch.
@geerlingguy The Pi 4 has 4 Gbps shared across its USB 3.0 ports. Have you tested USB 3.0 2.5 GbE adapters? That would be more realistic for those who don't have the Compute Module.
@vegedb - Something like this CableCreation adapter might work with a Pi 4 model B, but I haven't tried one. In the case of this project, I'm testing different PCIe devices for two reasons:
@geerlingguy From your solution, I think you may have unknowingly helped solve the problem for the USB version too. Can't be sure until you or someone else tests it on the USB versions.
@vegedb - Interesting! I just posted a follow-up comment in that forum topic, too. |
@geerlingguy Nice! Hope someone follows up. Btw, regarding your iperf stopgap solution installed in the Merlin firmware, you could circumvent this by plugging your MikroTik's 10 Gbps port into the AX86U's 2.5 GbE RJ45 jack. You can't miss it; it only has one 2.5 GbE port. Doing this will let you do some real-world tests between clients, like transferring large files.
Inserted card, booted Pi OS, and checked:
And in dmesg:
Recompiling on 5.10.y with
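For anyone following along at home, a minimal sanity check for this card on Pi OS might look like the following (a sketch; interface names are an assumption, and the two ports may come up as eth1/eth2 or similar):

```shell
# See the ASMedia PCIe switch and both Realtek controllers on the bus
lspci | grep -i -e realtek -e asmedia

# Check that the r8169 driver (which also covers the RTL8125) claimed the ports
dmesg | grep -i -e r8169 -e rtl8125

# List all interfaces; the two new ports should appear (names may vary)
ip -br link
```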
Nice!
Next up, benchmarking. After that, figuring out OpenWRT. |
Building OpenWRT is very friendly. It's a good idea to include LuCI, luci-app-sqm, and the correct Realtek kmod to make configuration simple via the GUI. Use an Ubuntu Server VM; it's generally recommended to build OpenWRT with only a single core, since I sometimes get errors when using multiple cores.
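For reference, the standard OpenWrt buildroot flow described above looks roughly like this (a sketch, not a tested recipe; the exact kmod package name for the Realtek driver depends on the target):

```shell
# Grab the OpenWrt source and pull in the package feeds
git clone https://git.openwrt.org/openwrt/openwrt.git
cd openwrt
./scripts/feeds update -a
./scripts/feeds install -a

# Select the Pi 4 (bcm27xx) target, LuCI, luci-app-sqm, and the Realtek kmod
make menuconfig

# Build single-threaded, as suggested above, to dodge sporadic parallel-build errors
make -j1 V=s
```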
They all have IPs!
I edited
Then on my Mac, I added a second interface with the same hardware so I could add a separate IP address on it (see MacMiniWorld's article). But... that doesn't seem to be working, so I might have to pull out one of the Mellanox cards and set up my PC desktop as the 2nd network endpoint.
So benchmarking is... fun. I can't find a good way to get two separate interfaces on the Pi routing traffic to one network card on my Mac, so I now have a janky setup: a DAC running between my Windows PC (which was in the middle of being reworked a bit for some testing anyway) with a Mellanox ConnectX-2 card in it, plus a port going to my Mac's TB3 adapter using 10GBASE-T, and then the other two MikroTik ports going to the 2x 2.5 GbE connections on the Pi. So... with that sorted, I can confirm I get 5.56 Gbps between my Mac and the Windows 10 PC (totally unoptimized; I'm using Windows' built-in driver because I had trouble installing the older Mellanox driver on Windows 10 Home, and there could be a variety of reasons for that). So the next step is to make it so one of the interfaces is on 192.168.x.x and the other on 10.0.100.x, with each one able to reach either my Mac or the Windows 10 PC, and then run iperf3 in server mode on each of those two. Sheesh.
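A minimal sketch of that two-subnet setup on the Pi side (interface names and addresses here are assumptions, not the exact ones used):

```shell
# Give each 2.5 GbE port its own subnet so each test flow has an
# unambiguous route to exactly one endpoint
sudo ip addr add 192.168.10.2/24 dev eth1   # link toward the Mac
sudo ip addr add 10.0.100.2/24 dev eth2     # link toward the Windows PC

# Confirm a connected route exists for each interface
ip route show | grep -e eth1 -e eth2
```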
On my Mac, in Terminal:
On the Raspberry Pi, in two separate SSH sessions:
On Windows, in Powershell:
Results:
Total: 2.18 Gbps (without jumbo frames) across two interfaces. Not that impressive yet.
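For the curious, the dual-stream test described above can be reproduced with something like this (addresses are hypothetical):

```shell
# On each endpoint (the Mac and the Windows PC), start an iperf3 server:
iperf3 -s

# On the Pi, in two separate SSH sessions, drive both interfaces at once:
iperf3 -c 192.168.10.1 -t 30    # session 1: flow out eth1 to the Mac
iperf3 -c 10.0.100.1 -t 30      # session 2: flow out eth2 to the PC
```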
Jumbo frames enabled between Mac and Pi (but on Windows I got wildly inconsistent results with Jumbo Frame in the advanced settings set to 9014 or any higher, so I kept it at 1514):
Total: 3.04 Gbps. For some reason, I can't get the PC to give very consistent performance, and I don't know if it's the Mellanox ConnectX-2 card, the driver, the DAC cable (my fiber cable and transceivers would not light up the port), or what, so I'm guessing I could eke out another 100-200 Mbps, since the IRQs are not a bottleneck.
This setup is horrifically annoying as I now have a spiderweb of cables through my office (I've only tripped on them once), so I don't think I'll keep it set up like this for benchmarking purposes. Suffice it to say, you're not getting much more than 3.1-3.2 Gbps through both 2.5 Gbps interfaces at once, so I'll put a pin in that benchmarking task for now. Next up is to see what I can do with OpenWRT... |
I couldn't leave well enough alone. I downloaded the Windows WinOF driver (which states it works with ConnectX-3, but in fact also works with ConnectX-2) for Windows 10 64-bit, and finally got it installed. With that driver in place, I still couldn't get stable jumbo frame support, but I did see more stable speeds overall, especially with a private static IP, which for some reason would sometimes cause Windows' own driver to barf.

And the results? Total: 3.195 Gbps (with jumbo frames on only one of the two interfaces).

I was only ever able to get 3.220 Gbps total across 4x 1 Gbps ports on the Intel I340-T4, so I'm going to say ~3.20 Gbps is right around the upper limit of total network bandwidth you can get on the Pi for most network cards with more than one network interface. Indeed, even the straight 10 Gbps ASUS card can only punch through to 3.26 Gbps (see preliminary results in #15).

So yeah. The Pi's not going to go beyond about 4.1-4.2 Gbps of total network throughput (onboard NIC included), and after testing something like 8 different networking scenarios, I can say that with 99% confidence (outside somebody nutso building an I2C network interface and pumping through a couple more megabits!).
Got my hands on this NIC today. Unfortunately, I can't recommend it, because it uses an old version of the RTL8125 controller, which lacks RSS, hardware RX hashing, and multiple TX queue support. That means the driver can't leverage the multiple A72 cores the Pi has: all the interrupts from both controllers are tied to core #0, and Linux can't distribute RX flow processing between cores efficiently. I'm going to try to get the IOCREST 2.5 Gbps NIC, which is based on the RTL8125B and should support all of those features. Still, I'm not sure the Pi's interrupt controller will allow moving IRQ processing to different cores... Anyway, I've noticed a couple of things that might improve performance:
Attaching a patch that allows using the Realtek driver under OpenWRT. Don't forget to enable it in make menuconfig and disable the kernel one.
@dmitriarekhta - After a ton of back-and-forth discussing the IRQ affinity issues for the Intel I340, we found out that it's impossible to spread interrupts over multiple cores on the Pi, so that's always going to be a limiting factor. That's why, unless the hardware supports some of the more advanced features, you can't saturate the Pi's ~3.4 Gbps PCIe lane with network packets unless you use jumbo frames :( The 10G ASUS adapter I tested does seem able to support higher speeds with normal frames.
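On the Linux side, enabling jumbo frames is typically just an MTU change; a sketch (interface name and peer address are assumptions):

```shell
# Raise the MTU on the 2.5 GbE port; every hop in the path (switch,
# endpoints) must agree on the larger MTU or frames will be dropped
sudo ip link set dev eth1 mtu 9000

# Verify end-to-end with a do-not-fragment ping:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 -c 3 192.168.10.1
```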
@dmitriarekhta - Hmm, after re-reading your comment—were you able to get affinity spread out over all four cores? If so, that sounds like it would be a huge boost for performance. |
@geerlingguy Did you ever try this dual 2.5 GbE card using OpenWrt? I've been using wolfy's OpenWrt build on an rpi4 paired with a USB3 Realtek 1 GbE NIC, with great success routing 1 Gbps at wire speed. I had to manually assign CPU affinity for the NICs, which was trivial on OpenWrt. The CPU cores sit at ~35% during heavy routing workloads. My hope is that the Syba dual 2.5 GbE card can route around 1.5 Gbps; that's the link my ISP provides via a cable modem (with a 2.5 GbE port), and Redshirt Dan wants all of the bandwidth that I'm paying for :-)
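Manually pinning NIC interrupts, where the platform allows it, usually comes down to writing CPU bitmasks into /proc; the IRQ numbers below are made up for illustration (check /proc/interrupts for the real ones):

```shell
# Find the IRQ lines belonging to each ethernet port
grep -i eth /proc/interrupts

# Pin one NIC's IRQ to CPU1 and the other's to CPU2
# (bitmask: 0x2 = CPU1, 0x4 = CPU2; IRQs 38/39 are placeholders)
echo 2 > /proc/irq/38/smp_affinity
echo 4 > /proc/irq/39/smp_affinity
```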
@That-Dude - I have not yet, but still have it in my list of 'projects I want to try out'. |
Cool, I'll keep an eye out. Love your YouTube channel, man.
Great stuff! I'm eagerly awaiting any additional information!
I am using the card on the rpi 5 with OpenWrt, and it works great with 1-gig fiber internet. It would be great if someone with faster internet could test it.
@leobsky - If you have another computer on the network that's wired with a > 1 Gbps connection, you can try running
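A typical way to run that kind of test is iperf3 in server mode on the faster-wired machine and client mode on the Pi (the address below is hypothetical):

```shell
# On the machine with the > 1 Gbps wired connection:
iperf3 -s

# On the Pi 5 behind the Syba card (upload, then download with -R):
iperf3 -c 192.168.1.50 -t 30
iperf3 -c 192.168.1.50 -t 30 -R
```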
Well, if one is good (see #40), two is surely better, right?
Originally inspired by chinmaythosar's comment, then bolstered by ServeTheHome's review, and finally encouraged by the ease of testing (besides accidentally blowing the magic smoke out of the first card I bought) in #40, I've decided to buy a new Syba Dual 2.5 Gbps Ethernet PCIe NIC, which has not one but two Realtek RTL8125s. On top of that, it has a built-in PCIe switch, the ASMedia ASM1182e.
So it would be neat to see if this card works out of the box with the same ease as the Rosewill card I tested, which had no PCIe switch and only one port.
If it does work, it will be interesting to see how many bits I can pump through (especially testing overclocks and jumbo frames): can it match the 4.15 Gbps performance of the Intel I340-T4 + internal Gigabit interface? I'm guessing not, at least not by itself, because it seems there's a hard limit on the PCIe bus that's reached well before the 5 Gbps PCIe Gen 2 x1 lane limit.
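As a back-of-the-envelope check on that limit: PCIe Gen 2 signals at 5 GT/s with 8b/10b encoding, so a single lane carries at most 4 Gbps of data, and TLP/DLLP protocol overhead typically shaves off roughly another 15-20%, which lands right around the ~3.2-3.4 Gbps ceiling seen in these tests. The arithmetic:

```shell
# PCIe Gen 2 x1: 5 GT/s line rate; 8b/10b encoding leaves 80% for data,
# and packet (TLP/DLLP) overhead typically costs another ~15-20%
awk 'BEGIN {
  raw = 5.0 * 8 / 10
  printf "raw: %.1f Gbps, usable: ~%.1f-%.1f Gbps\n", raw, raw * 0.80, raw * 0.85
}'
```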