Expected read and write performance of NVMe? #77
-
Hi, I just programmed in the latest 2021.04 FPGA image and Yocto build. I am evaluating the PolarFire SoC for a product that writes Ethernet packets to an NVMe device. My initial basic benchmarking attempts using dd (dd if=/dev/zero) are showing quite slow write performance, on the order of 23 MB/s. Do you have any internal benchmarks for NVMe write performance? At the moment I'm not sure whether the limit is the CPU or the PCIe interface; I suspect the former, even though dd should be trivial. I will be investigating this in the coming days, but if you have any internal documentation or benchmarks, could you share them? Regards
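For anyone trying to reproduce the numbers, a minimal sequential-write test might look like the sketch below. `TARGET` is a placeholder (the NVMe mount point will vary per setup; `/tmp` is only a safe default). Writing without a flush lets the page cache inflate the reported rate, so `conv=fsync` is used here.

```shell
# Rough sequential write test with dd.
# TARGET is a placeholder: point it at a directory on the NVMe filesystem
# (e.g. the drive's mount point) to measure the drive rather than /tmp.
TARGET=${TARGET:-/tmp}
# conv=fsync forces a flush before dd reports throughput, so the page
# cache doesn't inflate the number. On a real drive, adding oflag=direct
# bypasses the cache entirely for a cleaner device-path figure.
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=64 conv=fsync
rm -f "$TARGET/ddtest.bin"
```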
Replies: 10 comments
-
Hey @diarmuidcwc, we are investigating performance issues with PCIe at the moment, as there have been a few reports of issues relating to throughput. Have you seen this issue? I'm trying to find what information I can share with you in terms of benchmarks; I'll get back to you when I have something useful. Cheers,
-
Hugh, I just did some rudimentary tests using dd and fio. Mostly dd, as fio is very slow on the board.
I am getting a lot of timeout errors like this:
My NVMe has some activity LEDs, and when running these tests the LEDs only flash briefly, leading me to guess that the issue is not on the NVMe side. It suggests that accesses are brief, with long delays between them, which would be consistent with timeouts.
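For comparison, an fio job roughly equivalent to the dd run above might look like the sketch below. `TARGET` is again a placeholder for a directory on the NVMe mount, and the `command -v` guard is just there so the script exits cleanly where fio isn't installed; on the board you would likely add `--direct=1` and `--ioengine=libaio` to take the page cache out of the picture.

```shell
# fio sequential-write job, roughly equivalent to the dd test.
# TARGET is a placeholder: point it at a directory on the NVMe mount.
command -v fio >/dev/null 2>&1 || { echo "fio not installed"; exit 0; }
TARGET=${TARGET:-/tmp}
fio --name=seqwrite --filename="$TARGET/fiotest.bin" \
    --rw=write --bs=1M --size=64M --ioengine=psync
rm -f "$TARGET/fiotest.bin"
```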
-
Yeah, the work we're doing in-house suggests we're CPU-bound, in general. We're digging into exactly why that is at the moment. For example, dd only runs on one CPU, and each dd process currently maxes out at around 29-30 MB/s while the PCIe/NVMe system stays mostly idle. To see what I mean, you should be able to log in 4 times (for example, 4 ssh sessions), run the dd command above in each session, and get roughly the same performance (say ~20 MB/s) in each job.
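The four-session experiment above can also be scripted as parallel dd jobs from a single shell, which is a quick way to see whether aggregate throughput scales past a single CPU. A sketch, with `TARGET` again a placeholder for a directory on the NVMe mount:

```shell
# Run four dd writers in parallel (one per session in the experiment above)
# and wait for all of them; if the per-job rate holds, the bottleneck is
# per-CPU rather than the PCIe/NVMe path.
TARGET=${TARGET:-/tmp}   # placeholder: use a directory on the NVMe mount
for i in 1 2 3 4; do
  dd if=/dev/zero of="$TARGET/ddtest.$i" bs=1M count=32 conv=fsync &
done
wait
rm -f "$TARGET"/ddtest.[1-4]
```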
-
Thanks, guys. Good news for me that others are seeing this!
-
Hey @diarmuid, our latest reference design and Linux releases contain changes to the PCIe which have shown performance improvements for NVMe drives. If you want, give it a try and see whether it also improves things for you.
-
Thanks. I'll check it out.
-
Not so sure about these improvements. Maybe it's my particular setup. I took the 2021.08 FPGA image + wic.
-
Hi @diarmuid, any luck with the other NVMe?
-
Anecdotally, things seem more stable to me with the 2022.02 release. I used to see fairly frequent corruption when interacting with the NVMe (I've got a WD Blue 250GB SSD attached) but haven't seen any lately.
-
Hi everyone in this thread, we have made a range of changes to our PCIe implementation in our latest release, 2023.02. If you have been having issues, please test with this release, as we have resolved a number of problems related to card support. There may also be performance improvements, depending on your card and what you were doing.