Replies: 3 comments 5 replies
-
To start by answering your question: it was primarily created to allow a clean comparison of the single-threaded implementations, which form the core set. As for your contrast of 1 and 2, I understand your perspective, but I don't fully agree. I think 2 can in fact be quite interesting when you compare the number of iterations per second per thread with that of a single-threaded implementation. That does say something about a language's/runtime's overhead when using and managing multiple threads.
-
You can’t really “multithread” prime sieving all that well; what’s most effective is running the prime sieve independently on multiple threads. Rewarding sieve developers for running the same sieve on as many threads as possible seems like a poor decision, especially when the benchmarks are about what the language and compiler are capable of.
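To make the "independent sieve per thread" setup concrete, here is a minimal sketch (not the benchmark's actual harness; names like `run_passes` and `LIMIT` are illustrative). Each worker runs its own complete sieve repeatedly for a fixed time budget and reports its pass count; comparing passes per worker with a single-worker run is exactly the per-thread-overhead comparison discussed above. Processes are used rather than threads here only because of CPython's GIL.

```python
import time
from concurrent.futures import ProcessPoolExecutor

LIMIT = 1_000_000  # sieve size; the drag-race classically sieves up to 1e6

def sieve(limit: int) -> int:
    """Plain sieve of Eratosthenes; returns the count of primes <= limit."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            # Clear all multiples of p starting at p*p
            flags[p * p :: p] = bytes(len(range(p * p, limit + 1, p)))
    return sum(flags)

def run_passes(seconds: float) -> int:
    """Run full, independent sieve passes until the time budget expires."""
    deadline = time.perf_counter() + seconds
    passes = 0
    while time.perf_counter() < deadline:
        sieve(LIMIT)
        passes += 1
    return passes

if __name__ == "__main__":
    workers = 4
    with ProcessPoolExecutor(workers) as pool:
        per_worker = list(pool.map(run_passes, [1.0] * workers))
    # Total passes grow with the worker count almost for free; the
    # interesting number is passes *per worker* versus a solo run.
    print(per_worker, sum(per_worker))
```

Note that nothing here parallelizes a single sieve; each worker's result is an unchanged single-threaded run, which is why total pass counts scale so easily with core count.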
-
I have written code for sieving using parallelism: sieving up to 1.0e10 takes 0.90 s, and generating the primes into a list takes 30 s. Is that good?
-
I note that some solutions achieve massive pass counts, especially on Dave's 32-core machine, simply by running the sieve function in many threads. This does not strike me as measuring the right thing. Let me contrast it like so:
1. Parallelizing a single sieve across multiple threads.
2. Running independent copies of the same single-threaded sieve on as many threads as possible.
Here I find 1 super interesting, and 2... well, not so interesting.
What's the intention of having the multithreaded category in the benchmark?