Replies: 5 comments 12 replies
-
Well, this would exactly be my understanding of the rules in CONTRIBUTING.md. If an implementation wants to be base, it has to use the algorithm as in the original solution, and it has to start checking at 3. If it wants to be faithful, it also has to stick to some memory management rules. So a comparison of results using base and being faithful should still be valid, I think.
-
On top of that, Dave's leaderboard will only include solutions that use 1 bit per prime candidate.
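For illustration, the "1 bit per prime candidate" rule comes down to packing the sieve flags so that each odd candidate occupies a single bit rather than a whole byte. This is a hedged sketch, not taken from any particular solution in the repo; the names (`BitSieve`, `sieve_get`, `sieve_clear`) are invented for the example:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* One bit per odd candidate: candidate n (odd) maps to bit index n/2.
 * A set bit means "still considered prime"; clearing marks a composite. */
typedef struct {
    uint8_t *bits;
    size_t limit;
} BitSieve;

static BitSieve sieve_new(size_t limit) {
    BitSieve s;
    s.limit = limit;
    size_t nbytes = limit / 16 + 1;      /* limit/2 candidates, 8 per byte */
    s.bits = malloc(nbytes);
    memset(s.bits, 0xFF, nbytes);        /* start with everything "prime" */
    return s;
}

static int sieve_get(const BitSieve *s, size_t n) {   /* n must be odd */
    return (s->bits[n / 16] >> ((n / 2) % 8)) & 1;
}

static void sieve_clear(BitSieve *s, size_t n) {      /* n must be odd */
    s->bits[n / 16] &= (uint8_t)~(1u << ((n / 2) % 8));
}
```

The point of the rule is that a byte-per-flag sieve trades memory for speed, so byte-based and bit-based solutions aren't directly comparable.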
-
I think what's considered faithful and what not is a bit of a nitpick when it comes to some details. Also the fact that
-
This is also my response to your comment on #398: the rules are indeed somewhat arbitrary. They're based on a) the 3 implementations that started this project, and b) what Dave has explicitly indicated is important to him (bit twiddling is one of them). They've been in place for a while now, and they've served us to the point that there are now well over 100 implementations in 53 languages in this repo dealing with them.
-
My expectation would be that the "final leaderboard" will be announced/discussed after Dave concludes discussing all the languages in the repo. Considering that we're at around 60 languages at the moment, that might take some time still.
PrimeBASIC/solution_1 could be used as an example.
-
The problem
I just took a look at the implementations for C, C++ and Rust, as those are commonly known for having highly optimized versions in benchmarks. And indeed, there are quite a few highly optimized versions to be found.
But after looking at the C implementations, I was wondering at what point the implementations are no longer comparable.
In my opinion, it is not really possible to compare the languages in any meaningful way when some versions optimize the algorithm itself in the way that most of the C implementations do. Of course, whether the comparison is meaningful is debatable in any case, but I would argue that it may still produce interesting results when all implementations use the same base algorithm.
The original implementation was optimized to skip even numbers, which is quite reasonable and results in no CPU-time overhead while calculating or counting, since it just increments by 2 instead of 1.
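The odds-only approach described above can be sketched roughly like this (an illustrative sketch, not the code of any specific solution in the repo; a byte array is used here for simplicity, even though the leaderboard rule requires one bit per candidate):

```c
#include <stdlib.h>
#include <string.h>

/* Odds-only sieve: one flag per odd number, so every loop steps by 2.
 * Counting stays a trivial linear pass over the flags. */
static int count_primes(int limit) {
    int half = limit / 2 + 1;            /* flag for each odd n: index n/2 */
    char *is_prime = malloc((size_t)half);
    memset(is_prime, 1, (size_t)half);
    for (int factor = 3; factor * factor <= limit; factor += 2) {
        if (!is_prime[factor / 2])
            continue;
        /* start at factor^2; step 2*factor to stay on odd multiples */
        for (int mult = factor * factor; mult <= limit; mult += 2 * factor)
            is_prime[mult / 2] = 0;
    }
    int count = (limit >= 2);            /* account for 2, the only even prime */
    for (int n = 3; n <= limit; n += 2)  /* increment by 2 instead of 1 */
        count += is_prime[n / 2];
    free(is_prime);
    return count;
}
```

The even numbers never appear in memory at all, which is why skipping them costs nothing at counting time.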
Almost all C versions, on the other hand, further optimized the skipping part: not only skipping even numbers, but also skipping higher multiples of other small primes (2, 3, 5), or even skipping multiples of all primes up to 13 in one example. This of course provides a significant performance boost (at least for the first few) but is no longer comparable to the original version (and in turn to any language that doesn't use the same optimizations). The optimization is quite clever and would be valid in any other case, but here it basically uses existing knowledge to shrink the working set to a much smaller size than the other implementations have to deal with. (Just skipping the multiples of 3 removes a further ~166_667 candidates, shrinking the unchecked values from about 500_000 to about 333_333.)
Besides the counting part now no longer being trivial (and not being timed), this makes C/C++, and in turn all languages with implementations that use this optimization, seem much faster relative to other languages than they are.
A language that is inherently slower than another could implement this change and in turn look faster on paper, while the other simply doesn't have this optimization.
Considerations for a solution
OK, I don't really have a solution, but here are some of my considerations: