Releases: L3tum/CPU-Benchmark
v0.6.0
Right, this isn't the GUI update sadly. I'm kinda struggling with XAML right now, so I'm trying to avoid it. That delayed the whole thing quite a bit though.
Codename for this release: "If I didn't want a GUI by Version 1.0, this would have been 1.0".
I've also decided on a name for this application (since CPU-Benchmark is a bit too generic :P). Please welcome:
RIALBench
I thought it was a nice nod to "real", and I associate quite a bit with the name "Ria". I'll update the name everywhere over the next few weeks.
Either way, here are some updates for the current version:
- Added the complementary repos to the Readme (only housekeeping, but you can now jump directly to the related repos)
- Switched to a different rating algorithm. Points now range from 0 to 10000 rather than being all over the place like Geekbench's. This should make points easier to interpret and benchmarks easier to update in the future (sketched after this list)
- Implemented benchmark scaling, as seen in Cinebench for example: the volume of a benchmark scales with the number of cores working on it, though not linearly, since that would increase the volume way too much. Tests have shown that the results are still representative, and it should enable better benchmarks on single cores as well as on big CPUs (> 12 cores); see the sketch after this list
- Adjusted the default benchmark volume. Since benchmarks now scale, there's no need for a 5-minute single-core run just because it would otherwise be a 5-second multi-core run. The goal is to keep each benchmark below 1 second on the reference CPU. This cuts the time taken to benchmark from roughly ~15 minutes (on my Intel laptop) down to ~3 minutes.
- Replaced the on-the-fly generated HTML/JSON with real-world data. The binary is a bit larger, but this cuts down on the benchmarking times for those benchmarks and makes them more representative.
- Switched the GC mode from LowLatency to SustainedLowLatency. The former is not available on Windows Server, so this should allow the program to run on Windows Server as well (sketched after this list)
- Added an experimental throughput statistic, which is ignored when uploading your results but should give a somewhat accurate picture of the throughput in bytes you achieved per benchmark
- Moved most of the communication code to the new Common Library (linked in the Readme). This greatly simplifies the communication between Benchmarker, Server and Website and should (theoretically) enable third-party websites as well as third-party benchmarks.
- Added a pure SHA-256 benchmark
- Improved the performance of the on-the-fly data generation. Since large amounts of data are generated for the benchmarks, this should improve the overall runtime a bit
- Added pregenerated random data to decrease the generation time a bit more
- Added ThreadAffinity and Priority settings, which should decrease the fluctuation in results quite a bit (sketched after this list)
- Added stress tests for the extension benchmarks. There's more to come and I'm not quite happy with the implementation just yet, but it works.
- Added more AVX and SSE benchmarks
- Added new AVX and SSE categories
- Added a new experimental L2CacheLatency benchmark
- Decreased the memory consumption of the decryption benchmark (it was quite insane)
- Improved extension benchmarks in general
- Refactored options parsing to increase code quality in Program.cs
- Bumped CommandLineParser version to latest
- Bumped HTMLAgilityPack to latest
- Bumped Common Library and HardwareInformation to latest
- Added automated GitHub and Docker release pipelines; let's see if they work
- On that note, also added multi-platform Docker images. Currently available: linux-amd64, linux-arm64 and linux-arm32v7
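To make the new 0-10000 scale concrete, here's a minimal sketch of the idea. The 5000-point midpoint, the reference times and the linear curve are illustrative assumptions for this example, not the actual rater:

```csharp
using System;

// Illustrative only: score a run against a reference time on a fixed
// 0-10000 scale. The midpoint of 5000 and the linear curve are
// assumptions for the sake of the example, not the actual implementation.
public static class RatingSketch
{
    public static double Rate(double referenceSeconds, double actualSeconds)
    {
        // Matching the reference lands at 5000 points; faster runs
        // score proportionally more, capped at 10000.
        double ratio = referenceSeconds / actualSeconds;
        return Math.Clamp(ratio * 5000.0, 0.0, 10000.0);
    }
}
```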
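The benchmark scaling boils down to growing the workload sub-linearly with core count. A sketch of the general idea (the square-root curve is an illustrative assumption, not the actual scaling function):

```csharp
using System;

// Illustrative only: scale the benchmark volume sub-linearly with the
// number of cores. The square-root curve is an assumption; the real
// scaling function may differ.
public static class ScalingSketch
{
    public static ulong ScaleVolume(ulong baseVolume, int cores)
    {
        // Linear scaling (baseVolume * cores) would blow up the volume
        // far too much on big CPUs, so grow it more slowly instead.
        return (ulong)(baseVolume * Math.Sqrt(cores));
    }
}
```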
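The GC change is essentially a one-liner via `GCSettings`:

```csharp
using System.Runtime;

class GcSetup
{
    static void Configure()
    {
        // SustainedLowLatency is supported on Windows Server,
        // unlike the old LowLatency mode.
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
    }
}
```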
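And the ThreadAffinity/Priority settings amount to something like this (the affinity mask and priority levels below are examples, not necessarily the exact values the benchmarker uses):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class AffinitySketch
{
    static void Pin()
    {
        // Raise priorities and pin the process to fixed cores so the
        // scheduler doesn't bounce the benchmark around between runs.
        var process = Process.GetCurrentProcess();
        process.PriorityClass = ProcessPriorityClass.High;
        process.ProcessorAffinity = (IntPtr)0b0011; // example: cores 0 and 1
        Thread.CurrentThread.Priority = ThreadPriority.Highest;
    }
}
```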
Bugfixes:
- Fixed a bug in the ZIP Benchmark
- Fixed a bug causing the progress bar to jump around after completion
- Fixed release pipeline 😄
v0.5.0
This is most likely going to be the last update of the year :)
Next focus will be the website and after that the GUI.
v0.5.0 (2019-12-23)
- Fixed an exception that could occur on older hardware that doesn't support one of the instruction extensions
- Added SSE2 (128-bit integer), AVX2 (256-bit integer) and FMA (fused multiply-add of 128-bit float) benchmarks (sketched after this list)
- Added arithmetic_double benchmark
- Added arithmetic_fp16 benchmark
- Added support for multiple categories per benchmark to better group them together
- Added an `uploaded` field to the save data
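For a feel of what the new extension benchmarks exercise, here's a tiny sketch using .NET's hardware intrinsics. This just shows the instructions involved; the real benchmarks obviously loop over much larger volumes:

```csharp
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

class IntrinsicsSketch
{
    static void Demo()
    {
        if (Sse2.IsSupported)
        {
            // 128-bit packed integer add (SSE2)
            Vector128<int> s = Sse2.Add(Vector128.Create(1), Vector128.Create(2));
        }
        if (Avx2.IsSupported)
        {
            // 256-bit packed integer add (AVX2)
            Vector256<int> a = Avx2.Add(Vector256.Create(1), Vector256.Create(2));
        }
        if (Fma.IsSupported)
        {
            // Fused multiply-add on 128-bit packed floats: a * b + c
            Vector128<float> r = Fma.MultiplyAdd(
                Vector128.Create(1.5f), Vector128.Create(2.0f), Vector128.Create(0.5f));
        }
    }
}
```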
v0.4.1
v0.4.1 (2019-12-22)
Updates:
- Fixed an exception that could occur when calculating the hash of the current saved data
- Added a "category aggregator" that lets users run each benchmark separately; once all benchmarks for a category (or "all") have been run, that category is added to (or updated in) the results
v0.4.0
Happy Christmas :)
Change Log
v0.4.0 (2019-12-21)
- Added a clear option to clear all (locally) saved data
- Added an upload option to upload the last (valid) benchmark run instead of uploading unconditionally
- Deprecated the `-q`/`--quick` option in favor of the upload option
- Moved save data to the hidden directory "_save" to make it easier to manage
- Switched to only allow uploading once "category:all" (`--benchmark all`) has been run
- Added an option to view uploaded results in the browser
- Simplified progress reporting
- Switched reference values to always refer to the all-core performance of the stock 3900X to simplify and unify the point system (sketched after this list)
- Reworked the categorization logic to clean up the code and fix some bugs
- Adjusted volumes of several benchmarks to make the run faster yet still comparable. A full benchmark run now takes ~60 seconds on the 3900X.
- Added comparisons to benchmarks. These serve as the new "reference" value that is only printed to the user rather than used in calculating the points. These can be easily expanded to more than SC/AC references.
- Reworked the result-saving logic to be more concise and easier on the user, while also being stricter and more secure against tampering with the save file
- Added better error messages
- Fixed some minor bugs that could pop up in specific situations
- Fixed a memory leak that occurred when saving the results (which should only happen immediately before closing the program, but you never know)
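To illustrate what anchoring the points to the 3900X means in practice, here's a hypothetical sketch. The field name and the multiplier are made up for the example, not the actual code:

```csharp
// Illustrative only: every benchmark carries the stock 3900X's
// all-core time as its reference, and points come from the ratio
// against it. The field name and the 1000x multiplier are assumptions.
public sealed class BenchmarkResultSketch
{
    public double Reference3900XSeconds { get; set; } // hypothetical field

    public double Points(double measuredSeconds)
    {
        // Matching the 3900X's all-core time yields the baseline score;
        // faster runs score proportionally more.
        return Reference3900XSeconds / measuredSeconds * 1000.0;
    }
}
```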
v0.3.1
v0.3.0
v0.3.0 (2019-10-26)
v0.2.0
v0.2.0 (2019-09-14)
Updates:
- Removed Brotli since it just took way too damn long
- Feature/prerelease #64 (L3tum)
- Switch to external lib #63 (L3tum)
- Update netcore.yml #62 (L3tum)
- Feature/improvements #60 (L3tum)
- Feature/rework benchmark structure #59 (L3tum)
- Feature/rework rater #58 (L3tum)
- Update build.ps1 #55 (L3tum)
- Added option to list all benchmarks #54 (L3tum)
- Update Readme.md #53 (L3tum)
- Update label-manager.yml #52 (L3tum)
- Add more actions #51 (L3tum)
- Update netcore.yml #50 (L3tum)
- Update netcore.yml #49 (L3tum)
- Add github action for simple build on pull request #48 (L3tum)
- Switch rating algorithm to linear #47 (L3tum)
- Feature/add changelog generator #46 (L3tum)
- Add result saver and machine information #27 (L3tum)
v0.1.1 Bugfixes
v0.1.1 (2019-09-03)
First release :)
Initial release. Still a lot to do, but the mostly "synthetic" benchmarks reflect real-life workloads pretty well IMO. Feedback welcome though.
Instructions are in the Readme or (slightly outdated) in the Wiki.