
# [Perf] Windows/x64: Regressions in System.IO.Compression #4318

**Open** · performanceautofiler bot opened this issue on Jul 16, 2024 · 4 comments

### performanceautofiler bot commented on Jul 16, 2024

## Run Information

| Name | Value |
| --- | --- |
| Architecture | x64 |
| OS | Windows 10.0.22631 |
| Queue | ViperWindows |
| Baseline | 101c0daf5aa76451304704481a0d82d328498950 |
| Compare | 1164d2fe49449a914cc86f3b59973be7a60668fd |
| Configs | CompilationMode:tiered, RunKind:micro |

## Regressions in System.IO.Compression.Deflate

| Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| System.IO.Compression.Deflate.Compress(level: Fastest, file: "sum") | 175.89 μs | 233.63 μs | 1.33 | 0.16 | False | | | |

[graph: benchmark history]

Test Report

## Repro

General docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md

```shell
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.IO.Compression.Deflate*'
```

### System.IO.Compression.Deflate.Compress(level: Fastest, file: "sum")

- ETL Files
- Histogram
- JIT Disasms

### Docs

- Profiling workflow for dotnet/runtime repository
- Benchmarking workflow for dotnet/runtime repository
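For context on what this benchmark exercises: in .NET, `CompressionLevel.Fastest` maps to the lowest deflate effort level, trading compression ratio for throughput. A minimal Python sketch using the stdlib `zlib` module illustrates the tradeoff (level 1 as a rough stand-in for `Fastest`, level 6 for the default; the input here is an arbitrary payload, not the benchmark's "sum" file):

```python
import zlib

# Arbitrary repetitive payload as a stand-in for the benchmark input file.
data = bytes(range(256)) * 4096

# Level 1 roughly corresponds to CompressionLevel.Fastest;
# level 6 is zlib's default, roughly CompressionLevel.Optimal.
fast = zlib.compress(data, level=1)
default = zlib.compress(data, level=6)

# Both round-trip to the original data; Fastest usually produces
# a larger output in exchange for lower CPU cost.
assert zlib.decompress(fast) == data
assert zlib.decompress(default) == data
print(len(fast), len(default))
```

This is why regressions can be level-specific: a change in the underlying deflate implementation (here, the zlib-ng update discussed below) can affect the fast levels differently from the default ones.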


## Run Information

| Name | Value |
| --- | --- |
| Architecture | x64 |
| OS | Windows 10.0.22631 |
| Queue | ViperWindows |
| Baseline | 101c0daf5aa76451304704481a0d82d328498950 |
| Compare | 1164d2fe49449a914cc86f3b59973be7a60668fd |
| Configs | CompilationMode:tiered, RunKind:micro |

## Regressions in System.IO.Compression.Gzip

| Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| System.IO.Compression.Gzip.Compress(level: Fastest, file: "sum") | 172.41 μs | 206.58 μs | 1.20 | 0.08 | False | | | |

[graph: benchmark history]

Test Report

## Repro

General docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md

```shell
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.IO.Compression.Gzip*'
```

### System.IO.Compression.Gzip.Compress(level: Fastest, file: "sum")

- ETL Files
- Histogram
- JIT Disasms

### Docs

- Profiling workflow for dotnet/runtime repository
- Benchmarking workflow for dotnet/runtime repository
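The Gzip benchmark measures essentially the same deflate work as above plus gzip framing (header and CRC trailer), which is why both regressed together. A hedged Python sketch with the stdlib `gzip` module (again using an arbitrary stand-in input, with `compresslevel=1` approximating `CompressionLevel.Fastest`):

```python
import gzip

# Arbitrary repetitive payload as a stand-in for the benchmark input file.
data = b"example payload " * 8192

# compresslevel=1 roughly corresponds to CompressionLevel.Fastest.
compressed = gzip.compress(data, compresslevel=1)

# gzip output wraps the same deflate stream the Deflate benchmark measures,
# so a deflate-level regression shows up in both benchmark families.
assert gzip.decompress(compressed) == data
print(len(compressed) < len(data))
```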


*LoopedBard3 transferred this issue from dotnet/perf-autofiling-issues on Jul 16, 2024*

*LoopedBard3 changed the title from "[Perf] Windows/x64: 6 Regressions on 7/9/2024 12:32:58 AM" to "[Perf] Windows/x64: Regressions in System.IO.Compression" on Jul 16, 2024*
### LoopedBard3 (Member) commented on Jul 16, 2024

### carlossanlop (Member) commented
Improvements are listed in the PR dotnet/runtime#104454

I also ran the microbenchmarks on a variety of machines and got these results myself, which I posted here:

The maintainers of zlib-ng shared some cases where regressions are expected, starting with this comment and a few more underneath: dotnet/runtime#102403 (comment)

But I have a question: why am I seeing the exact same values between baseline and compare?

[screenshot]

[screenshot]

### LoopedBard3 (Member) commented

That seems like a bug in the report generation; we will take a look into it. The numbers in the table on this issue look correct, though, so I would use those for the baseline value. You should also be able to get specific data points by clicking on the graph in the report and hovering over the spot you want the value for.

*jeffschwMSFT transferred this issue from dotnet/runtime on Jul 17, 2024*
### DrewScoggins (Member) commented

> But I have a question: why am I seeing the exact same values between baseline and compare?

@carlossanlop Almost certainly you are seeing the same values for baseline and compare because you are looking at the all-test-history pages that we generate. When we added support for those, we reused our existing report template, which was designed for reports with different baseline and compare values. Hope this makes sense.
