
Adaptive profile floating-point computations extension #2078

Open: wants to merge 5 commits into main from iadavis/adaptive-float

Conversation

@idavis (Collaborator) commented Dec 26, 2024

Adds dynamic float support, implementing the floating-point computations extension for the Adaptive_RIF profile. This also implements fcmp, which was missing from the spec and has been filed as a spec bug.

The fcmp instructions chosen were the ordered set. The spec says operations like dividing by 0 are undefined behavior, and dividing by 0 usually yields a NaN value. For any fcmp calls we need to choose ordered or unordered; I'm not sure it matters which we decide.
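To illustrate the ordered/unordered distinction (a hedged sketch for context, not code from this PR): in LLVM IR, ordered predicates evaluate to false when either operand is NaN, while the unordered variants evaluate to true.

```llvm
; ordered less-than: false if %a or %b is NaN
%lt_o = fcmp olt double %a, %b
; unordered less-than: true if %a or %b is NaN
%lt_u = fcmp ult double %a, %b
; ordered equality: comparing NaN to itself yields false
%eq_o = fcmp oeq double %x, %x
```

Choosing the ordered set means any comparison involving a NaN produced by undefined operations evaluates to false, which is the conventional IEEE 754 comparison behavior.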

This PR does not change the default profiles for the defined hardware targets.

@idavis idavis self-assigned this Dec 26, 2024
@idavis idavis force-pushed the iadavis/adaptive-float branch from 71011e3 to bcfe36d on January 2, 2025 21:22
@idavis idavis marked this pull request as ready for review January 2, 2025 21:22
compiler/qsc/src/target.rs (outdated review thread, resolved)
pip/qsharp/_native.pyi (outdated review thread, resolved)
///
/// This profile includes all of the required Adaptive Profile and Adaptive_RI
/// capabilities, as well as the optional floating point computation and qubit
/// reset capabilities, as defined by the QIR specification.
Member commented:

As prior comment, including Adaptive_RI already includes reset capabilities.

@idavis (Collaborator, Author) replied:

Reworded them. The adaptive profile requires qubit reset now, so I updated the adaptive RI docs as well.

!8 = !{i32 1, !"classical_fixed_points", i1 false}
!9 = !{i32 1, !"user_functions", i1 false}
!10 = !{i32 1, !"multiple_target_branching", i1 false}
!4 = !{i32 1, !"int_computations", !"i64"}
Member commented:

Will this change in module flags metadata name from classical_ints to int_computations be a breaking change anywhere in our existing flow (including any service validation)? Hopefully Quantinuum can already handle either, as they helped write the spec, but maybe worth a check before changing.

@idavis (Collaborator, Author) replied:

To my knowledge this won't affect anything, and PyQIR has no idea whether either exists, as the adaptive profile work hasn't been started there. @swernli @cesarzc care to weigh in?
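For context, the new floating-point capability presumably gets a module flag analogous to int_computations; the flag name and value in the second line below are assumptions following the spec's naming pattern, not confirmed by this PR:

```llvm
; existing integer capability flag, as shown in the diff
!4 = !{i32 1, !"int_computations", !"i64"}
; hypothetical companion flag for the floating-point capability
!5 = !{i32 1, !"float_computations", !"f64"}
```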

!8 = !{i32 1, !"classical_fixed_points", i1 false}
!9 = !{i32 1, !"user_functions", i1 false}
!10 = !{i32 1, !"multiple_target_branching", i1 false}
!4 = !{i32 1, !"int_computations", !"i64"}
Member commented:

Should we have a test case in this folder that includes an .ll file using the new capability? (Which may mean adding a new sample.)
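Such a sample might only need a small function plus a module-flags block advertising the capability; this is a hypothetical sketch (function shape and flag name assumed from the spec's conventions, not taken from this PR):

```llvm
; hypothetical minimal sample exercising float computations
define double @main(double %a, double %b) {
entry:
  ; dynamic float arithmetic requiring the new capability
  %sum = fadd double %a, %b
  ; ordered comparison, as chosen in this PR
  %ok = fcmp olt double %sum, 1.0e2
  ret double %sum
}

!llvm.module.flags = !{!0}
!0 = !{i32 1, !"float_computations", !"f64"}
```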
