Add threat model #2033

Status: Draft. dstebila wants to merge 1 commit into main.
Conversation

@dstebila (Member) commented Jan 3, 2025:

As discussed in the 2024-12-05 TSC meeting, we want to update our SECURITY.md file to mention our threat model. I have based our threat model on the one in the OpenSSL security policy.

Feedback and discussion welcome.

Signed-off-by: Douglas Stebila <[email protected]>
@dstebila requested a review from a team January 3, 2025 19:53
@dstebila self-assigned this Jan 3, 2025

Excerpt from the proposed SECURITY.md text under review:
For certain algorithms and platforms, liboqs aims for "constant-time code" in the sense of no branching or memory accesses based on secret data.

- Algorithms: The [algorithm datasheets](https://github.com/open-quantum-safe/liboqs/blob/main/docs/algorithms/) indicate whether each algorithm and implementation aims for constant-time behaviour. These should be read in conjunction with the [issues/passes files for each algorithm's constant-time tests](https://github.com/open-quantum-safe/liboqs/tree/main/tests/constant_time).
- Platforms: We aim for constant-time behaviour on our [Tier 1 platforms](https://github.com/open-quantum-safe/liboqs/blob/main/PLATFORMS.md).
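
To illustrate the sense of "constant-time" used above, here is a minimal C sketch (illustrative only; not taken from liboqs) contrasting a secret-dependent branch with a branch-free equivalent:

```c
#include <stdint.h>

/* NOT constant-time: the branch direction leaks whether `secret` is zero. */
uint32_t select_branching(uint32_t secret, uint32_t a, uint32_t b) {
    if (secret != 0)
        return a;
    return b;
}

/* Constant-time: derive an all-ones or all-zeros mask from the secret and
 * combine both candidates arithmetically, with no secret-dependent branch.
 * (In practice one must also keep the compiler from reintroducing branches.) */
uint32_t select_ct(uint32_t secret, uint32_t a, uint32_t b) {
    uint32_t mask = -(uint32_t)(secret != 0); /* 0 or 0xFFFFFFFF */
    return (a & mask) | (b & ~mask);
}
```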
Member commented:
I'm a little hesitant to say that we aim for constant-time behaviour on all Tier 1 platforms, given that we only run constant-time tests on Linux / x86_64. There's a lot of aarch64 code that isn't constant-time tested, and we certainly don't run the CT tests on macOS. (Does Valgrind even work on macOS?) Maybe it would be best to say that we will respond to reports of CT issues on Tier 1 platforms.

I did actually try out the CT tests on the Arm runner late last year, and the results were OK: just a handful of reports on Falcon and McEliece that I haven't had time to pick through. Maybe this is a signal to take up that work again.
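
For background, the constant-time tests referenced here run implementations under Valgrind, marking secret data as undefined memory so that memcheck flags any branch or memory access derived from it. A minimal sketch of the idea (the function under test is illustrative, not the actual liboqs harness):

```c
#include <stdint.h>
#include <string.h>
#include <valgrind/memcheck.h>

/* Constant-time comparison under test: returns 0 iff a == b. */
static int ct_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i]; /* accumulate differences without branching */
    return diff != 0;
}

int main(void) {
    uint8_t secret[32], guess[32] = {0};
    memset(secret, 0x42, sizeof secret);

    /* "Poison" the secret: memcheck now treats it as uninitialised and will
     * report any conditional jump or address computation that depends on it. */
    VALGRIND_MAKE_MEM_UNDEFINED(secret, sizeof secret);

    int r = ct_compare(secret, guess, sizeof secret);

    /* Un-poison the result before public code branches on it. */
    VALGRIND_MAKE_MEM_DEFINED(&r, sizeof r);
    return r ? 1 : 0;
}
```

Run under `valgrind`, a secret-dependent branch inside `ct_compare` would surface as a "Conditional jump or move depends on uninitialised value(s)" report.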

Member commented:

If valgrind works on macOS I've never gotten it to work. 😓

Member Author (@dstebila) commented:

Good point. I was hoping to tie the threat model back to our supported platforms in some way, but I guess I didn't get it right. Yes, we could say that we will respond to reports of CT issues on Tier 1 platforms, and that for all other tiers of platforms constant-time behaviour is nice-to-have but not actively supported (or some other wording?).

Member commented:

The PLATFORMS.md file has a † next to platforms on which constant-time tests are run—maybe we can refer to this in the threat model and make an effort to expand the list past x86_64 / Linux.

Member commented:

This does raise an interesting conversation. Does it make sense for some Tier 1 platforms to be supported in different ways (in this case, constant-time checks) than others? Maybe it would be worth having a set of Tier 0 platforms? Forgive me for not being super active, so there might be other cases where we already support some Tier 1 platforms differently than others, but having a Tier 0 "we aim to support everything on this platform" designation might be a nice distinction to have.

Member commented:

Thanks for the proposal @SWilson4. The concrete actions taken and listed are OK and correct (though still not precise enough in my eyes: which platforms are covered by the term "select", for example?).

The leading statement, I think, is still too broad a promise, though:

> Timing-based side-channel attacks are within the scope of our threat model.

as the trailing caveats document. Therefore, the leading statement should be "qualified" at least along the lines of

> Some timing-based side-channel attacks are within the scope of our threat model.

And re-reading the caveat again

> OQS will make a best effort to respond to reports of non–constant-time behaviour that fall outside the scope of these tests.

I wonder what this "best effort" entails. Will we change code that OQS did not author? Will OQS simply defer to upstreams offering such maintenance? What if no one is available to take a look?

This collection of "some", "select", and "best effort" makes me wonder whether this section is too weak to retain at all: would users read any reliable commitment out of this? Is it thus worthwhile retaining in the first place?

Member commented:

> Thanks for the proposal @SWilson4. The concrete actions taken and listed are OK and correct (though still not precise enough in my eyes: which platforms are covered by the term "select", for example?).

My intention with "select" was to refer to PLATFORMS.md via the link, which has more detail on which platforms have the constant-time tests. What about this wording:

"OQS tests for secret-dependent branches and memory accesses on select Tier 1 platforms. See PLATFORMS.md for details."

> The leading statement, I think, is still too broad a promise, though:
>
> > Timing-based side-channel attacks are within the scope of our threat model.
>
> as the trailing caveats document. Therefore, the leading statement should be "qualified" at least along the lines of
>
> > Some timing-based side-channel attacks are within the scope of our threat model.
>
> And re-reading the caveat again
>
> > OQS will make a best effort to respond to reports of non–constant-time behaviour that fall outside the scope of these tests.
>
> I wonder what this "best effort" entails. Will we change code that OQS did not author? Will OQS simply defer to upstreams offering such maintenance? What if no one is available to take a look?

I think that patching efforts will vary greatly depending on the nature of the report. At a minimum, we should clearly document and disseminate information about the bug—as we have done for constant-time test failures for pre-standards algorithms like McEliece or Falcon. That said, would details about our response not be better suited for the security response process document?

> This collection of "some", "select", and "best effort" makes me wonder whether this section is too weak to retain at all: would users read any reliable commitment out of this? Is it thus worthwhile retaining in the first place?

I think this section clearly
(1) lays out what we do with regard to automated CT testing,
(2) explains the scope and limitations of our tests, and
(3) surfaces information about known test failures.

In my view, this is valuable information for users to have. What if "best effort" were replaced by redirecting to our (upcoming) security response process document?
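
For concreteness, a documented "pass" of this kind is essentially a Valgrind suppression entry; a hypothetical one might look like this (the suppression name and function names below are illustrative, not actual liboqs entries):

```
{
   hypothetical-pass-rejection-sampling
   Memcheck:Cond
   fun:rej_uniform
   fun:crypto_kem_keypair
}
```

Here `Memcheck:Cond` matches Valgrind's "conditional jump or move depends on uninitialised value(s)" report, and the `fun:` frames pin the suppression to a specific call stack that has been assessed as a false positive.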

Member Author (@dstebila) commented:

Thanks for iterating on the more detailed wording. I tried to incorporate observations from the discussion in the revised text below, changing the first sentence (adding the word "Some") and the last sentence.

> Some timing-based side-channel attacks are within the scope of our threat model. OQS tests for secret-dependent branches and memory accesses on select Tier 1 platforms. All test failures are documented as either "passes," which we have assessed to be false positives, or "issues," which may constitute non–constant-time behaviour. The algorithm datasheets indicate whether or not an implementation passes our constant-time tests. Details about passes and issues are available in the tests/constant_time directory.
>
> These tests do not encompass all classes of non–constant-time behaviour; for example, they do not detect possible variable-time instructions, such as DIV. Reports of non–constant-time behaviour that fall outside this scope will be considered on a case-by-case basis.

Regarding Michael's comment:

> This collection of "some", "select", and "best effort" makes me wonder whether this section is too weak to retain at all: would users read any reliable commitment out of this? Is it thus worthwhile retaining in the first place?

I think the goal right now is to clearly document our current practice so that users can make their own decision. I think the text you've been iterating on is pretty close to reflecting our current practice. As to whether users will find it a sufficiently reliable commitment, well, that's up to them. If they want a stronger commitment, then hopefully that will drive them to participate.
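
As a side note on the "variable-time instructions, such as DIV" caveat above: the usual remedy is to replace hardware division with a multiply-and-shift reduction whose timing does not depend on operand values. A minimal sketch (not taken from liboqs; the modulus q = 3329 from ML-KEM is used purely as a familiar example):

```c
#include <stdint.h>

#define Q 3329u          /* example modulus (ML-KEM's q) */
#define BARRETT_M 20158u /* floor(2^26 / Q), precomputed */

/* Reduce a 16-bit value modulo Q without emitting a DIV instruction:
 * estimate the quotient with a multiply and shift, then apply one
 * branch-free conditional subtraction. */
static uint32_t mod_q(uint32_t a) /* requires a < 2^16 */
{
    uint32_t q_est = (a * BARRETT_M) >> 26; /* floor(a/Q) or one less */
    uint32_t r = a - q_est * Q;             /* now r < 2*Q */
    r -= Q & (-(uint32_t)(r >= Q));         /* subtract Q iff r >= Q */
    return r;
}
```

The precomputed reciprocal lets the compiler avoid `div`/`udiv`, whose cycle count varies with operand values on a number of microarchitectures.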

Member commented:

> If they want a stronger commitment, then hopefully that will drive them to participate.

To move this beyond "hope", what about making this explicit, then? Maybe with an introduction to the whole document reading like this:

> This project is not commercially supported. All guidelines and goals stated below are reflections of current practices, executed by a community of voluntary contributors on a best-effort basis, and may change at any time.
>
> Any entity seeking more reliable commitments is strongly encouraged to join our community and thus enhance the code and support that the community can provide.

Member Author (@dstebila) commented:

This feels like a blanket statement that applies more broadly than the threat model. Is there a better place for it?

@SWilson4 requested a review from a team January 3, 2025 21:40