Table of Contents
- Before you begin
- Set up the vulnerability management infrastructure
- Create a vulnerability management team (VMT)
- Publish your vulnerability management process
- Apply the vulnerability response process
- Troubleshooting common challenges to Coordinated Vulnerability Disclosure
- Acknowledgements
This guide is intended to help open source project maintainers create and maintain a coordinated vulnerability response process.
No software is perfect. Software is written by people or artificial intelligence, and both sometimes make mistakes. This is true for closed source as well as open source software. For open source software (OSS), this problem can be more challenging due to the very factors that make OSS so powerful - highly distributed development by multiple contributors. At some point in the life of your project, someone--a user, a contributor, or a security researcher--will find a vulnerability that affects the safety and usefulness of your project. Applying this guide will help you be ready.
This guide was produced by the contributions of individuals and the Open Source Security Foundation Vulnerability Disclosure Working Group. The OpenSSF Vulnerability Disclosure Working Group believes coordinated vulnerability disclosure (sometimes called "responsible disclosure") is the appropriate model for most open source projects. The advice in this guide follows that model. Not all advice in this guide applies to all open source projects; projects may need to modify the recommendations and materials to fit their project.
A vulnerability reporter, aka finder, is someone who reports a vulnerability to a project. There's no single example of how security issues are reported or why people report them. That's one of the things that can make vulnerability management and disclosure tricky: the human on the other side is, well, a human who has their own wants, needs, and interests out of vulnerability disclosure. This is one reason why the phrase "coordinated vulnerability disclosure" (CVD) is now preferred over "responsible disclosure." There are, in general, at least two parties involved, the reporter(s) and project representative(s), and you need to coordinate and work together!
The OpenSSF Vulnerability Disclosure working group uses personas to describe persons/roles that interact within the CVD process. A vulnerability reporter is one of those personas and is often called a finder within the security community. This reporter persona represents many diverse and divergent desires around reporting a security vulnerability to a developer.
Very broadly, reporters fall into two camps: those with a direct connection to the project and those with an indirect connection. Direct reporters are active users of the project or were hired to do work on behalf of a direct user. Because they are direct users, or work for one, direct reporters are strongly motivated to ensure that an issue is patched and smoothly rolled out. They might want to help develop and test a patch—they have a reason to see the problem through to a fix.
Indirect reporters may be security researchers, people doing penetration testing or security audits, or may stumble across an issue in your project due to chasing a problem in a dependent project. They may want to be highly involved in the patching and disclosure process, including coordinating publicity for their work, or just want to send over the issue and not be involved further.
Note that in some cases the reporter might actually be a contributor or even a maintainer of the project. In such cases coordination should be easy but the same process should basically be followed. In particular, proper public disclosure should still be done so that all interested parties are made aware and can take proper corrective action.
Reporters have done your project a favor by telling you about a vulnerability. Reporters will have many possibly complex motivations and desires in reporting a potential vulnerability to a project. Unfortunately, there are incentives not to report vulnerabilities, sometimes including cash. Having clearly established expectations around vulnerability reporting and the project's triage and handling practices will help eliminate friction as the issue moves towards resolution. Ultimately the goal of using CVD is to reduce the risk to Consumers of that software.
It is important to thank reporters for taking the time to find the developer and go through the project's process. One of the ways to do that is to make sure the project's process of issue intake is as discoverable, smooth, and low-friction as possible. Additional methods to thank reporters are shared later in the Response Process section.
When someone finds a security vulnerability and reports it to your project, their main goal is to get you to fix the vulnerability to help make your project more secure. Thus, they are looking for a project maintainer or contributor to patch the vulnerability so that when users update to the most recent version of the software, they are no longer vulnerable to the software flaw that was reported.
Often, vulnerability reporters may want other ancillary things to happen. Examples of normal, reasonable things a finder may ask for include:
- CVE issuance: Security flaws in specific commercial or OSS products or projects are often issued CVE numbers to refer to a specific vulnerability in a specific version of a product. CVE numbers help users of systems learn about security risks in specific versions of those systems so that they can choose to update to patched versions instead. Security researchers may seek to obtain a CVE ID for their reported vulnerability from a recognized CNA. This is normal and safe to do and will not cost you anything or harm your project. If you want, you can learn more about the CVE program here or on the Common Vulnerabilities and Exposures Wikipedia page.
- Acknowledgment: Giving vulnerability researchers credit for their contributions is a typical way to acknowledge the value they have offered to your project. Typically, this is included in the patch notes when an update is issued.
- Ability to issue a Technical Advisory: Sometimes security researchers will seek to publish a Technical Advisory on their (or their employer's) website on the day on which you release the patch which fixes the reported security flaw(s). A Technical Advisory serves to both improve user awareness of the security updates to encourage them to patch and mitigate risk, as well as to profile the researcher's finding. Co-issuance of Advisories on the release date of fixes for security flaws is common practice across both open-source and commercial software. A typical Technical Advisory will include a summary of the report, the impact of the vulnerability, details about the vulnerability, recommendations to the developer and/or end users, and a timeline of communication milestones between the vendor/affected project and the vulnerability reporter.
Security researchers who report vulnerabilities to your project unsolicited (unless as part of an official bug bounty program that you may choose to run) should never ask you for money in exchange for details about security findings that they are reporting to you.
It's important to be ready for a vulnerability report before you get a report. Preparing to receive a vulnerability report is required to get a CII Best Practices badge. This section explains how to get ready; the following sections walk through how to actually handle a vulnerability report.
Every project has different goals, capabilities, and norms. A critical first step to addressing security vulnerabilities is to share with downstream consumers or collaborators how the project will take in vulnerability reports and what a reporter's expectations of communication and response should be. Larger, more experienced projects may have a dedicated team (see VMT below) of security specialists with experience fixing security bugs in open source. Small-to-medium-sized projects most likely will not have access to these trained professionals, or that type of work is simply out of their scope. It is important to understand that all software will have defects; some of those defects will have security-impacting aspects (impacting data's confidentiality, integrity, availability, or auditability). A small bit of preparation at the start of a project can ensure the team talks about and documents what they will do when these types of reports arise.
Processes and tooling do not need to be complex and burdensome to the maintainers and contributors, but the following should be clearly documented by the project to ensure a reporter can get the bug report to the responsible developers:
- How to contact the team about a potential security vulnerability
- The vulnerability report can be kept private until such time as the project decides to share it more broadly
- The reporter's expectations on communication/collaboration around the issue are properly set
An example template for open source projects for a basic security policy can be found within this OpenSSF repository. Projects can reuse it with a few minimal changes or borrow any elements useful in their existing documentation.
This section of the guide addresses these topics, along with other options that more experienced teams could follow.
To keep security issues on a "need to know" basis while they're being resolved, you need a small team who can be available to respond to issues and can be trusted to keep them confidential while they're being addressed. This small team that manages vulnerability reports is your vulnerability management team (VMT).
The VMT's primary responsibility is coordination: they will be the reporters' point of contact throughout the process, keep the reporters informed (if they'd like to be), and keep the security issue moving through the process. You will want some team members to be familiar with the project's release mechanisms and security, but that does not need to be everyone. Part of "coordinating" is knowing when and who to bring in when you need help beyond your team's knowledge.
If you have a small project (1-5 maintainers), its maintainers may be the VMT. In a larger project, you may want to split this work amongst your maintainers or a subset of them.
Recommendation: For larger projects, select 3-7 team members with experience in security, engineering, and program management. For smaller projects, select trusted maintainers (this may be all maintainers). Where practical, you'll want to divide the responsibilities among maintainers. Make sure at least 2 team members have the correct permissions to generate security issues/advisories on your development platform (e.g., Admin on GitHub for Security Advisory).
Create an email alias for these team members to privately collaborate on the issue. Ensure this is distinct from your report intake alias. Never make this team coordination alias "security@[yourdomain]," since that is a convention for intake.
You'll need an easy, obvious way that vulnerability reporters can contact your project (specifically: your project's VMT) to report security flaws that they have found in your code, and you need to tell reporters what that is. Typically, how to report a security vulnerability is described in a project's security policy or SECURITY.md file on GitHub, and/or on an easy-to-find page on the project's website. The goal here is: "make it obvious."
In this section, we explain where you should put the instructions for how vulnerability finders can contact your project to report security vulns. In the next section, we explain what mechanisms or communications methods you may wish to use to actually receive vulnerability reports.
On GitHub, you can write a Security Policy that includes instructions for how you want researchers to report discovered security vulnerabilities to your project. GitHub Security Advisory is the feature that displays the "Security Policy" and "Security Advisory" information in the top-level Security tab on a GitHub repository. To populate the "Security Policy" field, you will want to create a `SECURITY.md` file in your root, docs, or .github folder (GitHub documentation: Creating a Security Policy). Whatever you decide, our recommendation is to also put a link to the `SECURITY.md` in your `README`. The Security tab isn't obvious to everyone; the `README` puts this information front and center. (Just putting disclosure information in the `README` will not populate the Security tab.)
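To make this concrete, here is a minimal sketch of what such a `SECURITY.md` could look like. The project name, contact alias, response time, and version numbers are placeholders to adapt; the templates in this repository are the canonical starting point:

```markdown
# Security Policy

## Reporting a Vulnerability

Please report suspected security vulnerabilities privately to
security@example-project.org, or use the "Report a vulnerability" button
under this repository's Security tab. Please do NOT open a public issue.

You should receive an acknowledgement within 2 business days. We will keep
you informed while we triage, develop, and release a fix, and we will
credit you in the advisory unless you prefer otherwise.

## Supported Versions

| Version | Supported |
| ------- | --------- |
| 1.7.x   | yes       |
| < 1.7   | no        |
```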
On your project's website or similar, you can write a Security Policy that includes instructions for how you want researchers to report discovered security vulnerabilities to your project, or you can simply include clear and specific contact information to "report a security vulnerability". We recommend that you put your security policy in the same place where you document how to report issues, with a distinct callout for "Reporting a security issue." If this page is not a top-level page, we recommend also adding a link to this documentation on a landing page, a security features page, a contact page, or other prominent, heavily-trafficked page. If you have site search, "vulnerability," "report security," and "security issue" are common keywords that you'll want to incorporate. If you have a `README.md` or `SECURITY.md` page, put it there (and if you don't, consider adding them).
Your intake method for vulnerability reports will depend on how you plan to develop and test your patch privately. Whatever method you pick, clear documentation and consistency across the vulnerability management team (VMT) will help you stay organized and responsive. (E.g., if half your reports come in via email and half come in through Launchpad security issues, that is a recipe for miscommunication.) Inevitably you'll receive a report through the "wrong" method; just kindly help the report get into your standard workflow and keep going.
The vulnerability reporter is doing you a favor; don't add more steps than absolutely necessary. In the spirit of this balance, our recommendation is that using email for intake is okay, and should at least be provided as an alternative.
If your project is on GitHub, we recommend enabling private vulnerability reporting. This is an easy-to-use mechanism, and ease of use is key.
We recommend that you also set up another intake method, following the directions below. In particular, include email address(es) people can use for reporting instead of the GitHub private reporting mechanism.
You may choose to use GitHub Security Advisories for disclosure without enabling private reporting, but if you don't explicitly enable private reporting, users will be unable to report to you privately on GitHub.
If you choose to use GitHub Security Advisory for private patch development, here's how we recommend supplementing it.
Your Security Policy should instruct reporters to email the VMT with a vulnerability report (see `SECURITY.md` templates). The VMT will then open a Security Advisory and add the reporter as a collaborator (see GitHub documentation on GitHub Security Advisory). It is also appropriate to email that alias for questions about the vulnerability disclosure process.
If you are using an issue tracker to track security vulnerabilities (e.g., Launchpad, Buganizer, or Bugzilla), your Security Policy should instruct reporters to open a security issue in that tracker. It is also appropriate to email the VMT alias for questions about the vulnerability disclosure process or if there are problems opening a security issue.
If you do not have an issue tracker with a security issue feature, you need an alternative method for intake. Your intake solution should restrict access to the content of the messages to verified identities (or at least verified email addresses), to counter being overwhelmed with spam. However, this solution also has to be accessible and have low friction. Typically this will be an email address.
Even if you have an issue tracker, make an email address available as an alternative intake method to help make sure issues get to you.
We recommend using an email service for accepting vulnerability reports (such as security@PROJECTNAME) that supports hop-to-hop encryption. In addition, encourage users to use email systems that support hop-to-hop encryption, and to use HTTPS if using a web-based email client. Widely-used email services already support hop-to-hop encryption, including Gmail, Outlook.com, Oath, and runbox.com.
The preferred standard for hop-to-hop encryption is Mail Transfer Agent Strict Transport Security (MTA-STS). MTA-STS requires encryption. An alternative is STARTTLS. STARTTLS attempts to switch to encrypted communication, and thus counters passive monitoring, but because it is opportunistic it is weak against active attacks. Use at least one; don't allow vulnerability reports to go unencrypted across the Internet.
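As a rough illustration of the mechanism (the domain, policy id, and MX host below are placeholders, and hosted email providers typically manage this for you), MTA-STS combines a DNS TXT record with a policy file served over HTTPS:

```text
; DNS TXT record advertising that an MTA-STS policy exists
_mta-sts.example.org.  IN TXT  "v=STSv1; id=20240101T000000"

; Policy file served at https://mta-sts.example.org/.well-known/mta-sts.txt
version: STSv1
mode: enforce
mx: mail.example.org
max_age: 604800
```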
Hop-to-hop encryption isn't as strong as end-to-end encryption, but many users find it too difficult today to use end-to-end encryption when using email. Organizations are welcome to also support end-to-end encryption with email (e.g., OpenPGP), and if they do support it, researchers are welcome to use them. We believe it's more important to get the vulnerability report.
Later sections will cover how to determine whether something is a security issue or a regular issue, and whether it is something that you will patch privately and then disclose. If it is a security issue and you will be issuing a patch, you will need a way to develop and test your work privately. If you test a particular patch in public, an observant attacker may see and exploit the vulnerability before you're able to issue a patch.
You have a decision to make: Will you use the GitHub Security Advisory feature for private patch development? Based on the feature set of GitHub Security Advisory at the time of writing, our recommendation is that you typically use the private development features to generate your patch there if you are a project using GitHub.
Pros: Keeps all development within one platform; makes it easy to add external contributors (e.g., the reporter or other experts who can help with patching); and, when the vulnerability is disclosed, makes it easy to flip the work from "private" to "public."
Cons: One problem with private development is that you will not be able to get widespread feedback on your change; as a result, the fix may not truly fix the problem or have bad unintended consequences. In particular, it can be a challenge to share the proposed changes with many people without making them public using this approach.
If the vulnerability is already publicly known and widely exploited, there's no advantage to trying to keep things private. If the vulnerability is already widely known publicly, it is probably better to develop the fix in public as well (where you may get more people to help and more feedback).
Also, at the time of writing, private forks created as part of GitHub Security Advisory do not have access to integration like CI systems (see documentation), so you will need to run tests locally.
If you need to test against hardware or systems not already included in your testing suite but available somewhere else (for example, internally at your company), it may be faster to fork the project, develop, and test outside of GitHub's private branches. However, this does introduce the challenge of keeping your internal fork up to date with main while you develop a patch, as well as restrictions on who can help.
Running private mirrors can be done, but we do not recommend this as the default. If you run a private mirror for developing and testing security patches, you will want to have this set up and operational before you have a vulnerability report.
Many issue trackers can separate security issues from regular issues. Whatever tracker you select, the following features are strongly recommended for your vulnerability reporting system:
- A changelog is available for each ticket
- Membership can be restricted, and member identity is compatible with multi-factor authentication
- Private issues/tickets can be made public after disclosure
- Issues and coordination communication are not ephemeral
- The reporting process does not require the user to make an account with a service that is not already used in the corresponding project or is not a commonly used developer tool
CNAs (CVE Numbering Authorities) are organizations that can assign CVE numbers to new vulnerabilities. CNAs have various scopes and do not issue CVEs outside of their scopes. (e.g., While the (fictitious) SpeakerCompany uses open source software in their products, their scope could be restricted to vulnerabilities only found in SpeakerCompany software, and they would not handle a CVE request for an upstream issue.) There are many CNAs; the only "pre-work" for the VMT is to know of at least one CNA whose scope covers your project and who you will go to first for a CVE assignment. MITRE, the organization that manages CVE administration, is also a "CNA of Last Resort" for open source projects and can be used if no better scope is available.
TL;DR: Embargoed notification requires careful administration and management, adds additional responsibility for the VMT, and adds time to the disclosure process. Unless your project has a significant vendor ecosystem, embargoed notification is probably not necessary.
When companies offer your project as a managed service or your project is critical to their infrastructure, and their infrastructure has the potential to expose users, it is probably appropriate to have an "embargo list." An embargo list is a read-only announcement list whose membership is restricted to particular users. Depending on the nature of your project and the vulnerability, a user of a managed service might be dependent on their provider to take action to reduce that user's exposure. Before the public disclosure, a notification under embargo gives service providers time to prepare so they can patch quickly after the public disclosure and reduce the time their users are exposed.
Embargoed notification is not about avoiding PR issues or providing high-profile users with preferential treatment; it is about protecting users from damaging exploits by giving preparation time to the distributors and providers that control those users' systems. It can also allow distributors to test and qualify the patch across diverse environments and report problems that can be fixed before public release. This extra testing validation can be valuable for complex patches. Make sure someone on the VMT is monitoring for replies to the embargo announcement.
Using an embargoed notification is not without risk. An embargoed notification expands the number of people with early awareness and adds extra time between when the vulnerability is discovered and when it's patched. As the Project Zero team states, "We have observed several unintended outcomes from vulnerability sharing under embargo arrangements, such as the increased risk of leaks, slower patch release cycles, and inconsistent criteria for inclusion." When deciding to use an embargoed notification, consider the severity and exploitability of your vulnerability, the patching complexity (does the provider actually need the time to prepare, or is this an easily rolled out patch?), the resource cost in running and managing an embargoed notification cycle, and the breadth of your embargo list. For every vulnerability (irrespective of its severity), not all users are equally affected. For instance, if the vulnerability is only exploitable over a network vector and some users have strong network security controls in place, they might not be affected by it. So the notification can include some details on the code path being exploited and the security settings under which it is exploitable.
If an embargo list is relevant to your project, you will want to create a restricted, read-only announcement list that your VMT administers. The VMT is responsible for approving access requests and maintaining an accurate list (e.g., removing outdated members), but it is the provider's responsibility to request access to your list. List the requirements and directions for requesting access in your security documentation.
The more you have pre-written, the less there is to do when you have an issue to respond to. See the `Templates` directory for security policy (`SECURITY.md`), embargoed notification, and public disclosure templates.
It can be beneficial to both reporters and users to publish what your project does when it receives a security issue, including your usual timeline and whether you have a time-based disclosure deadline. This helps reporters follow the process along and helps users understand how an issue was handled when they see a disclosure. If you follow this process, just link to this document! Users and reporters may also want to know if a VMT is active; a regular status update can help with this.
See `Runbook.md` for step-by-step directions on the vulnerability response and disclosure process.
1 Immediately acknowledge receipt of the issue
Quickly acknowledge that you have received their issue. At this point, you likely haven't assessed the issue; you're just letting them know that you're on it. This should be done quickly, say within 1-2 days.
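A short, hedged example of what that first reply might look like (the wording and project name are illustrative only, not a required template):

```text
Thank you for reporting this to the ExampleProject security team. We have
received your report and are reviewing it now. We will follow up within a
few days with our initial assessment and any questions we may have.
```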
2 Assess the issue
To assess if an issue is a vulnerability, you will need:
- Documented steps the reporter took that created the behavior
- Any relevant information about systems, versions, or packages involved
Not everything reported as a vulnerability is a vulnerability. Generally, something is a vulnerability if it compromises data confidentiality, integrity, or availability. This may happen by enabling remote code execution, elevated permissions, or unintended access, but what separates a vulnerability from other unwanted behavior (a non-security bug) is a compromise in one or more of those areas.
Intentional design decisions that do not have "optimized" security are typically not vulnerabilities. A suggestion for better security is not the same as a vulnerability. For example, it is good to harden software so that defects are less likely to lead to a vulnerability or reduce its impact. Still, suggestions on how to harden software are not themselves vulnerability reports. Vulnerabilities create a situation where something is not working as intended and creates unintended access or lack of service to data, systems, or resources.
| Assessment | Response |
|---|---|
| Working as intended | Let the reporter know this is the intended behavior. If they think this behavior could be improved, they can file a feature request. Close the security issue. When responding with this assessment, you should explain why you arrived at this conclusion, in case the original report was unclear and the VMT has unintentionally misunderstood the original report. |
| Bug | Let the reporter know this is unwanted behavior but not a security issue, and ask them to refile this as a bug. Close the security issue. |
| Feature request | Let the reporter know this is the intended behavior. If they think this behavior could be improved, they can file a feature request. Close the security issue. |
| Vulnerability | Let the reporter know that you have confirmed this unwanted behavior creates a security issue. Proceed with the process. |
3 Discuss embargo period with vulnerability reporter
Vulnerability reporters will typically state the embargo period they're willing to accept. The "embargo period" is when the vulnerability report is kept from the public, enabling the project to respond, fix, and publicly disclose the vulnerability themselves. This may or may not be the embargo period requested by the project. There are different recommendations and norms about embargo periods among different groups:
- The distros mailing list prefers less than 7 days, with an absolute maximum period of 14 days.
- CERT/CC discloses vulnerabilities to the public 45 days after the report, even if no fix is available, with a few exceptions.
- The CERT® Guide to Coordinated Vulnerability Disclosure (August 2017) says that "an acknowledgement timeframe of 24-48 hours is common for vendors and coordinators, while 45-90 days seems to be the normal range for disclosures these days."
- Google's application security policy uses a 90-day deadline.
Note that 90 days is the longest embargo period entertained by various groups as a default; many default embargo periods are shorter. More or less time may be appropriate depending on the issue's complexity, whether or not the issue is being actively exploited, or problems in patch rollout. If you believe the vulnerability reporter has not given your project enough time, now is the time to make the case to the vulnerability reporter about why a longer embargo time is needed. The CERT® Guide to Coordinated Vulnerability Disclosure recommends that both suppliers and reporters "treat policy-declared disclosure timeframes as the starting point of a negotiation process rather than a hard deadline."
What's critical is that the embargo time is agreed on by all parties (if possible). It's also essential that there be ongoing communication with the vulnerability reporters. Most vulnerability reporters are happy to provide some extra time if there is clear ongoing evidence of effort, continuous communication, and a good rationale for that extra time.
Embargo periods (aka "days of risk") represent a trade-off. Every day in embargo is another day when attackers may discover the vulnerability and exploit users of the software, while the users and public remain unaware of the danger. However, once the embargo ends, attackers who might not have learned of the vulnerability can now quickly learn of it and exploit it. The shorter the time between the vulnerability discovery and a proper fix, the better. Past bitter experience has shown that, without a deadline, many suppliers simply leave their users exposed to dangerous vulnerabilities, so having no deadline suggests to many vulnerability reporters that the project is actively opposed to securing the software it supplies.
4 Create a patch for the issue
Let the reporter know you have confirmed the issue, begin developing a patch, and request a CVE entry if they have not already done so. Ask the reporter if they would like to be involved in the patch development process. Using your private development and testing tooling, develop a patch and prepare (but do not cut) a release.
In your assessment process, you should have identified what versions are affected. As you prepare your patch, note backward compatibility and upgrade requirements (for example, v1.0.0 is affected, but the patch is not compatible, and users will need to upgrade to v1.7.0 or above to apply the patch). You will need to communicate these details in your disclosure announcements.
When creating a patch, it is vital that it be easy for users to update (at least for the users who are using the most recent release of the software). Your change should not require the users to use a different API, or eliminate user functionality, unless that is absolutely necessary. Where practical, the change should involve a simple update. If such changes are absolutely necessary to fix the vulnerability fully, minimize the user impact, and look for ways to mitigate the vulnerability without applying the change (since many users will be unable to install such changes in a timely way).
For issues in patching, see the Troubleshooting section of the guide.
5 Get a CVE for the issue
Ask the reporter if they would like to be involved in writing the CVE entry and if they would like to be credited in the entry. Recognition is one of the many ways we thank reporters! It is inappropriate to omit credit to vulnerability reporters unless the reporter does not want the credit.
Go through your identified CNA to have a CVE number reserved and submit a description. Let your CNA know you are working on a patch and, if applicable, will be doing embargoed notifications before public disclosure. Keep your CNA up to date on your public disclosure date so they can coordinate listing your CVE entry.
The advantage of getting a CVE for a vulnerability is that it provides a path to notify users of the vulnerability and verify that it is fixed. Many organizations specifically track CVEs, so that even if they don't see your vulnerability announcement, they'll see the CVE report.
We also recommend creating a JSON entry using the OSV schema and publishing this somewhere accessible over HTTP. This schema enables encoding machine-readable version information that makes it easier for users to match the vulnerability to their versions of your project.
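For example, a minimal OSV entry might look like the following sketch. The id, alias, package, versions, and URL are placeholders; consult the OSV schema documentation for the fields required in your ecosystem:

```json
{
  "schema_version": "1.4.0",
  "id": "EXAMPLE-2024-0001",
  "modified": "2024-03-05T15:00:00Z",
  "published": "2024-03-05T15:00:00Z",
  "aliases": ["CVE-2024-00000"],
  "summary": "Path traversal in file upload handler",
  "affected": [
    {
      "package": { "ecosystem": "PyPI", "name": "example-project" },
      "ranges": [
        {
          "type": "ECOSYSTEM",
          "events": [ { "introduced": "1.0.0" }, { "fixed": "1.7.0" } ]
        }
      ]
    }
  ],
  "references": [
    { "type": "ADVISORY", "url": "https://example.org/advisories/EXAMPLE-2024-0001" }
  ]
}
```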
6 Decide the date of public release
In many countries people often have days off on Fridays, Saturdays, and/or Sundays. So if the vulnerability is not publicly known (e.g., it's not being exploited), it's often better to make the public release on Monday, Tuesday, or Wednesday. In addition, try to avoid releasing a vulnerability fix on a widely-observed holiday. This gives people some time to do the update during their workday.
7 (If applicable) Notify providers under embargo
Embargo notifications are sent anywhere from 1-30 workdays before the intended date of public disclosure. This timeframe depends on the severity and exploitability of the issue, the complexity of the patch, and the type of providers your project is used by (can the providers feasibly qualify and patch in 5 days? 10 days?). Also consider holidays and significant events that could impact the provider's ability to prepare and adjust your dates accordingly (e.g., if retailers heavily use your project, don't expect them to be able to prepare over the US Black Friday shopping days).
Your notification should include the CVE id, issue description, reporter credit (if applicable), affected versions, how the patch will be made available, and the public disclosure date. See corresponding template examples in the guide.
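A hedged sketch of such a notification follows; every identifier, date, and detail is a placeholder, and the templates in this repository are the canonical starting point:

```text
Subject: [EMBARGOED] Security fix for ExampleProject -- public disclosure 2024-03-05

This notification is under embargo until 2024-03-05 15:00 UTC.

CVE ID:            CVE-2024-00000
Affected versions: 1.0.0 through 1.6.x
Fixed in:          1.7.0 (patch published at the public disclosure time)
Reported by:       Jane Researcher (credited with permission)

Summary: A crafted upload request can read files outside the intended
directory. Please prepare to update your deployments, and do not share
this information beyond your response team before the disclosure date.
```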
8 Cut a release and publicly disclose the issue
On the day of public disclosure, publish your disclosure announcement (see templates). If using GitHub Security Advisories, "publishing" your private Security Advisory will add it to the "Security" tab. If you are not using GitHub Security Advisories, publish the announcement to your release notes or security bulletins. If you have CVE ids for the vulnerabilities fixed, include the CVE id(s) of the vulnerabilities that were fixed in this release.
It's also recommended to send the announcement to appropriate mailing lists for your community (i.e., a security-announce@ list and even a general mailing list for high-impact vulnerabilities).
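As with the embargo notice, a brief sketch of a disclosure announcement is shown below; the project name, versions, CVE id, and mitigation are placeholders (see the templates in this repository):

```text
Subject: ExampleProject 1.7.0 released -- security fix for CVE-2024-00000

ExampleProject 1.7.0 is now available and fixes CVE-2024-00000, a path
traversal issue affecting versions 1.0.0 through 1.6.x. All users should
upgrade to 1.7.0 or later. Users who cannot upgrade immediately can reduce
exposure by disabling the affected upload endpoint.

Thanks to Jane Researcher for reporting this issue.
```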
9 Disclose quickly
You might choose to briefly withhold details about how the vulnerability can be exploited, hoping that this will give users a little more time to update before attackers begin exploiting the vulnerability. This only makes sense if it's not obvious to attackers how the vulnerability can be exploited, and in most cases, attackers will find it obvious. In addition, attackers can usually review changes made to software (in source or executable form) and easily determine an attack. Thus, withholding detailed information can only be helpful for a few days at most, even in the few cases where it helps at all.
Sometimes, the coordinated vulnerability disclosure process does not go smoothly. In this section, we offer advice for a few potential challenges you may encounter.
We're not sure if this is actually a security issue
If you receive a vulnerability report but do not understand whether it is a security issue, you should ask domain experts in your VMT, or you can try to ask additional questions of the person who reported the vulnerability. If their report is too brief or unclear, you may wish to ask things like:
- In which lines of code is the vulnerability located?
- How specifically does this vulnerability create a security risk?
- If we leave the code as it is, what could an attacker ultimately do?
- How can we replicate the issue? / Do you have a working proof-of-concept that you would be willing to show us?
- What would we have to do to fix the issue? (Note: You should still do your best to validate the security of every contributed patch - regardless of its source - to seek to prevent both accidentally insecure and intentionally malicious commits.)
Sometimes, vulnerability researchers will be willing to invest additional time to help you better understand the issue so that you can fix it. If they do this, be respectful of their time and collaborative and straightforward in your approach so that you can work together to get the vulnerability remediated.
Our reporter isn't very responsive
After the initial report, your reporter will choose how responsive to be (that's the "coordination" part of Coordinated Vulnerability Disclosure). If you receive a report that you cannot reproduce and have tried multiple times to reach the reporter, send them a polite, final note that you were not able to reproduce the issue and will not be issuing a security advisory. Encourage them to reopen the issue if they can reproduce it in the future.
Patch development isn't going well
If you're struggling to develop a patch that fully resolves the issue, you have a couple of options:
- Get more help. It is okay to expand the people working on an issue beyond the VMT when you struggle to create a fix. Is there a project contributor who has particular knowledge of the affected area? Do you know someone who specializes in this security area? (e.g., networking security, container security, etc.) Do VMT members have resources at their company (e.g., vuln response teams) who can help?
- Patch partially (break the exploitation chain). If you've gotten more help, the embargo period is about to end, and you don't have a complete fix yet, a patch that breaks the exploitation chain before the public disclosure time is preferable to no patch. This does not mean you stop working on a complete fix after disclosure, but that you release the solution you do have. In this context, "break the exploitation chain" means creating a partial fix that makes the exploit much more difficult. In this option, you must communicate and document that this patch does not resolve the issue entirely. Users must understand their exposure level even after patching. When you have a comprehensive fix, remember to add updates to past announcements to point users to the latest information. (For example, your release notes for the comprehensive fix could say, "Further security improvements addressing $CVEID.")
- Disclose without a patch and document it well. If an issue is unresolvable, it is better that users know than not know. "Security through obscurity" is a weak defense in vulnerability management. Any existing vulnerability can be found and exploited by bad actors. Document the issue well, including any related workarounds for common environments, and continue to work on it in public.
Someone publicly disclosed a vulnerability without working with us
Whether it is found in a research paper, a media article, a security conference presentation, or on social media, if someone publicly discloses a vulnerability in your project that you had no prior awareness of, the best thing to do is treat it as a regular project issue (it is, after all, already public) but assign it high priority and communicate with your users, particularly if it's a publicized or critical issue. Let them know you're aware of the issue, how it's being handled, and where they should watch for updates. Addressing an issue of this type publicly removes a significant part of the communication burden, as it allows others to find this information without having to contact the VMT.
We believe the vulnerability is being actively exploited in the wild
Open source software is powerful, and unmitigated security flaws in OSS projects can have real-world impact on people and organizations around the world. The exploitation of security vulnerabilities in open source (and closed source!) projects can lead to the compromise of downstream dependents and both direct and indirect users of your software. Examples of these so-called "supply chain" attacks have been well-documented in recent years in mainstream media.
If you have reason to believe that a vulnerability in your code is being actively exploited by threat actors, it is important to quickly notify users and issue patches that remediate the security risks. We aim to update a future version of this guide with additional resources for OSS project maintainers facing this challenge.
Thank you to the wider security and open source communities whose work informed this guide, including the Google Open Source Programs Office and Google security teams, the OpenStack Vulnerability Management Process, Project Zero's disclosure process, and the Kubernetes security and disclosure process.