March 9, 2020

Attendees

  • Alex Mullans (GitHub)
  • Nico Waisman (GitHub)
  • Eva Sarafianou (Auth0)
  • Crystal Hazen (HackerOne)
  • Alex Rice (HackerOne)
  • Eric Brewer (Google)
  • Steve Dower (Microsoft/CPython)
  • Hauwa Otori (GitHub)
  • Marcin Hoppe (Auth0 / Node.js Ecosystem Security WG)

Agenda

  • Areas overlapping with other WGs
    • Expected outcome: identify where our WG is a driver and where we contribute
  • Which of the OKRs to focus on first?
    • Expected outcome: identified project(s)
    • Objectives (recap from kick-off):
      • Vulnerability reporting (researcher <3 maintainer)
      • Coordinated disclosure (maintainer <3 users/world)
      • Security score (world <3 maintainer) -> Security of Open Source Projects WG
      • Prevent known types of vulnerabilities from ever being introduced -> Security Tooling WG
  • Update from CVE Summit
    • Expected outcome: brief the WG on changes to the CVE data format and submission process that may make vulnerability disclosure easier and better
  • Meeting cadence
    • Expected outcome: next meeting date

Aggregated notes

Raw notes

  • Need to define “active” projects
    • This could be an opt-in from the maintainer
    • Or stats-based e.g. # of stars, forks, etc.
  • Google is introducing a two-tiered process
    • High-volume and automated (e.g. OSS fuzz)
    • Low-volume and human-involved
  • Defining a shared set of metadata
    • Action: share today’s formats (Google, Node.js, GitHub)
    • Can we go down to the level of a particular method call/endpoint when talking about a vulnerability?
  • Two problems: researcher -> maintainer, maintainer -> users
    • Am I actually vulnerable? (code paths that had issues)
    • What is this about? (package identity, package version, commits that intro’d the vuln and fixed it)
    • Formal process to add data to existing CVE?
  • Grow-up plan
    • Get the right data into the formats we have today (CVE)
    • Crowd-sourced database
  • Glean a shared set of metadata requirements (fields and field types); a sketch of such a schema follows these notes
  • Get that data into CVE (and possibly some extension format)
    • Is there a common API format for exchanging this data?
  • What is a well-formed vulnerability disclosure (from a researcher)?
    • How can we exclude non-reports quickly?
      • An accurate impact statement
    • Length and detail may be indicators of seriousness
    • Mentioning “bug bounty” often gets a report ignored by projects that don’t have a bug bounty
    • Can it come with a working example/repro steps/prereqs?
  • Compare GitHub vuln reports and HackerOne vuln reports
  • Collect the right data.
  • Guess priority and have automated triage.
  • HackerOne encounters two personas: one that prefers the least friction (and thus the most reports), and one that prefers pushing triage work down to the researcher. Orgs with more resources tend to be the former.
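
To make the metadata discussion above concrete, here is a minimal, hypothetical sketch of the shared metadata fields the notes mention (package identity, affected versions, the commits that introduced and fixed the vulnerability, and affected code paths down to a particular method call or endpoint). The field names and shapes are illustrative assumptions only, not a format that Google, Node.js, GitHub, or CVE has agreed on.

```typescript
// Hypothetical sketch of shared vulnerability metadata; field names are
// illustrative assumptions, not an agreed-upon format.
interface VulnerabilityMetadata {
  id: string;                   // e.g. a CVE ID or an ecosystem-specific ID
  package: {
    ecosystem: string;          // e.g. "npm", "PyPI"
    name: string;               // package identity
  };
  affectedVersions: string[];   // version ranges known to be vulnerable
  introducedIn?: string;        // commit that introduced the vulnerability
  fixedIn?: string;             // commit (or release) that fixed it
  affectedSymbols?: string[];   // method calls/endpoints ("am I actually vulnerable?")
  references?: string[];        // advisories, issues, write-ups
}

// Entirely made-up example instance, shown only to illustrate the shape:
const example: VulnerabilityMetadata = {
  id: "CVE-0000-00000",
  package: { ecosystem: "npm", name: "example-package" },
  affectedVersions: [">=1.0.0 <1.2.3"],
  introducedIn: "abc123",
  fixedIn: "def456",
  affectedSymbols: ["parseConfig()"],
};
```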

Example vulnerability report

  • Asset
  • Weakness
  • Severity
  • Title
  • Description (markdown template for the report body):
    • # Proof of concept
    • ## Summary:
      • [add a summary of the vulnerability]
    • ## Steps To Reproduce:
      • [add details for how we can reproduce the issue]
      • 1. [add step]
      • 1. [add step]
      • 1. [add step]
    • # Version if applicable
    • # What prerequisites do I need to have installed to reproduce the issue?
    • # Supporting Material/References:
      • [list any additional material (e.g. screenshots, logs, etc.)]
      • [attachment / reference]
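
Tying back to the “common API format” question in the raw notes, the same template fields could also be exchanged as structured data. The sketch below is a hypothetical illustration only; the type and field names are assumptions, not an agreed format.

```typescript
// Hypothetical structured form of the researcher report template above;
// names are illustrative assumptions.
interface VulnerabilityReport {
  asset: string;                    // what is affected (repo, package, service)
  weakness: string;                 // e.g. a CWE identifier
  severity: "low" | "medium" | "high" | "critical";
  title: string;
  description: {
    summary: string;
    stepsToReproduce: string[];
    version?: string;               // version, if applicable
    prerequisites?: string[];       // what must be installed to reproduce
    supportingMaterial?: string[];  // screenshots, logs, attachments, references
  };
}
```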