
Contributor Experience Plan

The Contributor Experience working group has a wide remit, though it is currently focusing on the following:

  • Theme 1: Build up Working Groups
  • Theme 2: Make it clearer how to develop modules
  • Theme 3: Reduce issue/PR backlog

Discussion is welcome in #ansible-community or in the Contributor Experience Etherpad.

Theme 1: Build up Working Groups

Objective: This is about scale and empowering others to do things themselves.

A well-functioning group should be able to:

  • Welcome new members into the group
  • Provide a variety of items (not just coding) for people to get involved with
  • Keep on top of their backlog
  • Set direction

Goal: Find out if we are building up and maintaining active groups.

(If we don't measure, how do we know whether we are improving?)

We are interested in participation, not just people idling:

  • Unique people active in IRC meetings
  • Number of people active on agenda issues
  • How do people find out about the groups?
  • Why do people stay?
  • Why do people leave?

Goal: Make life easier

  • Asking a wider range of people for pain points allows us to spot common issues and address them
  • Review previous Contributor Summit docs
  • Important to get input from new contributors

The various groups have found things that work for them; we should review, document, and roll out what works to the other groups. If something doesn't work, analyse why not.

Goal: Showing progress sustains motivation

  • Motivates existing and new people
  • For example, the AWS group's monthly boto3 porting and testing stats

Goal: Ensure that new people who want to get involved have something to help with

  • MUST include non-Python tasks
  • MUST include some well-defined, simple items

On hold until the above items have been done; we don't want to invite more people until the groups are in a better state.

Also on hold until the above items have been done: a series of blog posts, one per working group, showing what they've achieved and how to get involved.

Theme 2: Make it clearer how to develop modules

Objectives:

  • Docs: dev_guide reorg to make it easier to create and write content (acozine is working on this)
  • Docs: Real examples of how to document your module
  • Docs: Fix the module checklist
  • Docs: How to write good integration tests
  • Continue to spot common issues with new PRs, then document them and/or test for them automatically

(Will partly be addressed by Theme 1)

Theme 3: Reduce issue/PR backlog

Wherever modules live (ansible/ansible, modules-core, ...), there will always be issues and PRs raised. Understanding how the backlog builds up, and empowering people to reduce it, is key.

The strategy for this is:

  • Use Plan-Do-Check-Adjust
  • Use quantitative measurements where possible to drive Plan-Do-Check-Adjust
  • Make continual, gradual improvements
  • Break the PR workflow into individual stages and attack each stage (see the sketch after this list):
    • PR created
    • ansibullbot adds needs_triage
    • ansibullbot notifies maintainer(s)
    • CI is run, PR status updated
    • Member of Core does initial triage
    • Main workflow - the following may happen multiple times and in any order:
      • PR updated so CI is green
      • Maintainers (or others) add review comments that need addressing
      • Maintainers (or others) add shipit
      • ansibullbot adds label:shipit
    • ansibullbot potentially automerges based on rule set
    • Person with commit powers merges PR
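To attack individual stages we first need timestamps for when a PR entered each stage. A minimal sketch (not the project's actual tooling, assuming only the public GitHub REST API): pull a PR's timeline events and record when each workflow label (needs_triage, shipit, ...) was first applied.

```python
# Sketch only: per-stage timestamps for one PR, from its timeline events.
from datetime import datetime
import requests

REPO = "ansible/ansible"
ISO = "%Y-%m-%dT%H:%M:%SZ"

def stage_timestamps(pr_number, token=None):
    """Return {label_name: datetime-of-first-application} for one PR."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"token {token}"
    url = f"https://api.github.com/repos/{REPO}/issues/{pr_number}/timeline"
    stages = {}
    while url:  # follow pagination links
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        for event in resp.json():
            if event.get("event") == "labeled":
                name = event["label"]["name"]
                # keep only the first time each label was applied
                stages.setdefault(name, datetime.strptime(event["created_at"], ISO))
        url = resp.links.get("next", {}).get("url")
    return stages
```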

Given the size of the Issue and PR backlog we use GitHub Labels to represent:

  • What the issue/PR represents: bug, feature
  • Code affected: new_module, plugin/{action,callback,lookup,...}, etc.

Some of the key labels are:

  • needs_triage - Issue or PR has just been created and a member of the Core Team hasn't reviewed it yet; triage is a very quick process
  • bug - Bug fix (PR) or bug report (issue)
  • ci_verified - Identifies pull requests for which CI failed
  • feature - Adds a feature (PR) or requests one (issue)
  • new_module - Identifies pull requests adding a new module
  • support:core
  • support:network
  • support:certified
  • support:community

We also use labels for Working Groups (aws, azure, network, windows, etc). See the almost-complete list of labels for more details.
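Because the backlog is label-driven, it can be sampled with the GitHub search API. A minimal sketch, assuming the public api.github.com endpoint (unauthenticated requests are heavily rate-limited, so this suits occasional sampling rather than continuous monitoring):

```python
# Count open PRs carrying each key label via the GitHub search API.
import requests

LABELS = ["needs_triage", "bug", "ci_verified", "feature", "new_module",
          "support:core", "support:network", "support:certified",
          "support:community"]

def open_pr_count(label):
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f'repo:ansible/ansible is:pr is:open label:"{label}"'},
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

for label in LABELS:
    print(f"{label}: {open_pr_count(label)} open PRs")
```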

  • New contributors that receive feedback on their first PR sooner are more likely to contribute again (Mozilla research)
  • CI issues
    • Are the issues it finds valid?
    • Are the error messages obvious?
    • The GitHub Checks API should help with this - waiting on Shippable and Zuul
  • Spot trends, do RCA (root cause analysis), fix at source

Thanks to Ansibullbot we know if a PR is from a new contributor (label:new_contributor).

Analysing new contributors' PRs gives us good insight into how clear our docs, processes, and tests are. This is important, as we often get sidetracked by regular contributors who have been through these issues before and overcome them.

We can count/track many things, though we need to ensure that:

  • We can influence what we track - if we can't influence it, is tracking it useful?
  • Some metrics are just "general trends"

Aim

  • How can we measure the "new contributor experience" in a quantitative manner, to allow us to identify bottlenecks in the process? We can then change part of the workflow and observe the effect.

Definitions

  • new contributors: GitHub users that haven't had any PRs merged into ansible/ansible
  • experience: The workflow process that the contributor goes through, from PR creation to the PR being merged

We need to be able to track the change (positive or negative) that has occurred since the workflow was updated. There will not be one single change to the workflow, but a steady stream of improvements and experiments. This means the results need to be plotted as a time series (date on the horizontal axis), so each change can be matched against any shift in the metrics.

Possible Metrics

"Number of days from PR being open to merged/closed" - Gives us an overall number, though no idea what the bottlenecks are

To identify the bottlenecks we need to define the workflow and track the progression through it:

  • PR open to close (not merged)
    • A PR being closed at triage means the PR is invalid. This may indicate bad PRs (duplicate, already fixed, not an applicable fix/feature)
  • Days to first clean CI run
    • Indicates how understandable the CI failures are, as well as how easy they are to fix
    • Improvements to the CI error messages (and the move to the GitHub Checks API) should make the errors easier to understand, so we'd expect a reduction in time here
    • Would improving the wording of the link to failing tests reduce the duration?
    • Would improved documentation for certain CI failures help?
    • Looking at the label:new_contributor PRs that have been failing CI for the longest could indicate which types of CI failure are hardest for a contributor to understand. Addressing these could help reduce the long tail
    • Need to be mindful that a PR may never go CI-green - we need some way of representing that differently, e.g. days of red = days the PR has been open
  • Days to first review (see the sketch after this list)
    • How long until a human has reviewed the PR
    • Does a human review within a few days (rather than a few weeks/months) keep the contributor engaged/motivated?
      • This may be more complex to analyse, as we need to use this piece of data (days-to-first-review) along with others, such as days-to-merge
  • Repeat contributions
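A minimal sketch of the days-to-first-review metric, assuming the public GitHub REST API; bot reviews are filtered out so the result matches "how long till a human has reviewed the PR" above:

```python
# Gap between a PR being opened and the first human review being submitted.
from datetime import datetime
import requests

REPO = "ansible/ansible"
ISO = "%Y-%m-%dT%H:%M:%SZ"

def days_to_first_review(pr_number):
    base = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}"
    pr = requests.get(base).json()
    reviews = requests.get(f"{base}/reviews").json()
    human = [r for r in reviews
             if r.get("submitted_at")
             and r.get("user") and r["user"]["type"] != "Bot"]
    if not human:
        return None  # never reviewed - must be represented, not dropped
    opened = datetime.strptime(pr["created_at"], ISO)
    first = min(datetime.strptime(r["submitted_at"], ISO) for r in human)
    return (first - opened).days
```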

All of the above metrics should be broken down by PR type. We expect different types of PRs to have different patterns and durations through the workflow (for example, a new module is net-new code that can't regress existing behaviour, while a bugfix must be reviewed against it), so we should track them individually, because:

  • The bottlenecks may be specific to a certain type of PR
  • The workflow fixes may be specific to a certain type of PR

The rough matrix would be:

  • Type: bugfix, feature
  • Code type: Module, plugin_type (callback, lookup, inventory, etc)
  • Support: Core, Network, Community
  • SIG: Whether the PR has been tagged with a specific working group (see the list of working groups (SIGs)) - lower priority
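A minimal sketch of bucketing PRs into this matrix from their labels (label names as listed earlier on this page; the "other"/"unknown" fallbacks are this sketch's own convention):

```python
# Derive a (type, support) matrix key for a PR from its GitHub labels.
from collections import Counter

TYPE_LABELS = {"bug", "feature"}
SUPPORT_LABELS = {"support:core", "support:network",
                  "support:certified", "support:community"}

def matrix_key(pr):
    """pr is an issue/PR dict from the GitHub API."""
    names = {label["name"] for label in pr["labels"]}
    pr_type = next(iter(names & TYPE_LABELS), "other")
    support = next(iter(names & SUPPORT_LABELS), "unknown")
    return pr_type, support

# e.g. buckets = Counter(matrix_key(pr) for pr in closed_prs)
```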

Possible results and resolutions

We may find some trends that depend on the above matrix, such as:

  • Features are merged quicker than bugfixes
    • Is this because features are net-new and can't cause regressions?
    • Are people naturally more interested in features than in bug fixes?
  • Are there groups of bug fixes that need reviewing and merging together?
  • Are maintainers not being notified of all changes (i.e. are non-module PRs failing to trigger notifications)?

Dumping ground of other thoughts not directly related to another section:

  • Number of label:needs_triage over time - is Core keeping up with triage?

Via BOTMETA and a module's author field, we have a reasonable idea of who to notify when an issue or PR is raised.

Before we add more maintainers we need to ensure that the existing process is working, i.e. that "pings" are being responded to.
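For illustration, a minimal sketch of the BOTMETA lookup, assuming BOTMETA.yml's general shape (a files: mapping of path prefixes to maintainer info). The real file also uses macros (e.g. $modules) and team aliases, which this sketch deliberately ignores, and the real bot logic is more involved:

```python
# Look up who should be pinged for a changed file, from BOTMETA.yml.
import yaml

def maintainers_for(path, botmeta=".github/BOTMETA.yml"):
    with open(botmeta) as f:
        meta = yaml.safe_load(f)
    hits = []
    for prefix, info in meta.get("files", {}).items():
        if path.startswith(prefix) and isinstance(info, dict):
            maintainers = info.get("maintainers", [])
            if isinstance(maintainers, str):
                maintainers = maintainers.split()
            hits.extend(maintainers)
    return hits
```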

  • Review label:deprecated (https://github.com/ansible/ansible/labels/deprecated) - check the bot logic (auto-close feature PRs)?
