So you'd like to contribute to the WebSiteOne codebase? That's wonderful, we're excited to have your help :-)
Please do come and say hello in our Slack chat. You can get an invite by signing up at AgileVentures or emailing [email protected]. We have weekly meetings to coordinate our efforts and we try to do planning poker voting on tickets before starting work on them. Feel free to join any AgileVentures daily scrum to ask questions, to listen in, or just say hi :-)
Getting set up with the system on your local machine can be tricky depending on your platform and your devops skills. We can provide prepared C9 instances if you want to get straight into the coding :-)
This document describes how to contribute to WebSiteOne: the tools we use to track and coordinate the work that is happening and that needs to happen, and the workflow -- the processes and sequences for getting contributions merged into the project in an organized and coherent way.
First be sure that you've set up your development environment following all the steps in Setting Up for Development on WebSiteOne (Project Set Up).
We use ZenHub to manage our work on features, chores and bugfixes.
We keep our code on GitHub and use git for version control.
To get involved please follow these steps:

1. Ensure your development environment is working:
   - get the tests passing (unit and integration tests in `spec/` and acceptance tests in `features/`)
   - check that the site can be run manually (locally)
   - (optional) deploy to a remote (e.g. Heroku, drie etc.) and ensure it runs there
2. Review open PRs on GitHub - leave comments or collaborate if interested.
3. Look at what needs to be done on our ZenHub project board:
   - look through the "ready" column - feel free to start work on anything there, though we're always interested to hear about it in Slack or a scrum
   - look through the "new issues" column - feel free to ask about priority, as these are not yet prioritised
   - look at the "backlog" - if there is an interesting ticket, get it voted on in a scrum or via an async vote in Slack
Items need to be voted on before work can start. Voting happens in scrums or the weekly meeting (currently Fridays). Note that even without the meetings you can get a vote on any issue you're thinking of working on by using the async voting bot in the #websiteone Slack channel, using the following syntax:

`/voter ISSUE NAME https://github.com/AgileVentures/WebsiteOne/issues/number`

e.g.

`/voter make a press-kit link in the footer https://github.com/AgileVentures/WebsiteOne/issues/1738`
More on how to handle a vote can be found at: https://github.com/AgileVentures/AgileVentures/blob/master/ASYNC_VOTING.md#automated-async-vote
Our default working branch is `develop`. We create branches off `develop` for new features and bugfixes.
Any feature should include appropriate Cucumber acceptance tests and RSpec unit tests. We try to avoid view and controller specs, and focus purely on unit tests at the model and service level where possible.
A bugfix may include an acceptance test depending on where the bug occurred, but fixing a bug should start with the creation of a test that replicates the bug, so that any bugfix submission will include an appropriate test as well as the fix itself.
Each developer will usually work with a fork of the main repository on Agile Ventures. Before starting work on a new feature or bugfix, please ensure you have synced your fork to upstream/develop:
`git pull upstream develop`
Note that you should be re-syncing daily (even hourly at very active times) on your feature/bugfix branch to ensure that you are always building on top of the very latest `develop` code.
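The fork-sync workflow can be sketched end to end with throwaway local repositories. This is a rough illustration only: `upstream` here stands in for the main AgileVentures repository, and `fork` for your own GitHub fork.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# stand-in for the main AgileVentures repository, with a develop branch
git init -q upstream
(cd upstream && git checkout -q -b develop &&
  echo "v1" > app.txt && git add app.txt &&
  git -c user.email=dev@example.com -c user.name=dev commit -qm "initial")

# your fork starts as a clone; the main repo is added as the "upstream" remote
git clone -q upstream fork
cd fork
git remote add upstream "$tmp/upstream"

# meanwhile, new work lands on upstream's develop branch...
(cd "$tmp/upstream" && echo "v2" > app.txt &&
  git -c user.email=dev@example.com -c user.name=dev commit -qam "new work")

# ...and the daily re-sync brings your local develop up to date
git checkout -q develop
git pull -q upstream develop
cat app.txt
```

In the real project the `upstream` remote would point at the AgileVentures/WebsiteOne repository on GitHub rather than a local directory, but the commands are the same.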
Here is how to create and submit a pull request.
Every pull request should refer to a corresponding GitHub issue, and when you create feature/bug-fix branches please include the id of the relevant issue, e.g.
`git checkout -b 799_add_contributing_md`
Please ensure that each commit in your pull request makes a single coherent change and that the overall pull request only includes commits related to the specific GitHub issue that the pull request is addressing. This helps the project managers understand the PRs and merge them more quickly.
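One common way to achieve this (a sketch, not the only approach) is to squash exploratory WIP commits into a single coherent commit before the PR is reviewed, using `git reset --soft` back to the branch point. The repository, branch name and messages below are illustrative:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git checkout -q -b develop
echo "base" > app.txt
git add app.txt
git -c user.email=dev@example.com -c user.name=dev commit -qm "initial"

# branch named after the (hypothetical) issue id, as suggested above
git checkout -q -b 799_add_contributing_md
echo "draft" > CONTRIBUTING.md
git add CONTRIBUTING.md
git -c user.email=dev@example.com -c user.name=dev commit -qm "wip: first draft"
echo "better" > CONTRIBUTING.md
git -c user.email=dev@example.com -c user.name=dev commit -qam "wip: reword"

# squash the two WIP commits into one coherent commit for the PR;
# --soft keeps the final file contents staged
git reset -q --soft develop
git -c user.email=dev@example.com -c user.name=dev commit -qm "Add CONTRIBUTING.md"
git rev-list --count develop..HEAD
```

An interactive `git rebase -i develop` achieves the same result with finer control over which commits are kept separate.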
Whatever you are working on, and however far you've got, please do open a "Work in Progress" (WIP) pull request (just prepend your PR title with "[WIP]") so that others in the team can comment on your approach. Even if you hate your horrible code :-) please throw it up there and we'll help guide it to fit in with the rest of the project.
Before you make a pull request it is a great idea to sync again to the upstream develop branch to reduce the chance that there will be any merge conflicts arising from other PRs that have been merged to develop since you started work:
`git pull upstream develop`
In your pull request description please include a sensible description of your code and a `fixes #<issue-id>` tag, e.g.:
This PR adds a CONTRIBUTING.md file and a docs directory
fixes #799
which will associate the pull request with the issue in the ZenHub board.
This all adds up to a workflow that should look something like this:
- ensure issue has full description of change and has been voted on
- create branch prefixed with id of issue (moves issue into 'in progress')
- create failing test on the branch (acceptance level)
- create failing tests (unit level)
- get test to pass with functionality
- submit pull request with fixes #xyz
- pull request reviewed
- changes to original PR if required
- pull request merged (presence of "fixes #xyz" then moves issue to 'done')
- code moved to staging and checked against production data clone
- code moved to production
We have unit tests in RSpec and acceptance tests in Cucumber. At the start of the project we were writing unit, controller and view specs in RSpec, but we have since stepped back from that requirement, finding controller and view specs rather brittle. For any new functionality we recommend a simple combination of unit tests in RSpec and acceptance tests in Cucumber, ensuring that as much logic as possible is moved out of views and controllers into models, services, presenters and helpers, where it can be easily unit tested. This allows us to avoid brittle controller and view tests.
We have several challenges with the current acceptance tests. One is that some of the JavaScript tests fail intermittently, particularly on CI. Partly in an attempt to address this we added comprehensive VCR and PuffingBilly sandboxing of network interactions in the acceptance tests. While these caches allow some of our tests to run faster, and avoid us hitting third party services, they can be very confusing to develop against.
The principle is that tests should not depend on 3rd party systems over the network, and that we shouldn't spam 3rd party remote services with our test runs. However the reality is more complicated. For example, in talking to Stripe, they've said that they are happy to support test run hits "within reason". Also, a cached network interaction can make it seem like part of the system is working when in fact it will fail in production due to a real change in the network service. The remedy there is to delete the relevant cache files, re-run the tests so that new cache files are saved (which VCR and PuffingBilly should handle for us), and then commit the new cache files to git.
The reality is that it is often difficult to work out which cache files are the relevant ones (particularly if you're new to the project), and it's easy to misunderstand what's happening with the caches. A common reaction to seeing lots of cache files (files in `features/support/fixtures/`) when you run `git status` is to add them to `.gitignore` (which happened on LocalSupport and caused lots of confusion) or simply to delete them.
There's a Gordian knot here: we'd like a test that passes on your machine to also pass on my machine, but if the test relies on a 3rd party network service, all bets are off. With some reliable network services that's not such a big deal, but it can be very confusing. If we just add the cache files to `.gitignore` we can get into some very confusing situations where developers don't realise they are running against cached network interactions. Simply deleting the cache files (`rm -rf features/support/fixtures/`) and re-checking them out (via `git checkout features/support/fixtures/`) is a perfectly reasonable way to get back to a baseline, but you still might be confused about which cache files you should be checking in with your tests.
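The delete-and-restore baseline trick looks like this in miniature. This is illustrative only: it uses a throwaway repository with one fake cassette file, whereas a real run would operate on WebSiteOne's actual `features/support/fixtures/` directory.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .

# a committed cache file, standing in for the real VCR/PuffingBilly cassettes
mkdir -p features/support/fixtures
echo "recorded interaction" > features/support/fixtures/cassette.yml
git add .
git -c user.email=dev@example.com -c user.name=dev commit -qm "check in caches"

# local experimenting leaves the caches stale, then we wipe them entirely...
echo "stale" > features/support/fixtures/cassette.yml
rm -rf features/support/fixtures

# ...and re-checkout restores the committed baseline
git checkout -q -- features/support/fixtures/
cat features/support/fixtures/cassette.yml
```

Because the cassettes are tracked in git, wiping the directory is always safe: `git checkout` brings back exactly what was last committed.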
In the ideal world the `develop` branch would run green for you and there would be no extraneous files. Then you add your new test and its implementation. Once it's all working you will likely have a bunch of new cache files. These should be deleted in the first instance, since some may be due to erroneous network interactions made while you were developing. Once you have reached a reliably green test stage, clean up (`rake vcr_billy_caches:reset`) and then re-run. At this point, if you get another complete green run (for safety just run your new tests), any new cache files are associated with your tests, and these should be checked in to ensure that your new test/functionality will run the same everywhere.
However the above is complicated, and we are actively looking for a testing approach that avoids the intermittently failing tests (maybe we need to upgrade Capybara); dropping the whole caching approach may be one way forward. Please jump into our Slack channel and let us know what you think :-)
Currently Airbrake automatically opens GitHub issues when we have an error on production. We suspect that a good portion of them are related to performance: Heroku's business model is based on limiting our memory size, and when we run out of memory some requests die, producing errors like "ran for longer than 150000ms".
Ultimately the solution there is getting more money to pay Heroku, migrating to Azure/Dokku, or re-engineering the events system so that it's not so memory intensive (or finding and fixing memory problems elsewhere). Other errors are probably unfixable, or at least untraceable.
Still others probably represent places where our code should be more robust, e.g. AgileVentures#2515 and AgileVentures#2506.
We are very grateful to https://github.com/nisevi for spending time closing Airbrake issues and grouping them, e.g. AgileVentures#2238.
Someone with some spare time could usefully triage the remaining issues into the sets that https://github.com/nisevi made, since the profusion of Airbrake issues makes it hard to see what's going on. All of the sets have the "airbrake" and "help wanted" labels, and mention "airbrake related" in the description.
Sensibly, any new Airbrake issue would be compared with that set, and either added to the summary tickets (with a link ref) and closed, or used as the basis for a new set.
Currently https://github.com/tansaku is project managing WebSiteOne. The project manager will review your pull request as soon as possible. Usually the project manager will need to sign off in order to merge a pull request.
The project manager will review the pull request for coherence with the specified feature or bug fix, and give feedback on code quality, user experience, documentation and git style. Please respond to comments from the project manager with explanations, or further commits to your pull request, in order to get merged in as quickly as possible.
To maximize flexibility, you can add the project manager as a collaborator on your WebSiteOne fork to allow them to help you fix your pull request, but this is not required.
If your tests are passing locally but failing on CI, please have a look at the failures; if you can't fix them, please do reach out to the project manager.