Retrospectives
- We discussed the proposal for switching to a single review system:
- One review only
- Only exception: if the reviewer is a non-core developer and the PR does more than renaming/restructuring (anything else?), a second review from any core developer is required.
- In the discussion, the majority voted for keeping the 2-review system with the following remarks:
- In a PR series that splits a larger connected work into smaller digestible pieces, the same first and second reviewer should be assigned so that they are able to follow the big picture.
- The first reviewer shall assign the second reviewer on approval.
- Any core member can delegate the second review to any other core member as they see fit.
- For small, separate things, use the suggestion and multi-line suggestion features as much as possible.
- Give push access to reviewers so they can push small suggestions (code alignment, `const`, alphabetical ordering, etc.) right away.
- Otherwise, all old rules still apply:
- Do not force-push unless necessary or already approved.
- If applicable, make single commit PRs that can be squash-merged without manual rebase.
- Politely contact people if they are not responding to review requests.
- Reassign the PR if you are assigned but have no time to review it.
- ...
- First retrospective after everyone moved to home office and we introduced the project board.
- In general, the board structure is well received and helps organise the work.
- Some of the issues that were observed:
- It takes a long time until reviewers respond to PRs
- everyone should try to respond quickly
- Please try to politely inform or re-request reviewers if they are not responding (be aware of the individual situations of the reviewers during the current COVID pandemic).
- Iterations too short - it often happens that it takes a week to understand the problem and the solution behind an issue and then another week to produce the final PR, which is then not finished within the iteration.
- This opens the following question: Are the issues not ready for sprint yet?
- If more unclear things are encountered while working on the issue, it should be labelled `needs refinement` and then discussed within the team or with some of the core members.
- Try to split the issues into smaller parts that can be tackled more easily.
- Actually, we should use the Monday meetings to discuss the iteration issues and the planned issues.
- Everyone should participate in writing user stories about things they encounter or request.
- Here is the general lifetime of a story:
- Write a user story explaining the rationale - it should contain all information necessary to see the benefit of the work (you can try the `as a ... I want ... such that ...` template to focus on the important things).
- The story is placed into the untracked backlog items and discussed at the next strategy meeting (remember that non-core developers can also join, especially if things are related to their work).
- In agreement with the stakeholder, the item is moved into the backlog or the current release cycle.
- If it is planned for upcoming iterations, it will be moved to the `planned` column and discussed within the team. Here, acceptance criteria and tasks should be determined and noted on the story.
- Once it is discussed and refined, it can be labelled `ready for sprint` and then tackled in the next sprint.
- And, as described above, if something unexpected happens or something is unclear, it has to be rediscussed.
- There is no place where we can freely drop stupid questions. (Also, there are no stupid questions, only stupid answers :D)
- We set up a new Gitter channel called Coffee-Kitchen where we can discuss things more openly or talk about stuff completely unrelated to work.
- We can use the thread feature of Gitter to continue discussion of a particular item without spamming the main discussion board.
- Celebrate the release.
- Involve the entire team in the release process.
- Keep the changelog up to date (make it part of the Definition of Done).
- Release more often.
- Automate release process as much as possible.
- Themed releases, e.g., the next release could be about search-related functionality; this has the advantage that breaking changes might be limited to one module and come with lots of great improvements; less scary for users.
- Good release notes matter; write them in essay form and add some jokes.
- They bind users who read them more strongly when they feel that the release is really about them.
- They advocate the issues of external people that were fixed.
- In the issue ticket, write a description that can be used in a release note. Show some code snippets as if they were documentation. Label these features to mark them for a release note. On release, gather these and only do some general editing.
- Describe the problem without technical jargon -> a good start for a user story; this helps other team members tackle the issue and implement a solution.
- 24th: BIOSTEC Conference
- New training material for filtering/processing bioinformatic files (BioC++ curriculum).
- Comparison to BioPython, BioJS, BioPerl, Seq, BioJulia etc.
- 29th: Modernising apps with SeqAn3
- Lambda almost done.
- Next application to modernise?
- Setup infrastructure for future applications.
- Probably have time until 15.06. (next report)
- 22nd-28th: Developer retreat
- Organise talks.
- Plan work to do on retreat.
- Integrate external projects.
- 3rd: KNIME Spring Summit: Variant Calling Pipeline
- Prepare tutorial with KNIME.
- ISMB BioC++ course: refined material/new material for BioC++ course
- Prepare course for participants.
- 1st: Support for additional workflow systems
- Finish support for KNIME (CTD).
- Add support for CWL.
- Maybe separate the Argument Parser into its own project.
- 5th-9th: ECCB BioC++ course
- New material for distributed search.
- Compare with solution that uses simple splitting of indices.
- What else should be supported or demonstrated:
- Updates to database (insert?, erase?, edit?)
Application | search | alignment | range | io | argument_parser |
---|---|---|---|---|---|
LAMBDA | `sdsl::bit_vector`; implicit sentinels; EPR dict; search interface; performance | vectorised (banded) AA alignments | ? | faster seq_io; fast align_io; BLAST out | CTD support for nested arg_parser |
RNA Aligner | $LAMBDA; adapted search interface for RNA structure profile | vectorised alignment with PSSM; vectorised (banded) Myers'/Gotoh; MSA | ? | fast seq_io; fast align_io; MSA formats | $LAMBDA |
DREAM Index | $LAMBDA; IBF; nested IBF; partitioned IBF; JST search | vectorised (banded) Myers'/Gotoh; wavefront alignment | journaled string; journaled string tree | fast seq_io; fast align_io; JST format | $LAMBDA |
RNA Mapper | $LAMBDA | vectorised (banded) Myers'/Gotoh; wavefront alignment | ? | fast seq_io/align_io | $LAMBDA |
iGenVar | ? | breakpoint refinement: vectorised (banded) convex; wavefront | BAM utility views? | fast BAM in; BAM index; tabix; annotation format (vcf/bcf) | $LAMBDA |
Motivation: User satisfaction
- The tools feel like a suite.
- Applications have the same user interface:
- Similar options have the same name.
- Similar options are placed under the same sections.
- Similar sub-parsers have the same name.
- Same repository structure.
- Same project README: badges, platforms, versions etc.
- Tutorials that explain the usage.
- Possibly how-tos for moving from a competing application to this tool.
- API docs for an application.
- Template git repository to copy from.
- `main` with Argument Parser setup (see the sketch after this list).
- Setup CI with GitHub Actions.
- Setup nightlies.
- Unit test infrastructure.
- Test coverage.
- API documentation -> methods in app should be documented as well.
- Micro benchmarks.
- Macro benchmarks.
- Build dependency on the library -> when the library builds successfully, build the applications -> if they build, deploy a nightly snapshot.
- GitHub pages with corporate design.
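A minimal sketch of what the `main` with Argument Parser setup could look like, assuming the `seqan3::argument_parser` API; the app name and the input option below are placeholders, not part of the template decided above:

```cpp
#include <filesystem>
#include <iostream>

#include <seqan3/argument_parser/all.hpp>

int main(int argc, char ** argv)
{
    // Hypothetical app name and option; every application swaps in its own interface.
    seqan3::argument_parser parser{"my_app", argc, argv};
    parser.info.short_description = "A template application.";

    std::filesystem::path input{};
    parser.add_option(input, 'i', "input", "Input file to process.");

    try
    {
        parser.parse(); // Throws if the user input is ill-formed.
    }
    catch (seqan3::argument_parser_error const & ext)
    {
        std::cerr << "[Error] " << ext.what() << '\n';
        return -1;
    }

    // ... application logic goes here ...
    return 0;
}
```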
- Familiarise with the SDSL and open tasks: EPR dict, implicit sentinels, `sdsl::bit_vector` with wasting bits, ...
- Finalise the search interface (a usage sketch follows this list).
- Performance benchmark of search.
- Investigate compile time.
- Investigate performance.
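For reference, a minimal sketch of the search interface being finalised, assuming the SeqAn3 3.0-series API; text and query are toy data:

```cpp
#include <seqan3/alphabet/nucleotide/dna4.hpp>
#include <seqan3/core/debug_stream.hpp>
#include <seqan3/search/fm_index/fm_index.hpp>
#include <seqan3/search/search.hpp>

int main()
{
    using seqan3::operator""_dna4;

    // Toy text and query; a real application would read these from files.
    auto text  = "ACGTACGTACGT"_dna4; // std::vector<seqan3::dna4>
    auto query = "ACGT"_dna4;

    seqan3::fm_index index{text};                // Build an FM-index over the text.
    auto results = seqan3::search(query, index); // Lazily enumerate all exact hits.

    seqan3::debug_stream << results << '\n';     // Print the hit positions.
}
```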
- Work in sprints.
- Regularly meet to plan upcoming sprints.
- Feature/task descriptions must be self-explanatory so that everyone in the team can work on them.
- During a sprint each team member pulls work from the list of tasks.
- Each team member should not always tread the easiest path but sometimes choose a ticket from a lesser-known module or field of expertise.
A feature is ready when all these points have been fulfilled:
- Fully implemented and approved.
- Unit tests pass.
- Test coverage > 97%
- Micro benchmark.
- API documentation.
- Tutorial/teaching material added.
- Macro benchmark with <= 10 % performance difference to SeqAn2 (if applicable).
- Tests should compile in less than 30 seconds.
- Changelog entry.
- Openly communicate in the team if a retrospective is missing or if a topic was not dealt with.
- Have regular retrospectives.
- Try NextCloud and Exchange as possible team calendars.
- Lengthy discussions on GitHub -> maybe pair programming?
- Naming
- Design decisions
- A lot of different opinions
- Non-full-time employees have days on which they can't review.
- Too many review cycles where we find new stuff
- 1-2 days of waiting
- Certain reviews take long if only one person is available
- A rebase often causes the changes to no longer be visible.
- Long periods of time between review and applying changes
- Limit the number of review rounds! No more than 3.
- Use new commits for review requests and squash them later!
- First look into the PR and make your notes, then see the developer one-on-one if it is a big review or if there are a lot of things you feel need to be clarified.
- Script to show how many reviews a person has
- Reassign if you feel you don't have the time
- Communicate who is available for reviewing (maybe on Gitter)
- Use Gitter more often for group communication
- Mark things that were resolved as such on GitHub
- Keep them as long as the group feels they are beneficial
- Track each person's todos in Trello
Present: 5 members
- Summary of the first iteration:
- 26 planned tasks
- 10 active tasks
- 4 closed
- The team agreed that @rrahn takes on the role of agile coach for the team.
- This means that he is responsible for the execution of the agile environment and for consultation during the transformation to become more agile as a team.
- Feedback on the first "Story Gatherings":
- In general, there was a really positive effect, and every member of the team had a good impression of it.
- There was some discussion: some members who only recently joined a specific project had the feeling that this was the first meeting where the clear picture, vision, and reasoning for the project were actually communicated. Accordingly, the meetings were quite important, but rather as kick-off meetings than as story-gathering meetings.
- A detailed discussion with the project members of the team after the meeting led to the first real initial stories that can now be refined further.
- General feedback:
- There is a big concern within the team that our PRs take quite long.
- Looking back at the PRs closed in the last 30 days, it took on average 67.8 days to close them.
- The following list identifies some of the reasons:
- Too much focus on code formatting, naming issues, etc. (Is there a lack of a common coding/naming style? Is there missing domain or technical knowledge?)
- Too big. (What is a reasonable size? What would be a good measure, e.g. LOC?)
- Too many rounds of reviews. (Too big? Not focused enough during the review? Missing technical/domain knowledge?)
- No response in time. (Which channels to communicate on? When to look into a PR?)
- Hard to track changes. (Squashed/force-pushed commits)
- The team decided to actively record how PRs are reviewed during the next iteration to get a clear understanding of what takes how long: When does it happen? Why might this happen? Who was the reviewer? (The last one is not meant to blame anyone but to figure out whether large gaps in domain/technical knowledge result in long discussions or many changes.) How long do you need to wait between re-requesting a review and receiving the new one, etc.?
Tasks for next iteration
- Discuss the topic for the next iteration and define tasks for it.
- Have another story refinement session to work out more fine-grained stories (not tasks!) from the initial stories.
- Every team member should actively record why PRs take so long until they can be merged (see point 4 above)
- Seen from the perspective of the reviewer.
- Seen from the perspective of the reviewed person.