# Delivery Standards

We recognise a few things about deliveries:

- Every customer has different needs, so adapting to the most applicable way of working is better than forcing an entirely cookie-cutter approach
- Teams on the ground are usually best placed to decide how to structure their work
- Evolving the way we approach our deliveries is desirable as we mature our agile practices and learn from past successes

That said, there are a number of delivery norms that we expect teams to adopt by default. Where teams wish to deviate from these norms, we expect them to do so consciously, and with documented and justified reasoning.

## 1. Hold a standup every morning

Every team is expected to hold a standup involving Engineers, Delivery Managers, and, wherever possible, customer teams. Standups should focus on the Trello board for transparency and to visualise bottlenecks. Standups must use the yesterday/today/blockers format, and care should be taken to avoid standups turning into customer status reports.

## 2. Hold a retrospective at least every 2 weeks

Every team is expected to hold frequent retrospectives to help improve the team's effectiveness.

## 3. Hold a showcase at least once per week

Every team is expected to have their next showcase scheduled, and to be clear on what they plan to showcase. Teams must perform practice runs of their showcases. Teams must always showcase from pre-production or production environments. A showcase must conclude with a plan for the next iteration.

## 4. Practice Continuous Delivery

Every team is expected to practice Continuous Delivery from the first iteration. Deployments to pre-production and production environments must be automated. Automated tests must run as part of the automated build cycle.
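
As an illustration only, the sketch below shows the kind of build-test-deploy sequence a CI system might drive on every merge to main. The test runner (pytest) and the `./scripts/deploy.sh` / `./scripts/smoke_test.sh` helpers are assumptions made for the example, not a prescribed toolchain; map the stages onto whatever build and deployment tooling your delivery actually uses.

```python
"""Minimal sketch of a Continuous Delivery pipeline driver.

The test command and the ./scripts/* helpers are placeholders for
whatever build, test, and deployment tooling the team actually uses.
"""
import subprocess
import sys


def run(description: str, command: list[str]) -> None:
    """Run one pipeline stage; fail the whole build if it fails."""
    print(f"==> {description}")
    if subprocess.run(command).returncode != 0:
        sys.exit(f"Stage failed: {description}")


def main() -> None:
    # Automated tests run on every build, before any deployment.
    run("Run automated tests", ["pytest", "--maxfail=1"])

    # Automated deployments: pre-production first, then production,
    # gated by a smoke test rather than a manual release step.
    run("Deploy to pre-production", ["./scripts/deploy.sh", "preproduction"])
    run("Smoke test pre-production", ["./scripts/smoke_test.sh", "preproduction"])
    run("Deploy to production", ["./scripts/deploy.sh", "production"])


if __name__ == "__main__":
    main()
```

A hosted CI service would typically run a script like this (or equivalent native pipeline steps) automatically on every merge to main.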

## 5. Productionise the application

Every team is expected to deliver a production-ready application from the first iteration. The team must provide centralised application and infrastructure logging, application exception tracking, application performance monitoring, and automated dependency and security upgrades.
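
As a minimal sketch of the application-level pieces of this, using only the Python standard library: the JSON log format and the exception hook below stand in for whatever log aggregator and error-tracking service the delivery actually uses (infrastructure logging, performance monitoring, and automated dependency upgrades sit outside a snippet like this).

```python
"""Sketch of application-level logging and exception tracking hooks.

A real service would ship these logs to a central aggregator and
forward unhandled exceptions to an error-tracking service.
"""
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Emit log records as single-line JSON for a central log collector."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "timestamp": self.formatTime(record),
        })


def configure_observability() -> None:
    """Call once at application start-up."""
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])

    def handle_uncaught(exc_type, exc_value, exc_traceback):
        # This is where an exception tracker would also be notified.
        logging.getLogger("app").critical(
            "Unhandled exception", exc_info=(exc_type, exc_value, exc_traceback)
        )

    sys.excepthook = handle_uncaught


if __name__ == "__main__":
    configure_observability()
    logging.getLogger("app").info("Application started")
```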

## 6. Use Pull Requests

Every team is expected to make use of the Pull Request workflow, and to work in appropriately short-lived branches. Teams should not allow Pull Requests to span multiple days' work. Pull Requests should be reviewed by at least one other person before merging to main.
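
One common way to enforce the review rule is branch protection on main. The sketch below assumes the GitHub REST API's branch-protection endpoint, a token in the `GITHUB_TOKEN` environment variable, and a hypothetical repository name; it is illustrative only, and the same rule can be configured through the repository settings UI. Check the current API documentation before relying on the exact payload shape.

```python
"""Sketch: require at least one approving review before merging to main.

Assumes the GitHub REST API branch protection endpoint and a token in
the GITHUB_TOKEN environment variable; owner/repo below are hypothetical.
"""
import os

import requests

OWNER, REPO, BRANCH = "example-org", "example-repo", "main"  # hypothetical

response = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # At least one approving review before a Pull Request can merge.
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        # The remaining settings are required by the endpoint; null/false
        # leaves them disabled for the purposes of this sketch.
        "required_status_checks": None,
        "enforce_admins": False,
        "restrictions": None,
    },
)
response.raise_for_status()
```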

## 7. Deliver Done features

Every team is expected to deliver to the definition of done. Teams must not consider a feature done until the following criteria are met:

- Cross-browser tested
- Visually checked against designs
- Refactored and readable
- Optimised, with acceptable response times
- Has an appropriate level of test coverage
- Can be deployed to production and is ready for users

## 8. Maintain up-to-date roadmap and epic boards

Every team is expected to keep their roadmap and epic boards current.

## 9. Bring forward uncertainty

Every team is expected to bring forward uncertainty. Where an iteration contains riskier features, teams must work on these first. Where possible, teams should use spikes before an iteration starts in order to mitigate or remove uncertainty.

## 10. Communicate regularly with customers and each other

Every team is expected to be in contact with their customer at least once per day. Individuals should speak up when they're unclear on any aspect of the delivery, and teams must not defer communication until standup. Teams must have a conversation with their customer as soon as any iteration goals become at risk; a customer must never be negatively surprised by progress at a showcase. Teams should send a weekly summary email to their customer stakeholders, and share it for inclusion in the Made Tech TGIF email.

## 11. Maintain quality

Every team is expected to maintain a high level of quality at both the code and the application level. Teams must fully test their features as part of deployment to a pre-production environment. Teams and customers should share an understanding of the desired behaviour of all features in the current iteration. Teams must apply good scouting practices to improve the quality of every codebase they work on.