This is a continuation of a post on how to structure your agile sprint schedule.  In Part 1, we discussed approaches to sprint length and the planning process.  In this post, we will examine testing and release.

Testing

Developers should be writing unit tests for their code.  In test-driven development, these are written first.  I don’t strongly advocate for that approach, but I do insist on unit tests accompanying any reasonably sized code check-in.  Unit tests should not be left to the end of the sprint or rolled into tech debt.  You should also have a continuous integration system in place that runs the full set of unit tests against the code base periodically (ideally on every check-in).  If a unit test fails, the associated developer should correct the issue immediately.  I have seen continuous integration runs accumulate unit test failures to the point where the runs became pointless.  Enforce the discipline within the team to prevent this.
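As a minimal sketch, a continuous integration gate for the unit test run might look like the following.  The `run_unit_tests` stub is an assumption standing in for your project's real test command (e.g. `pytest -q` or `mvn test`):

```shell
#!/bin/sh
# Sketch of a CI gate for the unit test run.  run_unit_tests is a stub
# standing in for the project's real test command; substitute your own
# (e.g. pytest -q, mvn test, go test ./...), which should exit non-zero
# on any test failure.
run_unit_tests() {
  true
}

if run_unit_tests; then
  status=passed
  echo "unit tests passed: build stays green"
else
  status=failed
  echo "unit tests failed: the responsible developer should fix this immediately"
fi
```

Wiring this into the pipeline so that a red run blocks merges is what keeps the failure count from creeping up over time.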

Your QA team should be involved throughout the sprint process.  Don’t just pull in a QA team member at the end of the sprint to conduct regression testing.  A representative from QA should offer input into the planning process and should work with developers to test components of sprint deliverables as they are built.  These front-loaded tests are very valuable, as they flag issues early in the sprint; otherwise, all the issues accumulate until the final regression test.  As the sprint end approaches, the team should taper down code commits representing new features and allow the QA representative to focus on end-to-end testing.  Bugs that surface from full integration testing can then be addressed in the final days of the sprint.


Release

I define the release process as the activities that occur between the final planned code commit and the point at which the code is stable in production.  With this definition, release includes the following activities:

  • Branch Cutting.  Depending on how your team manages its code base, you will likely create a branch off of your main code line for the sprint release.  Often called the “release candidate”, this separate branch allows your team to test and apply fixes to the release candidate in isolation from the main code line.  The main code line may continue to get changes for the next sprint, without blocking developers while final regression testing occurs.  Your release engineer (often a designated developer on a small team, or a dedicated role on a larger team) will coordinate and drive the branching activities.
  • Unit Test Run.  Following the branch cutting, you will want to run your full suite of unit tests on the release candidate in your continuous integration environment.  Any unit test failures should be addressed by the developers immediately.
  • Regression Testing.  Your QA team members will install the release candidate into a dedicated test environment and run through their suite of regression tests.  Ideally some of these are automated, but a good percentage will likely need to be conducted manually.  Particularly if your application has a rich UI and complex business logic, having a set of manual “sanity” tests is important.  As the QA engineers find bugs, they should log them for developers to fix.
  • Launch Planning.  As the QA engineers and developers wrap up the regression test process, the release engineer and team leads should conduct launch planning.  This involves determining the sequence of steps required to move the release code to production, including a plan for any database schema changes and details of how those will be applied.  A list of post-launch checks should also be assembled; these will likely include business metrics and application performance monitors.  Finally, a rollback plan should be determined, in the event that there is an issue with the release.
  • Release.  Once QA has signed off on the release candidate and all bug fixes have been committed, the team is ready to start the actual code release.  At Zoosk, we would conduct a quick stand-up with all sprint participants to review the launch plan and ensure that post-release checks were delegated out.  This stand-up also included the product manager(s), so that they were aware of the imminent release.  After the code is pushed live, the release checks are conducted.  Once all release checks are cleared, the release engineer calls the release stable.  If a release check indicates an issue, the team can troubleshoot it quickly and determine whether a rollback is required.
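The activities above can be sketched end to end as a shell script.  Everything here is an assumption for illustration: the branch naming, the sprint number, and the `run_unit_tests`, `deploy_to_production`, and `run_check` stubs all stand in for your team's real tooling (and the branch cut is demonstrated against a throwaway repository so the commands can run anywhere):

```shell
#!/bin/sh
# End-to-end sketch of the release activities described above.
# Branch names, deploy steps, and checks are assumptions; the stub
# functions stand in for real test, deploy, and monitoring tools.

# 1. Branch cutting (done in a throwaway repo for the demo; in your
#    real repo you would also push the branch for QA and CI to use).
cd "$(mktemp -d)"
git init -q -b main
git config user.email "release@example.com"   # local config for the demo repo
git config user.name "Release Engineer"
git commit -q --allow-empty -m "sprint work"
SPRINT=42                                     # hypothetical sprint number
git checkout -q -b "release/sprint-${SPRINT}" # the release candidate branch

# 2. Unit test run on the release candidate (stubbed).
run_unit_tests() { true; }
run_unit_tests && echo "unit tests passed on $(git branch --show-current)"

# 3. Release: push the candidate live (stubbed), then run the
#    post-launch checks from the launch plan.
deploy_to_production() { true; }
run_check() { true; }                         # would query a real monitor
deploy_to_production

failed=0
for check in homepage_loads signup_flow payment_api; do
  if run_check "$check"; then
    echo "PASS $check"
  else
    echo "FAIL $check"
    failed=1
  fi
done

# 4. Declare the release stable, or fall back to the rollback plan.
if [ "$failed" -eq 0 ]; then
  echo "release stable"
else
  echo "rolling back per launch plan"
fi
```

The useful property of a script like this is that the launch plan, post-launch checks, and rollback trigger live in one reviewable place rather than in someone's head on release day.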

Other Considerations

  • Holidays.  If a holiday occurs during your sprint, just subtract that day from the number of days available in that sprint.  I don’t recommend that you extend the sprint by a day, as this would likely break the integrity of the Monday-Friday sprint schedule.
  • Progress Checks.  While your team(s) will report on sprint status at daily stand-ups, for larger teams it makes sense to hold a formal mid-sprint progress check.  For a multi-week sprint, this usually works best on the Friday in between.  Participants should be the product manager and team leads.  At this progress check, the participants should review completed versus planned work items and gauge the likelihood of completing all items by the end of the sprint.  If it appears that not all work items will be completed in the second half of the sprint, then the product manager and team leads should determine a course of action, such as reducing the scope of a work item or scheduling it into a future sprint.  Planning for developer “overtime” as a means to catch up should be avoided, but it does represent an option in some cases.
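The mid-sprint check above boils down to simple arithmetic; a minimal sketch, with hypothetical counts, is:

```shell
#!/bin/sh
# Sketch of the mid-sprint progress check.  The counts are hypothetical;
# in practice they come from your sprint tracking tool.
planned=20                        # work items committed for the sprint
completed=8                       # work items done at the halfway point
half_expected=$((planned / 2))    # rough pace needed to finish on time

if [ "$completed" -lt "$half_expected" ]; then
  verdict="at risk"
  echo "$completed/$planned done: at risk; reduce scope or move items to a future sprint"
else
  verdict="on track"
  echo "$completed/$planned done: on track"
fi
```

A straight halfway split is a crude pace estimate; the real judgment call about scope versus schedule still belongs to the product manager and team leads in the room.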