Best practice for Scrum "done" concept in JIRA [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.

Closed 5 years ago.

I work at a small service-based company where we are starting to implement Scrum practices, and we are also starting to use JIRA with GreenHopper for issue tracking. Our team has defined "done" as:

  • coded
  • unit tested
  • integration tested
  • peer reviewed
  • qa tested
  • documentation updated

I'm trying to figure out whether to use a separate issue for each item in the above list for each task, whether to implement some of these items in the ticket workflow, or whether simply lumping them together in one issue is the best approach.

I'm disinclined to make these subtasks of a task, as JIRA allows only one level of issue nesting and I fear there is a better use for that capability.

I'm also not too excited about modifying the workflow, as that approach has proved to be a burden for us in other systems.

If all of these items are part of the same ticket, that seems awkward to me: the work is likely spread across multiple team members, and it'll be hard to make tasks that are under 16 hours that include all of those things.

I feel like I understand all of the issues, but I don't yet know what the best solution is.

Is there a best practice? Or some strong opinions?


Done is done - it has to be all those things you defined. However, treating them explicitly as steps in a bug tracker can have the undesired side effect of encouraging divisions within the team and throwing stuff over the wall: coders would claim they are done once the ticket is marked "coded" and "unit tested", testers once it is marked "tested", and so on.

This is exactly the opposite of what Scrum intends: the whole team commits to completing the stories so that they meet the definition of done in the end. So even though some of the elements of achieving "done" are indeed steps, one should be very careful about solidifying those steps in any kind of defined workflow.

(This, by the way, shows nicely why using a bug tracker as a Scrum tool is a bad idea. They are different tools that should be optimized for different things - even if linked together through some APIs.)


I certainly wouldn't nest them, since they are steps common to each task. Making them subtasks would just increase the complexity and boilerplate of the system. These seem like perfect workflow stages to me.

Something like Submitted->Assigned->Coding->Review->Testing->Finished.

Here Coding requires "coded", "unit tested", and "integration tested" before moving to Review; Review requires peer review before moving to Testing; and Testing requires QA testing before moving to Finished.

The only reason this would be tricky is if you're allowing Peer Review and Testing to be done in parallel. I see problems with allowing that, since if the code fails peer review and is subsequently changed it invalidates the testing work done by QA.
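
As a rough illustration of the gating described above, here is a minimal sketch in plain Python. This is not GreenHopper/JIRA configuration; the stage and check names are just the ones proposed in this answer:

```python
# Minimal sketch of the proposed workflow as a gated state machine.
WORKFLOW = {
    "Submitted": "Assigned",
    "Assigned": "Coding",
    "Coding": "Review",
    "Review": "Testing",
    "Testing": "Finished",
}

# Checks that must be complete before an issue may leave each stage.
GATES = {
    "Coding": {"coded", "unit tested", "integration tested"},
    "Review": {"peer reviewed"},
    "Testing": {"qa tested"},
}

def advance(stage: str, completed: set) -> str:
    """Return the next stage, or raise if the stage's gate checks are unmet."""
    missing = GATES.get(stage, set()) - completed
    if missing:
        raise ValueError(f"Cannot leave {stage}: missing {sorted(missing)}")
    return WORKFLOW[stage]

# advance("Coding", {"coded"}) raises: missing ['integration tested', 'unit tested']
# advance("Coding", {"coded", "unit tested", "integration tested"}) -> "Review"
```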


  • coded
  • unit tested

IMHO these belong together, as both should be handled by the same person (preferably with TDD, which makes it practically impossible to separate them).

  • integration tested

In our team, this is usually done by the same developer, so we typically do it as part of the above task. Other teams may do it differently.

  • commented

Do you mean code comments? If so, to me this does not deserve a separate task; otherwise, please clarify.

  • peer reviewed

A separate task for a separate developer (or more).

  • qa tested

A separate task for testers / QA personnel.

I would add documentation - it may not always be needed, but it often is. Again, it should be a separate task, typically for the same person who did the implementation (but not always).

One prime concern for practically all the Scrum teams I have worked with so far is making sure that nothing important from the above is forgotten. Partitioning the work into distinct tasks can help: you can clearly see in your backlog what's left to do, whereas lumping everything into one task makes it easy to forget this or that little detail. For us, the most commonly forgotten items were code review and documentation, which was the main reason we turned those into independent tasks.
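
To make "nothing important is forgotten" checkable, a small script can compare a story's subtasks against the definition of done. The sketch below uses the third-party Python jira package; the server URL, credentials, issue key, and keyword list are placeholders, and matching subtasks by summary text is just one possible convention:

```python
# Hedged sketch: flag definition-of-done items that have no matching subtask.
from jira import JIRA

DONE_ITEMS = ["peer review", "qa test", "documentation"]  # keywords; assumed convention

jira = JIRA(server="https://jira.example.com", basic_auth=("user", "api-token"))
story = jira.issue("PROJ-123")  # placeholder issue key

summaries = [st.fields.summary.lower() for st in story.fields.subtasks]
forgotten = [item for item in DONE_ITEMS
             if not any(item in s for s in summaries)]

if forgotten:
    print(f"{story.key} has no subtask yet for: {', '.join(forgotten)}")
```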


Done defines what the Team means when it commits to “doing” a Product Backlog item in a Sprint. Some products do not contain documentation, so the definition of “done” does not include documentation. A completely “done” increment includes all of the analysis, design, refactoring, programming, documentation and testing for the increment and all Product Backlog items in the increment. Testing includes unit, system, user, and regression testing, as well as non-functional tests such as performance, stability, security, and integration.

Reference: The Scrum Guide, by Ken Schwaber and Jeff Sutherland (the inventors of Scrum).

You state that you are following "Scrum practices". It sounds to me like you are using a few parts of the Scrum framework and not others; is that true? First of all, Scrum is not a set of practices but a framework: you either use the framework or you don't. It works on the basis of inspect and adapt, so apart from the basic framework rules nothing is set in stone, and you won't get an exact answer to your question. The best way to find the answer is to hire experienced Scrum professionals, developers, and testers, and to try the above "done" plan with your Scrum team.

Always remember: inspect and adapt. There are three points for inspection and adaptation in Scrum. The Daily Scrum meeting is used to inspect progress toward the Sprint Goal and to make adaptations that optimize the value of the next work day. The Sprint Review and Sprint Planning meetings are used to inspect progress toward the release goal and to make adaptations that optimize the value of the next Sprint. Finally, the Sprint Retrospective is used to review the past Sprint and determine what adaptations will make the next Sprint more productive, fulfilling, and enjoyable.

Do not spend loads of time documenting or searching for a solution to a given process problem; most of the time the problems change faster than you realize. It is better to inspect and adapt, provided you have at least a basic knowledge of Scrum and are using the Scrum framework rather than just a few Scrum-like practices.


We use a pretty similar system in JIRA, and I have an open question here and on the Atlassian boards asking a very similar question. We have a similar definition of done. We create the main story in descriptive form, e.g. "The legend text on the profit and loss graph overlaps". We then define sub-tasks which are either of type "technical" or "process". Technical tasks are the actual work of implementing the story: "Research possible causes on vendor site", "Implement fix in the infographic class". Process items include "Peer Review", "Make Build", "QA Testing", and "Merge".

As one comment noted, you may have QA going on before or during peer review. As part of our Scrum process, QA is going on nearly all of the time (they are part of the team): sometimes they sit with the developer, sometimes they get "bootleg builds" to run in a test environment. This is exploratory testing and is considered part of the coding process for us. The "QA Testing" sub-task is for integration and regression testing and is a final validation of the whole story after peer review is completed. By that time the QA team already has a complete test plan they worked up during exploratory testing, and it's typically just a matter of running through the plan and checking it off.

We've gotten to this point after running sprints for a year and making changes during retrospectives. I'm open to suggestions, as I think one of the downsides of the retrospective is that you can group-think yourself in one direction with little hope of ever backing all the way out and considering a different path.


We use two boards for this purpose. One board is for the development sprint, where "Done" means Ready for Testing. You can't enter a sprint unless you're well and truly ready to start development (all analysis done, estimates done, people know what they are supposed to be doing - all the conversations have been had, shall we say, though our conversations tend to take place in JIRA comments given the distributed team), and you exit when you finish development. That's the best way to track whether our development team is meeting its own goals without being impacted by QA. Meanwhile, QA uses a Kanban-style board where items go from "Ready for Testing" (their to-do column) through "In Testing" to "Ready for Release".

We switched to this because we previously had all these steps on a single board, and we weren't "meeting our commitments" within any sprint: there was no way to both develop and test within a single sprint, since we have to do a code migration to the QA environment for final testing, even though testing happens all along the way. We are still figuring out how to do things correctly, so this may not be the right answer, but it sounds like it's not something you've considered, so maybe it will work for you.
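
For what it's worth, this two-board split boils down to two JQL filters. Here is a rough sketch with the Python jira package; the project key and status names are assumptions based on the setup described above:

```python
# Hedged sketch: the development board and the QA kanban board as JQL filters.
from jira import JIRA

jira = JIRA(server="https://jira.example.com", basic_auth=("user", "api-token"))

# Development sprint board: active-sprint work not yet handed to QA.
dev_issues = jira.search_issues(
    'project = PROJ AND sprint in openSprints() AND status != "Ready for Testing"')

# QA kanban board: "Ready for Testing" is the to-do column.
qa_issues = jira.search_issues(
    'project = PROJ AND status in ("Ready for Testing", "In Testing", "Ready for Release")')
```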


and it'll be hard to make tasks that are under 16 hours that include all of those things.

This is your real issue: the ability to break stories down into small, useful, vertical slices of functionality. Working on this will make your team more agile and give the PO more flexibility.

On the contrary, breaking the work down by process/mechanical step will only make you less agile and serves no useful purpose. Either you are done or you aren't; no one cares if you are dev-complete but not tested, so don't bother tracking it by the hour... it's waste.

Refocus on your stories, not on tasks.


We use subtasks.

Given that the story is a shared item (the whole Scrum team works on it), we use subtasks as "post-it notes" to track the tasks that individuals need to tackle.

We don't require that every little piece of work be represented as a subtask.

We are not bookkeepers, but developers.

The team agreement is that if you can't take up a task immediately, you just jot it down as a subtask of the story (using the Agile plugin, this is really easy). That is, we will never systematically have a "create unit test" subtask, but on occasion, when someone is struggling to get that dynamock up and running, you will see such a subtask pop up in the story. Having it there allows the team to discuss it during the scrum.

If you want to generate the checklist automatically, look at the Create Subtask on Transition plugin: https://studio.plugins.atlassian.com/wiki/display/CSOT/Jira+Create+Subtask+for+transition. It allows you to add the subtasks automatically when the story has been committed.
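
If the plugin doesn't fit, roughly the same effect can be scripted against the JIRA REST API. A hedged sketch with the Python jira package; the project key, story key, and subtask names are placeholders (use whatever your team's checklist actually is):

```python
# Hedged sketch: create a standard checklist of subtasks under a story.
from jira import JIRA

CHECKLIST = ["Peer Review", "QA Testing", "Update Documentation"]  # placeholder names

jira = JIRA(server="https://jira.example.com", basic_auth=("user", "api-token"))

def add_checklist(story_key: str) -> None:
    """Create one subtask per checklist item under the given story."""
    for summary in CHECKLIST:
        jira.create_issue(
            project="PROJ",                  # placeholder project key
            parent={"key": story_key},
            summary=summary,
            issuetype={"name": "Sub-task"},
        )

add_checklist("PROJ-123")  # placeholder story key
```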

BTW - JIRA is more than a bug tracker. We use it in a wide variety of applications, including managing our sprint activity. (As an Atlassian partner, I'm biased. :-)

Francis


The important thing is that you use sub-tasks as real tasks, not as activities of the main task. An issue tracker is primarily meant for tracking what you are doing, not how you are doing it or in what order.
