
Too Much Work in Process?

Credit: Richard Smith via Flickr (Creative Commons)

Does your team’s burndown chart look like the one below, where only a small number of stories is accepted during the first part of the Sprint and the acceptance rate then hockey-sticks towards the end?  In the graph, you can see that the accepted total (the green bar) stays under 10 story points for the first 14 days of a 21-day Sprint.  It then gradually rises to 20 and rockets up in the last 3 days.  This graph is from a Scrum team that follows a 3-week Sprint.

 

Iteration Burndown Chart

 

 

Or, represented a different way, here is the team’s cumulative flow diagram from the same Sprint.  Note that within the first 3 days of a 3-week Sprint, the team has put 46 out of 66 story points in play.

Cumulative Flow

Does your team see any ill effects from taking on too much work in process?  Do the testers on the team get squeezed trying to jam 40-60 hours of work effort into the last 1-2 days of the Sprint?  Are they compelled to make risk-based testing decisions?  Does your team, despite its best intentions, end up cutting corners by writing crappy code, carrying over defects and accumulating technical debt?

Does your team put in overtime to complete all the in-flight stories at the end of the Sprint?  During your story demonstrations, does your team leave your Product Owner disappointed because it can’t accommodate the simplest changes, forcing her to write new stories in the backlog to cover minor changes?

If your team’s activity resembles these charts, or it encounters some of these symptoms, it is possible that it is taking on too much work in process (WiP).

 

Little’s Law and Queuing Theory

Let’s do a quick review of queuing theory and something called Little’s Law as formulated by John Little.  Little’s Law is expressed using a simple formula:

L = λW

Here,

L = average number of items in the queuing system,

W = average waiting time in the system for an item, and

λ = average number of items arriving per unit time

 

Sometimes, Little’s Law is also restated as: CT = WIP/TH.

Where CT = W = cycle time, the average time from when an item enters the system to when it exits; WIP = L = work in process, the number of items in the queue; and TH = λ = throughput, the average output rate of the process.

It follows from this that your cycle time is directly proportional to the size of your queue: for a given throughput, the more items in process, the longer each one takes.

So, if you are trying to improve your cycle time (how fast a story gets accepted), you have two options:

a)     Improve the throughput of your process/system

b)    Decrease the average number of stories you work on simultaneously (the WIP in the equation – the queue)
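The arithmetic behind these two options follows directly from Little’s Law.  A minimal sketch (the story counts and throughput here are hypothetical):

```python
def cycle_time(wip, throughput):
    """Little's Law restated: CT = WIP / TH."""
    return wip / throughput

# A hypothetical team that accepts 2 stories per day:
print(cycle_time(20, 2))  # 20 stories in flight -> 10-day average cycle time
print(cycle_time(10, 2))  # halving WiP -> 5-day average cycle time
```

Note that halving the queue halves the cycle time without touching throughput at all.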

Typically, throughput is a measure of productivity and largely depends on a team’s skillset or on the system the team operates in.  Productivity or structural changes take time to make.  For example, a team might realize it can get more efficient by putting test automation in place, saving time compared to manual testing.  However, the team may not have anyone with test automation skills.  So it will need to acquire those skills, which means a team member will need training and will then have to learn to implement test automation.  These activities take time.

Or say your team needs to promote code to the staging environment as part of your story acceptance criteria.  But to promote code you need a deployment resource from the release team.  And if this resource is shared between a few teams, as is typical, it means systemic wait time until the resource is available to complete the task.  You can decrease this wait time by: 1) getting a deployment resource dedicated to your team who can do code promotion; 2) having a team member acquire those skills; or 3) if you already have someone with those skills on the team, lobbying for changes to the department policy that requires that only a deployment engineer can deploy to the staging environment.  Any of these options will take some time to put in place.

So the best lever a team has to decrease cycle time is to decrease the average number of stories it works on at any given time.

 

Why is Less WiP Better?

Queues are all around us: traffic during the morning and evening commute, the restroom line at a stadium at half-time, a doctor’s office waiting room, a grocery checkout counter, and many more.  In lean terms, queues are considered inventory, and inventory represents waste.  And as anyone impatiently waiting in a line can attest, it is a waste of time.

With Scrum, you are already delivering working software more frequently than with traditional methods.  So many Scrum teams may not even care that, within the Sprint boundary, there is too much WiP.  Besides, the Agile principle only states, “deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale”.  Nowhere does it say to reduce your WiP even further.

Or it could be that teams prefer higher WiP because it provides that extra impetus to complete their work as the Sprint-end deadline approaches.  Or a team doesn’t mind putting in overtime during the last week or last few days of the Sprint.

But I suspect many teams and team members want to achieve a realistic and sustainable pace in meeting their Sprint goals.  They are motivated by getting constant feedback from their business partners or POs by showing working software, early and often.  And they don’t want to systematically succumb to bad quality or crappy code.

As a matter of fact, Rally Software (disclosure: I am a Rally user) conducted an empirical analysis of teams and summarized its findings in The Impact of Agile Quantified (note: the paper is behind a free registration wall).  It found that teams that aggressively control WiP cut their time in process in half, and that teams that control their WiP have 4 times better quality compared to teams with the highest WiP.  So if faster feedback (or faster time to market) or higher quality is desired, it pays to lower your WiP.

But note that there is an optimal WiP for your team.  If you push your WiP too low, you can lower your productivity: the same study found that teams with very low WiP (0-2 items) had lower throughput.  So there is certainly a balance to be struck between too much and too little WiP.

 

Ways to Decrease WiP

Generally, if your team uses Agile management software, it probably has a built-in report that can show your team’s cycle/lead time.  Once you know it, you can start setting targets to decrease WiP.  For example, if your team’s cycle time is 10 days and your Sprint backlog is 10 stories, your team might decide to start with only 5 stories and get them designed, analyzed, developed and tested.  Review those stories with the PO, get feedback, get them accepted, and then move on to the next set.  This is the basic concept of Scrumban: melding the Kanban practice of controlling WiP with the Scrum framework.
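If your tool lacks such a report, a rough cycle-time average is easy to compute yourself.  A sketch with made-up numbers (the dates here are hypothetical, standing in for whatever your tracking tool exports):

```python
from datetime import date

# Hypothetical (started, accepted) dates exported from your tracking tool
stories = [
    (date(2013, 3, 4), date(2013, 3, 15)),
    (date(2013, 3, 5), date(2013, 3, 12)),
    (date(2013, 3, 6), date(2013, 3, 18)),
]

# Days each story spent in process, then the team average
cycle_times = [(accepted - started).days for started, accepted in stories]
avg = sum(cycle_times) / len(cycle_times)
print(cycle_times, avg)  # a 10-day average suggests starting with fewer stories
```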

In order to lower your WiP, each team will have to experiment to see what works best.  Here are some potential things to try:

  • Work with the PO to deconstruct features into smaller stories during Backlog Refinement meetings, so that each can potentially be completed within a week.
  • Set a team agreement on having only X number of stories in progress.
  • Set general guidelines on how to “swarm” when the team WiP limit is met.  For example, an idle team member could: 1) help take up a task on an existing WiP story; 2) pair up with someone to learn a constrained skillset; 3) refactor code or test automation code; 4) swat down defects; or 5) improve automated test coverage.  Have team members “swarm” on a few stories instead of trying to put all the work in play.
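A team agreement like this boils down to one check before anyone pulls a new story.  A minimal sketch, assuming a hypothetical limit of 5 that the team tracks itself:

```python
WIP_LIMIT = 5  # hypothetical team agreement: at most 5 stories in progress

def can_start_new_story(in_progress, wip_limit=WIP_LIMIT):
    """When the limit is met, an idle team member swarms on
    in-flight work instead of pulling a new story."""
    return in_progress < wip_limit

print(can_start_new_story(3))  # True: pull the next story
print(can_start_new_story(5))  # False: swarm on an in-flight story instead
```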

As you try to limit WiP, even with the above swarming guidelines, you will find that some team members might idle.  It will feel counter-intuitive.  But you are not trying to achieve optimal efficiency at the individual level; you are trying to achieve it at the team level.  If you manage to lower WiP, the team will see that its cycle time improves, quality is better, feedback loops are tighter, and there is less risk in meeting Sprint goals; the team will potentially achieve a more sustainable pace and experience less volatile velocity.

Reduced to soundbites, the lower-WiP mantras are:

Don’t WiP it, Ship it.

Stop Starting, Start Finishing.

 

References:

Paper on Little’s Law with a few examples:  http://web.mit.edu/sgraves/www/papers/Little’s%20Law-Published.pdf

Poppendieck, Tom and Mary, 2007, Implementing Lean Software Development: From Concept to Cash,  Addison-Wesley Professional.

 


Workers assembling radios. Photographer Lewis Hine. US National Archives – Creative Commons image

Typically, in the plan-driven model, scope is fixed and cost and schedule are the variables.  Many large-scale software projects were, and continue to be, implemented this way.  In many instances, when a particular scope is desired within a given time frame (fixing the schedule), plan-driven projects add resources.  But as we know, simply adding resources to a project doesn’t always bring about the desired goals.  As a matter of fact, adding resources late to a software project actually has an adverse effect.  This was observed by Fred Brooks in his book The Mythical Man-Month and is known as Brooks’ Law: “adding manpower to a late software project makes it later”.

With Agile software development, the triangle gets inverted: cost and schedule are fixed and the scope is variable.

Waterfall-Agile Triangle

Even on Agile projects, there can be instances when a fixed scope is desired within a given time period; for example, when an external entity drives a compliance or mandated project.  In those instances, the only lever stakeholders may have is adding resources to a delivery with fixed scope and schedule.  But are there better ways to add resources or capacity to an Agile project without falling into the same traps experienced with plan-driven methods?

Before we answer that, let’s briefly consider teams on Agile projects.  Agilists have generally made two recommendations for effective teams:

  • Keep the team intact, or stable, for long periods of time
  • Keep the team size between 5 and 9

A recent paper called The Impact of Agile Quantified seems to bear these two recommendations out through empirical data.

Stable teams, defined as having less than 10% membership change in a 3-month period (vs. unstable teams, defined as having 40% variance in their membership), tend to:

  • Have more throughput (volume of work)
  • Have less variance in their throughput (they are more predictable)
  • Have less defect density (higher quality)
  • Have less time in process (better time to market)

From the same research findings, teams of 5-9 members have the most balanced performance when it comes to predictability, productivity, quality and responsiveness.  Smaller teams (of 1-3 people) are generally 40% less predictable and have 17% lower quality; however, they are 17% more productive.

Instability in Agile teams sometimes happens for reasons not in the direct control of resource managers; for example, a team member gets promoted, changes positions or leaves the company altogether.  In some instances, a resource manager may also have to move team members due to skills match, team dynamics or performance considerations.

So given these findings, what are the best options for adding resources to an Agile project or release?  Creating a whole new cross-functional team of 5-9 members during the early-to-mid part of the project, along with a Product Owner and a ScrumMaster (presuming you are following Scrum), would be the best option.  This adds capacity without running afoul of Brooks’ Law.

But if the budget doesn’t allow the addition of an entire new Agile team, then the other viable options become a little less ideal.  One option is to add team members to an existing team to make it more cross-functional or to remove a crucial skills constraint, so the team becomes more self-sufficient.  The other is to shore up smaller teams that have fewer than 5 members.

Alternatively, you can level off a larger team and form 2 teams.  Take, for example, additional funding for 4 team members and an original Team A of 9 team members + PO + SM.  Form Team B with 5 team members (3 new team members + 2 from Team A, one of whom can initially serve as SM) and a PO.  The new Team A is then 7 team members + PO + SM.  Since both teams would be within the ideal size range, and assuming you are able to maintain the cross-functional nature of both, you can still reap the benefits of higher throughput, better quality, more predictability and better time to market.  Again, these aren’t the best options, because disturbing existing well-performing teams will inevitably create an initial setback as the new teams re-form and then re-norm.

Ideally, if short-term results are desired, say within a quarter, scope reduction is probably still a better option than disrupting Agile teams.

Recently, an article that I co-wrote with my colleague, Dorothy Murray, on Abnormal Termination of a Sprint was published on the Agile Atlas.

The site is managed and curated by Ron Jeffries and Chet Hendrickson.  Its purpose is to become an encyclopedia of information about Agile and related methods.  Currently, the site is supported by the Scrum Alliance, so it has more Scrum-centric information.  It is organized into Core Scrum, Common Practices and Commentaries; Core Scrum holds the Scrum Alliance-sanctioned description of Scrum.

Our article was published as part of Common Practices: practices considered generally consistent with Scrum, and useful in many cases, but which haven’t risen to Core status yet.

In articulating agile requirements, the best practice is to write user stories.  A user story follows the 3 Cs process: Card, Conversation, and Confirmation.  At the team level, a product owner, who represents the business, owns the story card.  Through the card, the product owner briefly states, from a business perspective, what she wants the delivery team to build.  The card is generally stated in the user story format: As a <user role>, I would like <functionality>, so that I can <achieve a business goal>.

The story card was originally intended to be written on a 4×6 index card.  The requirements in this case are barely sufficient to convey the intent and idea of what needs to be built; the details are left for a later conversation between the team and the Product Owner.  The acceptance criteria were sketched on the back of the card.  The card serves as a reminder for a future conversation.  In traditional software development, passing through stage gates and functional silos meant more emphasis on writing things down; that is how handoffs were done from one stage to another, and from one functional team to the other.

With Agile, we sketch out our idea of the requirements with the assumption that at the onset of the project we don’t have perfect information, and that information and situations on the ground change during implementation.  We also have cross-functional teams, which obviates the need for detailed written specifications.  For example, a cross-functional team with development, QA or test, analyst, and database resources won’t need explicit documentation, since each of these team members participates in all conversations regarding their backlog and projects.  These conversations can occur in formal meetings, such as planning sessions and story grooming sessions, or in informal conversations with the Product Owner and other stakeholders.

So far so good; all of this is now well understood as the basic agile approach to requirements articulation.  But with the use of Agile Lifecycle Management software, Product Owners and teams sometimes have a tendency to forego the brevity imposed by 4×6 index cards.  Soon, if you are not careful, a story card starts resembling a software specifications document.  Similarly, scaling Agile to multiple teams and multiple projects sometimes necessitates that the Product Owner role be further refined.  Leffingwell highlights two distinct sets of responsibilities: a market/customer-facing product manager and a solution/product/technology-facing product owner (Leffingwell, 2010, Loc 4078 of 10384).  At a high level, Leffingwell assigns these responsibilities to each of the distinct roles:

Agile Product Owner:
  • Product/Technology-facing
  • Co-located (may report into development/technology)
  • Focuses on product and implementation technology
  • Owns the implementation
  • Drives the iterations

Agile Product Manager:
  • Market/customer-facing
  • Co-located (may report into marketing/business)
  • Focuses on market segments, portfolio, ROI
  • Owns the Vision and Roadmap
  • Drives the release

With this type of distinction, it generally falls to the Agile Product Manager to own the higher-level Feature and Epic definitions at the program/project level, whereas the Agile Product Owner focuses at the sub-epic, story level with their respective teams.

As the Product Owner responsibility gets split, product management and the teams have a tendency to revert to form and rely heavily on process, and on the ALM software, which then starts resembling the traditional requirements-gathering process.  Product Managers may engage in heavy requirements sessions and consider those to be the be-all and end-all.

But fundamentally, the concept of the 3 Cs doesn’t go away with Agile scaling: the Agile Product Manager can still maintain just enough documentation at the feature and epic levels and then use conversations with the Product Owners to drive out the details, just as a Product Owner does for lower-level epics and stories with their respective teams.  The principle of having better information tomorrow than today still holds.

Mike Cohn makes 3 points regarding writing requirements down in detail and relying on them as a primary communication tool (Cohn, 2009, pg 237):

  1. Written documents make things official; team members will suspend judgment about challenging them.
  2. Written documents tend to discourage everyone from iterating over the intent and meaning, as we do in conversations.
  3. Written documents are instrumental in creating sequential hand-offs, and they tend to decrease whole-team responsibility.

So the key remains in having conversations within the product management team, and then laterally with the agile teams, so that ambiguity can be driven out, intent clarified, and story boundaries defined on a continual basis as new information gets uncovered about a project.

References:

  • Leffingwell, Dean, 2010, Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise, Kindle edition, Addison-Wesley Professional.
  • Cohn, Mike, 2009, Succeeding with Agile: Software Development Using Scrum, Addison-Wesley Professional.

I will be reprising my PNSQC 2012 talk on Cambia’s Transition to agile at Agile PDX on March 1, 2013.  Looking forward to meeting with folks and sharing thoughts on scaling agile in the enterprise.

Event link: Agile PDX March 2013 Event.

Links to PNSQC Paper and Presentation titled On the Way to Meeting a Mandate: Transitioning to Large Scale Agile are located on the PNSQC site:

Abstract

Paper

Slides

Slides + Notes

 

I have written about the Regional Leadership Forum in the past.  This year it is celebrating its 20th anniversary.  A 2011 NorthWest Forum colleague, Tammy Neeley, and I are both featured in the anniversary video.  My segment starts at about the 6:20 minute marker.  In the fast-moving Internet world, Warhol’s dictum that everyone gets their fifteen minutes of fame is now reduced to 15 seconds!

I don’t recall the whole interview or exactly what I talked about, but it is indeed flattering to be included in this video.

 

(image from geograph.org.uk)

In one of my previous posts, titled You Might Not Be Agile If, I outlined some agile anti-patterns.  Recently, I came across a couple more anti-patterns that make good fodder for discussion.  Both are patterns a Scrum Master should be able to avoid or correct.

Anti-Pattern: Only using partial team to size the backlog

 
The first one is a team that used only its developers to size stories.  Is it any wonder, then, that the team found itself in a bind?  Testing tasks turned out to be too large.  Surprised?  The sprint commitments got jeopardized.  And is it surprising that team members ended up putting in extra hours to try and salvage the sprint?  In this particular instance, the reason given was that some team members felt the sprint planning sessions were getting too long, so a select few pre-sized the backlog and tasked out the stories ahead of the planning meeting.
 
Now, there is a reason why we strive to construct cross-functional teams, and why we prefer a consensus-based estimation process such as planning poker.  We want to avoid what some quant-driven companies have started calling HiPPOs hijacking and monopolizing the decision-making process.  HiPPO stands for highly paid person’s opinion or, alternatively, a highly positional person’s opinion.
 
If a team lead, a Scrum Master, a Product Owner, a single team member, or a functional group “HiPPOs” the team, it stunts the team’s self-organization.  You are back to a command-and-control decision-making structure and you miss out on the wisdom of the team: not very agile!  Planning poker, or a similar agile estimation process, involves all team members and expects that all views and perspectives can be expressed, shared and discussed.  Two good things come out of it: you get team members’ commitment, not just compliance; and you get a much better, more realistic view of your work, with an intrinsically motivated agile team that owns the estimate and will do everything in its power to deliver on it.
 
But aren’t agile teams supposed to move fast?  Time-boxes are of course very important, but so is discipline.  If you don’t involve all team members, or if you short-change the consensus-building process, you get uneven results.  What’s more, involving the team allows it to gain tacit knowledge through meetings and conversations, which makes it better at getting things done.
 
At the same time, agile teams are not deliberating, debating bodies.  They exist to produce great working software.  The reason given here was that the team wanted a shorter planning meeting.  So let’s talk about that.
 
The general rule of thumb that has worked for many teams is to allocate 1-2 hours per sprint week for the planning meeting.  Say you have a 3-week sprint; then your team should normally allocate anywhere from 3 to 6 hours of planning.  To make sprint planning go faster, there are a few practices you can follow:
  • The PO has already communicated her priority stories for the upcoming sprint during grooming sessions, which take place during off-planning weeks.  All prioritized stories are well understood and sized by all team members.
  • Most agile planning software allows you to copy a story.  A good time-saving practice is to create a “template” story containing frequently encountered tasks.  As part of sprint planning preparation, the Scrum Master can copy this story and overlay the actual story card and acceptance criteria.  This approach has another benefit: you can embed tasks associated with your story done criteria, which then serve as a good checklist for the team.  For example, if your team forgets the peer code review task on every story, having it as an explicit task in the template serves as a reminder to include it for every story.
By the way, team size matters too when it comes to an efficient sprint planning meeting.  Having a Goldilocks-sized team of 7 (plus or minus 2) allows consensus building to occur without a huge time sink and makes for faster sprint planning.  I recall a team that chose to go with 15 members; their planning meetings used to drag out to almost two full days (14-16 hours) for a two-week sprint!  How productive is that?!
 
Anti-Pattern # 2: Committing to more work than can be delivered.
But is there perhaps a deeper anti-pattern behind this team’s choice of expediency over allocating the “right” amount of time for sprint planning?  Could it be that they are trying to shove in a whole lot more work than they can realistically take on during a sprint?  (No time to waste; can’t you see I have more code to shovel over to production every day, so I will take any and all shortcuts possible, especially if I can avoid having an uncomfortable conversation with my stakeholders!)
 
If that is really the issue, then there are good practices you can follow.  I have found that when doing capacity planning, it is better to start the team’s capacity at 80%.  So, if you have 14 working days (counting 1 sprint day towards the demo, retrospective and planning meetings for a 3-week sprint), each team member will have roughly 90 working hours (14 x 8 x 0.8).  Using 80% capacity means you do not have to account for all the overhead minutiae such as company or team meetings, emails, water cooler chats, birthday celebrations, team members taking time to be helpful, or honing their own skills.  (An aside: this is also another reason why I think it is a bad idea to use agile lifecycle management software for project time tracking.  It distracts from delivering good software and makes accountants out of team members.)
 
Then you subtract any planned PTO time.  Next comes any production support work your team generally has to handle; the percentage varies from team to team.  Using a rolling 3-sprint average, to even out the spikes, works well as a starting point, adjusted to your circumstances.  It is also a good practice to task out 1 or 2 extra stories over and above your team velocity, so that in case you end up using less than your allocated production support capacity, you have a ready story to work on.
 
And finally, subtract any earmarks from your capacity.  For instance, many teams negotiate with their Product Owner and stakeholders to set aside part of their capacity for fixing defects, doing test automation, or code refactoring.
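Putting these steps together (80% focus factor, minus planned PTO, minus production support, minus earmarks), the per-person arithmetic can be sketched as follows; the support percentage and earmark hours here are hypothetical:

```python
def sprint_capacity_hours(working_days, pto_days, support_pct, earmark_hours,
                          hours_per_day=8, focus_factor=0.8):
    """Per-person sprint capacity: 80% of raw hours, minus PTO,
    production support (rolling 3-sprint average), and earmarks."""
    base = (working_days - pto_days) * hours_per_day * focus_factor
    after_support = base * (1 - support_pct)
    return after_support - earmark_hours

# Hypothetical team member: 14 working days, 2 days planned PTO,
# 10% production support, 8 hours earmarked for test automation
print(sprint_capacity_hours(14, 2, 0.10, 8))
```

With no PTO, support, or earmarks the function reproduces the roughly 90 hours (14 x 8 x 0.8) from the paragraph above.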
 
If your team doesn’t account for all these things up front, you are probably operating under the illusion that you are going faster.  In reality, you are probably spending more and more time remediating code and responding to production emergencies.  If you want to read more on this anti-pattern, check out Mark Lawler’s recent post called Sprint Planning Like It’s 1999.
 
You say that if you do all these things, there is no way you are going to converge towards your project goals?  Maybe.  But equipped with data, you can have a difficult but honest conversation with your stakeholders.  Should quality and delighting customers be sacrificed to meet the deadline, or should other creative solutions be explored?  Should we invest some time in building good software now, or pay a whole lot more later?

RLF Interesting Reads April 2012

This is the third in a series of occasional blog entries highlighting interesting reads I have come across that track the original 30-odd books on the reading list for the Regional Leadership Forum in 2011.  (Note: the RLF 2012 list is in large part the same as the 2011 list.)

To start off, two books that explore the global theme of the financial crisis, with its origin on Wall Street and its ramifications across the globe, closer to Zakaria’s The Post-American World:

  • Satyajit Das provides an insider’s perspective on global financing and financiers in Extreme Money: Masters of the Universe and the Cult of Risk.  Though the book sags in the middle and the only real policy recommendation I gleaned from it is to bring Glass-Steagall back, overall it is a fantastic read.  Das was one of the few who warned about the dangers posed by derivatives long before the 2008-2009 credit crisis.  Real insightful stuff.
  • Michael Lewis does riveting storytelling on the financial crisis in his latest book, Boomerang: Travels in the New Third World.  Lewis is the author of Moneyball: The Art of Winning an Unfair Game, which was made into an Oscar-nominated movie starring Brad Pitt (book added to the future-read list).  He is also the author of The Big Short, another book on the financial crisis.  Lewis generalizes and stereotypes whole countries and their people.  But if you can get past that, he presents a sharp and intriguing account of financial disaster and how it rippled through countries like Iceland, Ireland and Greece.  He then turns his gaze on the role of Americans and Germans in this mess.  Blow them bubbles, and welcome to the new third world!

Next on the list are a book and an article about neuroscience, along the lines of Medina’s Brain Rules (with touches of MacKenzie’s Orbiting the Giant Hairball):

  • Jonah Lehrer (a former neuroscience lab assistant and neuroscience writer), in Proust Was a Neuroscientist, writes about the intersection of neuroscience and art.  His thesis is that artists and art arrived at many principles that neuroscience is only now shedding light on.  Lehrer’s treatment of the subject is not as rigorous as Medina’s, and is a bit on the speculative side.  He contends that there are certain limits to reductionism and the scientific method, and that art can possibly provide insights.  He says the two streams have diverged too far, and implores them to continue the dialogue to push at the edge of human knowledge.
  • Jonah Lehrer, writing for The Guardian, examines The neuroscience of Bob Dylan’s genius.  The article starts with the words, “Bob Dylan looks bored,” and then goes on to talk about how he wrote Like a Rolling Stone.  It is an edited extract from his new book, Imagine: How Creativity Works (another book added to the future-read list).
And lastly, to round out the list: Suzanne Collins’ The Hunger Games trilogy.  I’m not sure it resembles any particular book on the RLF reading list, but the set is ubiquitous and got a boost from the release of The Hunger Games movie.  These are good books that the entire family can enjoy and discuss, and something you can finish off during planned time off (as I did during Spring Break PTO).

What’s on your reading list?

Mark Lawler in his post about story points, says he wants to “raise a glass of Champagne to toast the folks who came up with the concepts of ‘story points’.  You know that way of estimating work without actually saying anything useful or making any commitment to your business customers?”

I can certainly see how the whole idea of stories and story points can be confusing, especially to new teams and certainly to business customers.  But I would like to offer a counterpoint: story point estimation is a very useful and efficient technique.

Teams are asked to make a binding estimate, generally at the beginning of a project, when they have the least familiarity with and information about it.  Here is how Scott Adams illustrates this point through Dilbert:
 
 
Let’s look at what a story is.  A story is a bite-sized, digestible “requirement” that can be completed in a time-boxed interval.  It is stated from the perspective of a real user.  A story is just enough information written down, plus a promise between the product owner and the delivery team to have a future conversation about the rest.  Both the product owner and the team understand that some amount of uncertainty is involved.  The product owner is not expected to know every detail of the story up front, and the team doesn’t always know how it is going to produce the work.
The alternative used to be a detailed business requirements document or a product requirements document.  The PM and the business stakeholder then use these documents to obtain an estimate from the delivery team.  Or many times, engineering managers or leads arrive at these estimates, even though they might not actually do the work themselves.  Teams are then held to this estimate, no matter what new information the team learns about themselves and their project as it unfolds.
So how good are humans at estimating in absolute terms?  See if the following conversation sounds familiar to you:
An engineer’s significant other is preparing tacos for dinner.
Significant Other: How many tacos should I prepare for you, 3 or 4?
Propeller-head Engineer:  Well, let me see, I am a little hungry.
Significant Other: So 4?
Propeller-head Engineer:  I am not sure.  I had a smaller breakfast than usual.
Significant Other: So you are really hungry?  Make 5?
Propeller-head Engineer:  But, then I had this big lunch when I went out with my friends.
Significant Other: So then…3?
Propeller-head Engineer:  But, see I then went to the gym in the evening?
Exasperated Significant Other: So you’re hungry again?
Propeller-head engineer:  But, see my workout was really “light”.
Angry Significant Other: Then you are not that hungry?  Darn it, Propeller-head, will you just tell me how many tacos you want to eat?
Propeller-head Engineer:  Tell me, what kind of beans are you using?  Are you using sour cream or guacamole?  How caloric is the salsa?
So when it comes to estimating and committing, we tend to over-analyze and take too much time to arrive at a precise answer.  On the other hand, we are actually good at comparing.  Back to the Significant Other and Propeller-head Engineer:
Significant Other: Damn it, Propeller-head, why do you always have to be so difficult?  Tell me, are you hungrier compared to last Friday, when we had tacos?
Propeller-head Engineer:  Yes I am.
Significant Other: You ate 3 tacos then.  How much hungrier are you compared to then?
Propeller-head Engineer:  I am a little hungrier than that, so let’s go with 4.
Bam, having a reference point helped speed things along (that, or it must have been the wrath of the Significant Other)!  Estimating is relative sizing, so a team compares one story to others and arranges them into generally like-sized groupings.  The story points are just numeric values we assign to these piles.  Three considerations go into sizing a story: effort, complexity and doubt.  So along with effort and complexity, uncertainty is built into these estimates.  What was the alternative in the past?  As other agilists have pointed out, delivery teams were trained in stating precise inaccuracies.  At the inception of a project, when the agile team and the business partners have the least amount of information about their project, if we ask them to be prescient and produce accurate estimates, they will produce estimates, and precise ones at that.  Just not accurate ones!
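The bucketing idea can be sketched in a few lines of Python.  This is a minimal illustration, not any particular tool’s algorithm; the scale values and function names are made up for the example:

```python
# Illustrative sketch of relative sizing: compare a story to a 1-point
# reference story and snap its "feel" to the nearest bucket on the scale,
# rather than agonizing over a falsely precise number.
POINT_SCALE = [1, 2, 3, 5, 8, 13]  # a common Fibonacci-like scale

def size_story(relative_effort, scale=POINT_SCALE):
    """relative_effort: how big this story feels versus the reference story."""
    return min(scale, key=lambda p: abs(p - relative_effort))

print(size_story(6))   # snaps to 5
print(size_story(10))  # snaps to 8
```

The point of the nearest-bucket snap is that the team never debates whether a story is a 6 or a 7 – it only decides which pile the story most resembles.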
 
For example, I was recently reminded, at a training, of the concepts of accuracy and precision.  If I state that the value of Pi is 3.7909430, it is very precise, but it isn’t very accurate.  If I say 3, I am accurate, but not very precise.  When we are doing newer things, we are not always so good at them at first.  To use an analogy, we in the software business are more like chefs, and less like short-order cooks.  Chefs concoct new recipes, so it is hard in the beginning to accurately and precisely figure out how long it will take to come up with a winning one.  A short-order cook, on the other hand, is handed a recipe and has a pretty good idea of how much time it will take to cook it up.  Software engineers, as they become more familiar with their problem domain, can become more accurate as well as precise.
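The accuracy/precision distinction can be made concrete with a toy calculation.  The sample guesses below are invented purely for illustration: accuracy is how close the average guess lands to the true value, while precision is how tightly the guesses cluster.

```python
import math
import statistics

# Five identical guesses: very precise (zero spread) but biased away from pi.
precise_guesses = [3.7909430] * 5
# Five scattered guesses: imprecise, but their mean lands close to pi.
accurate_guesses = [3.0, 3.3, 2.8, 3.2, 2.9]

def bias(samples):
    """Accuracy error: distance of the sample mean from the true value."""
    return abs(statistics.mean(samples) - math.pi)

def spread(samples):
    """Precision: a lower standard deviation means more precise."""
    return statistics.pstdev(samples)

print(round(bias(precise_guesses), 3), round(spread(precise_guesses), 3))    # 0.649 0.0
print(round(bias(accurate_guesses), 3), round(spread(accurate_guesses), 3))  # 0.102 0.185
```

The precise guesser is confidently wrong; the accurate guesser is vaguely right – which is the trade the chef-versus-short-order-cook analogy is getting at.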
Picture a conversation between a PM and a delivery team:
PM: We need to produce an estimate to develop an integration with our business partner.
Delivery Team: Ok, do you have the integration specs? Do you have API Specs from the partner?
PM: No we don’t, but assume that we will communicate via FTP.
Delivery Team: And how many “functions” do you think we are going to need.
PM: Assume about 10 functions.
The delivery team talks amongst themselves.
A Dev Type: Let me look at our existing code for an hour – and I will tell you.  (He returns two hours later.)
Dev Type: Look, we have already done this type of work.  The existing sFTP protocol we developed has about 15 functions, which took us about 6 weeks to develop and test.  So I think we can turn this around in 2 weeks.
A Test Type: And it will take me 2 weeks to test.
Delivery Team to the PM: We can do it in 4 weeks.
PM: Well guys, sorry to break this to you, but we need this within 3.5 weeks, otherwise, we will be off our critical path.
Two weeks later.
PM: Hi guys, the specs came in, our communication is via a web service.
A stunned Delivery Team: But, we haven’t done any web services development, and certainly not with this partner.  What’s more, the infrastructure folks alone are going to need a week to open up the firewall and communicate with our partners.
PM: I know guys, this is tough.  Oh and one other thing – since we got the specs a little later than expected, we will need you to get this done within next 3 weeks.  Sound good?  Thank you guys, you are the best team I have worked with.  I know you will come through.
 
The PM, in the above example, shortened the duration but did not take into consideration the effort involved.  This generally happens when stories are expressed in ideal days – effort and duration get enmeshed.  Or at least, effort gets overlooked, in which case earth days become just as arbitrary as story points.  Comparing one team’s estimate against another’s becomes just as problematic as with story points.
Extending Mark Lawler’s airlines example: say you were given an estimate of a 30-minute delay, both times, but by two different airlines – Southwest Airlines and United Airlines.  Which one would you trust?  Most people would probably pick Southwest, as they have a stellar on-time record.  With United, who knows?  The unit of measurement is the same, but Southwest’s and United’s estimates aren’t comparable, because United doesn’t have a comparable on-time record.
 
With story points, if our customers have another key piece of data – the team’s velocity – they can also start getting better predictability, even when plans change, which we know they inevitably do.  Let’s say, for example, an agile team estimates its release backlog contains stories worth 100 story points.  After sprinting for 3 sprints, they establish their average rolling velocity to be 10 story points per sprint.  Based on this, the product owner knows that it will take another 7 sprints to exhaust the backlog (70 remaining points).  But let’s say the product owner estimates her “must have” story cut line is at 60 points.  She can now see that her team can achieve this with 3 more sprints (30 story points achieved and 30 more expected).  If she thinks of adding 5 more story points of “must have” scope, she can predict that it will take 4 more sprints (65 total must-have story points).  Or if she wants to de-scope 10 must-have story points, she can predict that the team can deliver in 2 more sprints (20 remaining must-have points).  What’s more, she doesn’t need to go back and worry about the fact that Ralph is taking a 3-week vacation or that Sally has a series of 2-hour doctor’s appointments.
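The product owner’s arithmetic is simple enough to express as a small helper.  This is a sketch; the function and variable names are invented for illustration, and the numbers are the ones from the example above:

```python
def sprints_remaining(scope_points, completed_points, velocity):
    """Forecast sprints left, rounding up: a partial sprint still costs a sprint."""
    remaining = scope_points - completed_points
    return -(-remaining // velocity)  # ceiling division

velocity = 10   # average rolling velocity after 3 sprints
completed = 30  # 3 sprints x 10 points

print(sprints_remaining(100, completed, velocity))  # 7: exhaust the full backlog
print(sprints_remaining(60, completed, velocity))   # 3: reach the must-have cut line
print(sprints_remaining(65, completed, velocity))   # 4: after adding 5 must-have points
print(sprints_remaining(50, completed, velocity))   # 2: after de-scoping 10 points
```

Because velocity already averages over vacations, appointments and other interruptions, the forecast doesn’t need to account for them explicitly – which is exactly why the product owner can ignore Ralph’s vacation and Sally’s appointments.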
 
Credit Clark and Vizdos
 
Story points, combined with team velocity, can produce fairly accurate estimates, faster.  More often than not, the contributing factors to getting better at estimating are that a delivery team stays constant, understands its problem domain, establishes a rhythm, and constantly inspects and adapts.  Teams can get better at estimation irrespective of whether they use points, “gummy bears”, earth days or martian days!  But estimating in story points has the advantage of not spending a whole lot of time arriving at precise but inaccurate estimates, and it avoids the trap of confusing effort with duration that comes with estimating in days.
 
And one last thing: just the other day, the cable guy promised he’d be there to repair the cable service some time between 8am and 12pm (not very precise) and showed up at 1:30pm (not very accurate, either)!

Just One More Level

TaeKwonDo Practitioners (© Aashish Vaidya)

As transitions go, shedding old ways and adopting new ones is fraught with doubt and confusion.  This is as true of any large organizational transition effort as it is of scaling agile practices across the enterprise.  One of the challenges for the enterprise transition community leading this change effort is figuring out how to move people with different understandings and different needs from novices to experts.

Many times, a model or a construct of learning helps us classify how to approach people with different levels of understanding and teach them new techniques.  Alistair Cockburn introduced the concept of Shu-Ha-Ri to software development.  Shu-Ha-Ri is borrowed from the martial art practice of Aikido.

Here is how Martin Fowler describes it:

Shu-Ha-Ri is a way of thinking about how you learn a technique. The name comes from Aikido, and Alistair Cockburn introduced it as a way of thinking about learning techniques and methodologies for software development.

The idea is that a person passes through three stages of gaining knowledge:

  • Shu: In this beginning stage the student follows the teachings of one master precisely. He concentrates on how to do the task, without worrying too much about the underlying theory. If there are multiple variations on how to do the task, he concentrates on just the one way his master teaches him.
  • Ha: At this point the student begins to branch out. With the basic practices working he now starts to learn the underlying principles and theory behind the technique. He also starts learning from other masters and integrates that learning into his practice.
  • Ri: Now the student isn’t learning from other people, but from his own practice. He creates his own approaches and adapts what he’s learned to his own particular circumstances.

One of the common refrains you hear from many people who have learned some basics about agile methodologies is, “See, we understand the agile practices, but how are we going to do this differently here?”  The questioners here might presuppose that certain practices just won’t work in their environment and want to start tailoring things right out of the gate – especially when they have learned that agile is all about adaptability and change.  So there is a predisposition to jump to the Ri stage.  But agile practices are supposed to be an experiential process.  You do things, reflect on what worked and what didn’t – in short, you inspect and then you adapt.

But many balk at directive practices in the Shu stage, as they run contrary to the agile manifesto itself.  Here is Rachel Davies, co-author of Agile Coaching, who takes on other agilists in a post called Shu-Ha-Ri Considered Harmful:

I’m uncomfortable with approaches that force students to follow agile practices without questioning. These approaches seem to violate the first value of the Agile Manifesto “Individuals and interactions over processes and tools.” I question whether introducing agile software development techniques to people is anything like martial arts training. Software development is knowledge work and our aim is to build a team of reflective practitioners. To do this we need to engage with how people think about their work. Are techniques from physical arts that build muscle-memory really applicable here?

For me, agile Boot Camps and Shock Therapy approaches lack basic respect for the team’s unique context and the experience of people on the team. Agile software development is a much looser discipline than a martial art like Aikido. Organizational culture and nature of the product being built are major factors in what agile techniques the team will benefit from most. If we establish a sensei-novice model, we’re not fostering the independent thinking and reflection that will take the team beyond the Shu level.

To some extent this is a valid argument.  You have to respect the individuality of the team members and allow them to question the practices they are supposed to be following.  And invariably, boot camps and shock therapy approaches will only have an ephemeral effect, like a motivational speech would.  You are pumped up for a while and get a boost of energy, but the sugar high wears off quickly.

But at another level, this is a very shallow reading of martial arts and the sensei-novice model.  Even a cursory look at the history of martial arts would suggest that they are not merely about physical activity, but a means to develop a deeper connection to moral and spiritual dimensions.  Just as you wouldn’t take the practice of yoga to be only about physical well-being: that is just one aspect, called hatha yoga, but in the larger context yoga has much more to do with “knowledge work” than with building simple muscle memory.  Albeit the knowledge is of a different, not the software development, variety.

The issue isn’t necessarily with the Shu-Ha-Ri construct, but with how it might be used within a given context – it is more about its understanding and its implementation.  Further on in the post, Davies calls out the real peril:

Installing a basic set of agile practices by force can be done quickly so the organization starts getting benefits from new ways of working faster. Teams are superficially at the Shu level in the space of a few weeks. Often, the management team considers the agile rollout is now complete. It’s assumed that teams will continue to apply what they’ve learned. But without any experts around to enforce agile practice, pretty soon a team falls back to their old ways or sometimes worse carries on with agile practices that don’t make sense for their project.

I was pleased to see “cargo-cult agile” called out in the new book “Practices for Scaling Lean & Agile Development” by Craig Larman and Bas Vodde. They say “Avoid forcing–When coaching we encourage: volunteering; do not force any agile or lean approach onto people; people should be left the choice to think and experiment…with concentrated long-term, high quality support. The best, the most sticky adoptions we have seen had this approach.”

In a large organization, you are almost certain to encounter people in all three categories of learning: the complete novice, who just wants to be told what to do next; the intermediate, who knows agile practices well enough to start digging into the deeper underlying theory and principles; and the expert, who is adept at reading the context and tailoring their own practices to continually achieve business goals.  For the enterprise transition community leading the adoption of agile in their enterprise, the goal is obviously to encourage people to think critically, experiment and continuously learn, and to deftly deal with people at all 3 levels.  Anders Ericsson, the cognitive psychologist who developed the popular “ten thousand hours” theory of mastery, has a second prerequisite for expertise – “the notion of deliberate practice, which describes the constant sense of self-evaluation and a consistent focus on one’s weaknesses rather than playing on one’s strengths” (ref Maria Popova’s blog post).  This notion is something the enterprise transition community and agile coaches need to be aware of, and one which our software brethren who design video games understand really well:

[The “zone of proximal development” is] the idea that learning works best when the student tackles something that is just beyond his or her current reach, neither too hard nor too easy. In classroom situations, for example, one team of researchers estimated that it’s best to arrange things so that children succeed roughly 80 percent of the time; more than that, and kids tend to get bored; less, and they tend to get frustrated. The same is surely true of adults, too, which is why video game manufacturers have been known to invest millions in play testing to make sure that the level of challenge always lies in that sweet spot of neither too easy nor too hard.”

The challenge, then, is to figure out a mix of practices that you know the teams will be able to take on, and add that 20% of “stretch” practices, which allow the teams to flex and get to another “level”.  And hopefully soon, they will internalize what hooked gamers feel – they beg for five extra minutes to complete just one more level!

Rachel Davies continues:

Learning new ways of working takes time.  As Ron Jeffries once said “They’re called practices for a reason.  You have to have done them. Practice makes perfect.” If you base an agile adoption on Shu-Ha-Ri model, the trick is to remember the goal is beyond the first-level. Your teams need more than training. Allow plenty of time and on-going coaching support for teams to get them into the Ha phase and beyond.

The constant care and feeding of agile teams will be needed at least until the organization moves through the Ha stage.  After all, you come across many agile teams who have practiced agile for years but are stunted in their growth.  These teams are upper-bounded by their organization’s proficiency in new techniques and inextricably linked to its culture, its inertia and its change aversion, which don’t allow continuous improvement to take place.  So the Shu-Ha-Ri model can still be a useful model, provided the community understands that you have to look beyond the rollout of initial agile training and project kickoffs.

PS: A parting note – though this was much before my time, if you want to see Ri practitioners in action, watch the video of these two cats, who delivered a 90% improvised piece, still based on an underlying musical framework.  They are grounded in the principles and theory of their craft, but their uniquely tailored performance fits the context, and with their virtuosity they transcend the rules to create a masterpiece – in essence, they make their own rules.
