Betsy the cow

January 9th, 2009

As an Agile coach I enjoy the opportunity to learn from people implementing the Scrum framework and incorporating Agile values and principles. In this process I come across interesting stories and metaphors. After an introductory training session with one team and some follow-up coaching during sprint planning, Paul (a team member) shared his impression of the situation the team was put in with this new Scrum and Agile thing:

Back on the farm, if a cow isn’t producing more milk than the cost of the feed she eats, then unless someone really likes scratching her between the ears as a pet, sooner or later she ends up as hamburger.

I must admit that when I first heard it, I felt very uneasy. It is a blunt analogy that gets straight to the primary design benefit of Scrum: maximizing Return on Investment (ROI). If at the end of every sprint the organization does not believe it can get sufficient ROI over the cost of development, then there are “Or else…” scenarios that the organization can pursue. Scrum does not tell you what you should do; it simply brings transparency into the system and provides opportunities for early adjustments. What concerned me was that the team had likened the “or else…” scenario to Betsy ending up on the organization’s menu. At the time I expected the team to get over their initial fears and move forward with a more positive outlook. It was not to be: a sprint later I saw a picture of cow cuts on top of their team task board.

Had this team drifted deeper and darker with Betsy’s supposed fate?

Successful cross-functional teams that I have worked with have a sense of identity that is separate from their job titles, departmental affiliations or other organizational-chart bindings. As a coach, the simplest thing I have done to trigger such a sense of identity is to facilitate scrum teams through an exercise to help them create a team name. Although I floated this idea with the team described above, they did not feel it was necessary. Instead, this team had adopted Betsy as their mascot and coalesced around the notion of saving Betsy from her predicament, sprint after sprint after sprint.

Over the last ten or so two-week sprints, I have had the pleasure of working with this team intermittently, and I have observed the team adopt Betsy into everything they do and innovate along the way.

Before I delve further, some context about this team is due. Each of the team members has at least 15 years of software industry experience. The team works for a well-known insurance provider and does sustainment engineering for all business processing applications. They interact with customer account managers who are transferring large organizational health care insurance benefits to be managed by their company, customer service personnel who serve individual insurance beneficiaries, and operations support that manages changes to production systems. Like many scrum teams, they are in a very complex environment. Their complexity is compounded by the urgency of fixes requested from the business side and by the necessity for their operations and DBA group to ensure the quality of fixes. They are not immune from the challenges associated with organizational silos: the DBA group cannot dedicate any member to the scrum team, so a truly cross-functional scrum team cannot be formed. On the plus side, they have an excellent scrum team, very well supported by their ScrumMaster and ably directed by their Product Owner, who is from the business side of the organization.

So what does it mean to save Betsy within their context? There are many dimensions to addressing this question.

  • From a software delivery perspective, the team commits to delivering software product fixes and enhancements to the production environment every sprint. I remember an interesting conversation where the team was discussing their definition of done. A question arose: should they commit against a definition of done when elements of done-ness, such as production database updates, are beyond the authorized scope of the team members (the DBA group owns production updates)? At the end of their dialogue, everyone in the team agreed that only software that is in use by end-users constitutes valuable software. Although they do not directly manage production updates, they believe it is their responsibility to shepherd updates into the production environment. In effect, every sprint they committed against a definition of done that required product backlog items to be implemented in the production environment. This required them to engage a representative from the DBA group during their daily stand-up and to work within that group’s constraints, such as no production updates on Wednesdays and Fridays. They also built automated test scripts to validate quality prior to review by the DBAs, effectively reducing rework cycles.
  • Engaging sustained participation from the business. Prior to their scrum implementation, customer issues and requests from the business side were sometimes lost in the ether. It was not unusual for the team to come across requests or tickets that were more than a year old. Through regular prioritization by the product owner, the team was able to focus on the highest-priority fixes. The ScrumMaster was very effective at challenging the team to change long-held organizational-cultural habits. One such behavior of their silo-ed environment was communicating ticket status through the bug tracking system. All too often, tickets that were resolved and ready for functional acceptance were not looked at by the business person for closure. This resulted in delayed production updates for several resolved issues. Team members started interacting in face-to-face conversations with business people to better understand their tickets and following up with them to confirm functional acceptance. Such proactive steps drastically reduced the wait times involved.
  • Complete transparency. Here’s a snapshot of their task board:

  • Being the only scrum team in their organization, sustaining healthy relationships within the team and with others who interact with the team is critical. One of their approaches is to manage this through recognition: every sprint, the team recognizes people who have contributed to saving Betsy. A cute little award is given to people within or outside the team for their contribution towards saving Betsy.
  • Addressing root causes of recurring issues. The team routinely analyzes commonly recurring issues and implements fixes that address their root causes.
  • Having fun! Team members flex their creative muscles every sprint by chronicling the events of their sprints in newsprint format themed around Betsy, astutely weaving their sprint happenings with current affairs of the world at large. Here are a few examples. The team ScrumMaster Stan’s favorite is the one where Betsy takes to the skies. The entire success with scrum is really due to his efforts and leadership. He has been very open-minded and very smart to let the team run with Betsy and see how it would play out with the team and the rest of the organization. It’s one thing to float an idea, and quite another to really figure out what is going to work, and what might not be a good idea to fit in with the bigger picture.

This approach to recording sprint facts and events is so innovative that my team used the technique for our own sprint retrospective. During our retrospective we formed pairs, and each pair spent 30 minutes creating its own version of the scrum times. After that, we shared our impressions of the sprint via the newsprints we had created. It was a fun exercise, and the discussions were much richer as each pair accentuated the aspects of the sprint dearest to them.

There are many other things that directly or indirectly relate to saving Betsy. The thing I find most interesting is the emergence of a unique and compelling purpose within this team. It enables them to innovate in all areas: people, process, tools, software delivery. To sum it up in the words of Paul Opryszek:

Betsy was born in rustic King County in mid 2008. Her dairy farmer was Paul Opryszek, who was fortuitously struck by a bolt of Agile lightning that restored Vision as well as Belief in Truth, Beauty, and Resource Allocation sanity.

Experience

Making sense of Best Practices

December 28th, 2008

A Best Practice is a collection of tools, techniques, approaches and methods that, when applied in a prescribed order, delivers desired results effectively and efficiently. “Best” in the term “Best Practice”, to me, implies that there is no further room for improvement, ever! There seems to be an implied sense of finality: why use anything less than the best? There also seems to be an implied sense of universality, wherein applying Best Practices under any circumstances will yield desired results most effectively and efficiently.

These Best Practices have come from well-meaning people who have shared their successes at solving problems in a given domain. People have either intuitively found certain practices to be superior and best suited to solving some problems, or they have iterated over multiple solution paths, solving a problem over and over again and optimizing their approach until Best Practices for their context emerged. This is not to say that there exists a Best Practice for every problem. In the HBR November 2007 article “A Leader’s Framework for Decision Making”, the authors assert that Simple contexts are the domain of Best Practices. As per the article, the characteristics of a Simple context are:

Simple contexts are characterized by stability and clear cause-and-effect relationships that are easily discernible by everyone. Often, the right answer is self-evident and undisputed. In this realm of “known knowns,” decisions are unquestioned because all parties share an understanding. Areas that are little subject to change, such as problems with order processing and fulfillment, usually belong here.

Best practices are suited only within a simple context. Take, for example, the case of stolen or lost credit cards. The banking and credit card industry has faced this problem over and over again. The solution to this problem, for both the credit card holder and the credit card company, has been codified into best practices. This has been possible through simplification of the context: technical improvements in supporting infrastructure and collaboration between various credit agencies simplified their domain. And not the other way around: Best Practices did not enable simplification of the context; rather, because of simplifications in the context, Best Practices emerged. I believe that instead of seeking Best Practice solutions for one’s operational context, one should focus on simplifying the context towards achieving effective and efficient means (practices).

The solutions all are simple…after you have arrived at them. But they’re simple only when you know already what they are.

Zen and the Art of Motorcycle Maintenance - Robert M. Pirsig

My take on why Best Practices are believed to be so great is that Best Practices carry a sense of assurance that they will consistently provide the most effective and efficient results. The desirability of these benefits triggers the transfer of Best Practices from one organization to another. This transfer happens via cross-pollinating agents, most likely consultants.

In his 1976 book “The Selfish Gene”, Richard Dawkins coined the term “meme” as an analogy to the concept of a gene. A meme is any unit of cultural information, such as a practice or idea, that is transmitted verbally or by repeated action from one mind to another. Examples include thoughts, ideas, theories, practices and habits.

Within today’s corporate culture, the notion of Best Practice is a meme.

Richard Dawkins:

“If a meme can get itself successfully copied, it will”.

The Best Practices meme has a strong pull from receivers (demand); after all, who doesn’t want the best? And I would argue that there is an even stronger push from the suppliers (consultants) to sell Best Practice solutions in order to satisfy the receivers’ need.

Richard Dawkins:

“Effective memes will be those that cause high fidelity, long lasting memory,” and not necessarily the ones that are “important or useful.”

The Best Practices meme has high fidelity between receivers and suppliers when it is simple. Memes do not replicate exactly when they get complex. To be memorable, a meme hinges on simplified core elements that stick in the minds of people and are easy to mimic for further replication. Either that, or in the melee of buying and selling services, quick fixes to long-standing problems and the trivialization of complex contexts into simpler ones, people have, unwittingly or purposely, creatively oversold Best Practice solutions. In this process, activities and practices that have no demonstrable track record of betterment have been tagged as “the best”, and effective practices have been stripped of their contextual elements, making them sellable to a wider domain of problem statements.

So, are there any Best Practices? I think not. I am firmly in the camp with many others who have suggested getting rid of the term Best Practice. A few alternatives that I am aware of are “Good Practice”, “Current Thinking” and “Contextual Practice”. For now, I will personally settle for “Good Practice”, since it does not carry the sense of finality and allows room for improvement. Conceptually, I do believe that within simple contexts certain Good Practices can provide effective and efficient results. However, I strongly urge that Good Practices not be blindly accepted for their apparent goodness. All practices need to be tried out at least once within your own context to determine whether a given practice is good for you or not. A good practice, in essence, does not carry with it the assurance that what worked for me will also work for you. I am comfortable accepting practices and judging their “goodness” against my objectives, not benchmarking them against others who apparently have the best implementation of a best practice.

Uncategorized

Velocity

November 24th, 2008

Def: Velocity is the amount of product backlog that a team can fully implement, through product owner acceptance, within a given sprint.

The amount of product backlog in the definition above is often expressed in terms of “story points” or “ideal days”. “Fully implement” implies that the product increment built during the sprint is accepted by the Product Owner and is potentially shippable, or at least meets the definition of done. Also, velocity measurements are made only at the end of every sprint.
Track Record
The purpose of taking velocity measurements is to capture the team-system’s track record at translating product backlog items into acceptable working software. This track record is typically expressed as a total number of story points (velocity). Traditionally, projects have been estimated prior to the start of the project. Estimates, at their best, are educated guesses. Guesses nonetheless, based on assumptions that need validation from reality. In traditional project management, this aspect of validating the assumptions made at the start of the project is severely lacking during project execution. Project management is then reduced to protecting the planned estimates as opposed to achieving desired goals. Iterative delivery of software every sprint, with velocity measured each sprint, negates the need to make large inaccurate estimates for the entire project. Instead, small inaccurate estimates are made for each sprint. And that is a good thing! Velocity works with the fact that estimates are inherently inaccurate (the cone of uncertainty).
Velocity acts as a correction factor.
This is how: say a team estimates (makes an educated guess) that they can fully implement 40 story points of work in a sprint. At the end of the sprint, if only 20 story points are fully implemented and accepted, then the team’s velocity for that sprint is 20. Velocity is informed by reality. Going forward, in the next sprint, if the team is consistent with their estimating technique, then they can reasonably expect to complete about 20 points of work. Disciplined velocity measurement provides the correctional ability to re-estimate the amount of work that can reasonably be expected to be done in the next sprint, thus allowing for reliable commitments for a given sprint.
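The arithmetic above can be sketched in a few lines. This is a hypothetical illustration, not a prescribed tool; the story sizes and lookback window are invented for the example.

```python
# Hedged sketch: velocity as a correction factor for sprint commitments.
# All numbers below are invented for illustration.

def sprint_velocity(accepted_story_points):
    """Velocity = sum of points for stories fully implemented and accepted."""
    return sum(accepted_story_points)

def next_sprint_forecast(velocities, lookback=3):
    """Forecast next sprint's capacity from recently observed velocities."""
    recent = velocities[-lookback:]
    return sum(recent) / len(recent)

# The team planned 40 points, but only these stories were accepted:
velocity = sprint_velocity([8, 5, 5, 2])   # 20 points
# Reality corrects the plan: about 20 points is a reasonable next commitment.
forecast = next_sprint_forecast([20])      # 20.0
```

Note that the forecast is only as good as the consistency of the team's relative estimating; the function simply makes the "informed by reality" step explicit.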
Velocity and commitment.
It is important to note that velocity does not imply commitment. Most of us understand this, yet we behave at odds with our understanding. Too often I have observed product owners and other managers demand that a team commit to 20 points of work this sprint because their velocity the previous sprint, or their average velocity, was 20 points. Velocity is a tool for making reliable commitments, not a substitute for team judgment in making those commitments. Velocity does not imply commitment for the upcoming sprint, and it definitely does not imply commitment over the next several sprints.
Peering into the future:
The future cannot be predicted. However, one can argue that a project release date based on velocity measurements is many times more likely to hold than a date arrived at from pure guesswork. I am not aware of a scientific study that proves my assertion, but I am confident that the probability of my assertion being true is greater than the probability of it being wrong.
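To make the claim concrete, here is a minimal sketch of a velocity-based release projection. It brackets the answer using the best and worst recent velocities; the backlog size and velocity figures are invented, and a real projection should be refreshed every sprint as the backlog changes.

```python
import math

def sprints_remaining(backlog_points, observed_velocities):
    """Bracket the number of sprints left using best/worst observed velocity.

    Returns (optimistic, pessimistic) sprint counts. Purely illustrative:
    it assumes a fixed backlog and a team consistent with past sprints.
    """
    best = max(observed_velocities)
    worst = min(observed_velocities)
    return math.ceil(backlog_points / best), math.ceil(backlog_points / worst)

# 120 points of backlog remain; recent velocities were 20, 25 and 30:
optimistic, pessimistic = sprints_remaining(120, [20, 25, 30])
# -> somewhere between 4 and 6 sprints
```

A range like "4 to 6 sprints" communicates the uncertainty honestly, where a single pre-project date pretends it away.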
Velocity directly depends on:
Reliable velocity measurements are based on a consistent sprint length, the same team members, a similar product domain, similar product technology and consistent relative estimating. These are the direct cause-and-effect links with velocity. Changes to any of these factors make velocity unreliable. There are numerous other indirect factors which affect velocity. Understanding these requires understanding the relation between velocity and the team.
Velocity and team:
The most common misconception is that velocity is an attribute of a team. This is understandable, since changes to team members directly impact velocity. This cause-and-effect link is frequently yanked: team members are changed, velocity is impacted, and our belief that velocity is an attribute of the team is reinforced. In fact, velocity is an attribute of the system, which includes both the team and the organizational environment surrounding it. One example of the organizational environment impacting velocity is the dramatic increase in velocity observed in teams after they are collocated. Various other organizational factors affect the velocity of a team system; organizational culture and management’s response to team impediments are the biggest contributors to velocity improvements.
Velocity and team productivity:

Velocity is often misused to express team productivity. There are two common ways of misusing velocity this way:

  • Misuse 1: Velocity is used to comparatively express the productivity of one team over another, as when Team A is deemed more productive than Team B because Team A’s velocity in story points is greater than Team B’s.
  • Misuse 2: Velocity from previous sprints is used to express relative gains in productivity. In other words, if a team’s velocity in sprint 1 was 10 and in sprint 4 it is 20, it is incorrect to state that the team doubled its productivity in sprint 4. Let me share an example from a real team. This team had a consistent velocity in the range of 70-80 story points. In one of their sprints, the team created an automation script that made testing data inconsistencies a breeze. Stories that were initially estimated at 8 were now being estimated at 2. The level of effort involved with these stories decreased dramatically, and so did their relative estimates. Their total velocity, however, remained at 70-80 story points. You will agree that the automation script did improve team productivity; it did not, however, change overall velocity. This is one example of why velocity is a bad measure of productivity. For an excellent article on productivity, see Martin Fowler’s article on the subject.
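The automation-script story can be shown numerically. The story counts and sizes below are invented to illustrate how re-estimation absorbs a genuine productivity gain, leaving velocity flat.

```python
# Invented numbers: why a real productivity gain need not move velocity.

def velocity(story_points):
    """Sum of relative estimates for the stories accepted in a sprint."""
    return sum(story_points)

# Before the automation script: data-consistency stories estimated at 8.
before = velocity([8] * 10)              # 10 stories -> 80 points

# After: the same kind of story re-estimated at 2, so the team pulls in
# more of them plus other 5-point work, landing in the same point range.
after = velocity([2] * 15 + [5] * 10)    # 25 stories -> 80 points

# Velocity is unchanged (80 vs 80) even though story throughput more than
# doubled -- velocity tracks relative backlog size, not productivity.
```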

Tools & Artifacts

Barber Shop - Product Backlog grooming

October 20th, 2008

Most of us have to pay a visit to the scissor man or lady every couple of months. Those who don’t have to, or choose not to: I envy you. As a kid, my visits to the barber shop were a scary ritual. The thought of someone using scissors, clippers and other sharp, pointed tools a few millimeters from my scalp and ears was terrifying. After surviving many close calls with sharp objects, I was fairly certain that the worst that could happen would be a couple of cuts, minor scrapes and a hideous hair style. Over the years, what gave me the courage to keep going to our neighborhood barber shop was our barber’s technique and skill and the relaxed, friendly conversation that always ensued at his place. (That, and my mom, and lately my wife :)

I have not completely overcome my fear of visiting barbershops yet. There is always the possibility of getting bruised or a bad haircut. However, I find it reassuring that it is in the nature of my fur to grow back and warrant another shot at a presentable appearance. Scrum teams and POs that appreciate this emergent characteristic of the product backlog find themselves engaging in healthy dialogue during backlog grooming sessions. As a coach helping a product owner and team groom their backlog, I seek to use tools and techniques that foster collaboration, allowing them to acknowledge the emergent nature of product backlog items. I have often found myself playing the role of that friendly neighborhood barber, armed and ready with agile tools to help product owners and teams groom their product backlog.

Collection of Techniques

  1. User Story format: (As a [type of user] I want [some goal] so that [some reason])
  2. Three C’s (Card, Conversation and Confirmation)
  3. INVEST model
  4. Special story types - Research, Spike & Tracer bullet

Collection of Tools

  1. Index cards or Sticky Pads (lots of them)
  2. Sticky dots
  3. Sharpies
  4. Poker Planning cards
  5. Whiteboard/Flipcharts.
  6. Scissors

These are some tools and techniques that I find myself applying most frequently. The list above is a basic toolkit. (Good barbers always have a secret stash of innovative, experimental contraptions, should the customer feel adventurous.)

The application of tools and techniques during product backlog grooming is highly contextual. It largely depends on the nature of the product backlog prior to the grooming session, the comfort level of the product owner and team with grooming techniques, and other external factors that indirectly influence the grooming session.

A well functioning agile team grooms its product backlog, at least once, every sprint to build a professional product that sports stylish curls with hints of highlighting.

Product Backlog

My excuse

October 20th, 2008

My wife and I are blessed with a beautiful baby girl. She came into our world in the month of August. Since then, blogging has taken a back seat. Over this period I have felt stretched for time to do any meaningful writing. It was my assumption that blogging would be fairly straightforward: a couple of hours each week, and I would be ready to churn out a post. For me, it was not to be so simple.

When I started back in July, I was not sure why I wanted to blog. After this hiatus, I believe I have come to terms with my purpose behind this effort.

My intention with this blog is to journal my activities and thoughts as an agile coach. I hope others benefit from it. Most important for me is to be able to look back at my writings at a future date and reflect on how I have evolved over time. For the last couple of years I captured similar artifacts (notes, pictures, learnings) in physical notebooks, Word docs and various other formats, which to a large extent are now beyond recognition. The thought that my posts may be read by other people forces me to strive for clarity and helps me be a better communicator. I believe that if at least one other person understands my posts, then there is a better chance that I, in the future, will understand them myself.

I have now abandoned my quest to be a prolific blogger. I am comfortable with the notion that my posts will be few and far between. There, I said it! I hope to make them good enough for the future me.

Uncategorized

Sharing Values, a team building exercise

July 15th, 2008

A few months ago, I was facilitating a sprint retrospective for a multi-cultural team that had members of different national origins (UK, South Africa, Angola and the United States). We started gathering information regarding their first sprint, and I realized that this team was struggling to simply get along! We took a step back and elicited a goal to focus our retrospective.

During the course of the retrospective, team members discussed how they could improve trust, respect and communication within the team. The fact that the team had individuals from different cultural backgrounds was not missed. The crowning moment of the retrospective came when one of the team members said:

“In my culture, we have to be friends, for me to do good work. In the western culture, I have to do good work, for us to be friends.”

These words have echoed in my head for a very long time. His insight has been a great learning experience for me. I have learned to acknowledge and appreciate differences in individual values. But what are these values?

Exercise: Sharing Values

Step I: Identify a pair (optional)

Ask your team members to identify someone within the team that they are comfortable with. Ensure all pairs have been identified; if there is an odd number of people, the facilitator can pair with the lone individual. Ensure that everyone has some index cards and a sharpie.

Step II: I don’t like it …

On a single index card, ask each team member to complete this statement:

“I don’t like it when someone/people …… “

Encourage each team member to write down 2-5 such statements on separate index cards.

I have found that it is easier for us to identify behaviors that we don’t like, especially when we have been at the receiving end.

Step III: Exchange Cards

After everyone is done writing, exchange all your cards with your partner.

Variation: If you have opted not to do Step I, then place all the cards in a basket or hat and have everyone randomly pick cards from it. If you get a card that is yours, place it back. Ensure all cards have been distributed.

Step IV: I like it …

On the back of each index card, write down a statement that counters your partner’s “I don’t like it …” statement with:

“I like it when someone/people….”

You will be amazed how your team member’s insight into your hot-button issue helps you recognize behavior that you will truly appreciate!

Step V: Share Values

Go around the table where each team member reads aloud a statement that begins with “I like it when …”. Take turns reading one statement per team member at a time until all statements are exhausted.

These are your team’s value statements. These statements provide a simple list of positive behaviors that are currently valued in your team.

Caution: As a facilitator/ScrumMaster, refrain from vocalizing these statements yourself. I believe it is very important for everyone in the team to hear these positive behavioral statements from their peers.

Step VI: Team Values Chart

On a Big Visible Chart, capture only the statements that begin with “I like it when …”. Radiate this information in your team area for the benefit of your team members and others who interact with your team.

This exercise takes less than 30 minutes. Try it again after a couple of months and see how far your team’s values have evolved. As a manager/ScrumMaster/team member, if you feel tempted to dictate good behaviors to your team, take a deep breath and try this exercise with them instead. Maybe, just maybe, your team will self-organize to correct its own behavior.

P.S.: I suggest destroying the index cards used for this exercise.

Facilitation Exercises

What is Definition of Done (DoD)?

July 8th, 2008

DoD is a collection of valuable deliverables required to produce software.

Deliverables that add verifiable, demonstrable value to the product are part of the definition of done: writing code, code comments, unit testing, integration testing, release notes, design documents, etc. The definition of done helps frame our thinking to identify the deliverables a team has to complete in order to build software. Focusing on value-added steps allows the team to eliminate wasteful activities that complicate software development efforts. It is a simple list of valuable deliverables.

DoD is the primary reporting mechanism for team members.

My favorite agile manifesto value is “Individuals and interactions over processes and tools”. Would it not be effective reporting to simply say, “Feature’s done”? The DoD is a simple artifact that adds clarity to the “Feature’s done” statement. A feature or Product Backlog Item is either done or it is not done. Using the DoD as a reference for this conversation, a team member can effectively update other team members and the product owner. Kindly note that by primary reporting mechanism I do not mean that the DoD is the only reporting mechanism used.

DoD is informed by reality.

The Scrum framework sets a very high bar of delivering “Potentially Shippable Software” at the end of every sprint. To me, potentially shippable software is a feature (or features) waiting on the product owner’s discretion to be released to end-users. Teams that are able to release to end-users within a maximum of two days can reasonably be said to have their product in a potentially shippable state. For such teams: Potentially Shippable = Definition of Done.

For other teams working to achieve a potentially shippable state, the DoD contains only a subset of the deliverables necessary to release to end users. Such teams have a DoD at various levels:

§ Definition of Done for a Feature (Story or Product Backlog Item)

§ Definition of Done for a Sprint (Collection of features developed within a sprint)

§ Definition of Done for a Release (Potentially shippable state)

There are various factors which influence whether a given activity belongs in the DoD for a feature, for a sprint or for a release.

The most important factor is for the team to realistically answer:

Can we do this activity for each feature? If not, then

Can we do this activity for each sprint? If not, then

We have to do this activity for our release!
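The cascade above can be phrased as a tiny decision function. This is a sketch under assumed inputs; the activity names and capability flags are hypothetical, not from any particular team.

```python
# Hypothetical sketch of the DoD-level cascade described above.

def dod_level(doable_per_feature, doable_per_sprint):
    """Place an activity at the tightest DoD level the team can sustain."""
    if doable_per_feature:
        return "feature"
    if doable_per_sprint:
        return "sprint"
    return "release"

# Assumed capabilities for three illustrative activities:
levels = {
    "unit tests":            dod_level(True, True),    # "feature"
    "full regression suite": dod_level(False, True),   # "sprint"
    "production DB update":  dod_level(False, False),  # "release"
}
```

The point of the cascade is the ordering: an activity falls to a looser level only after the team honestly concludes it cannot sustain it at the tighter one.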

For activities that cannot be included in the DoD for a sprint or feature: “Discuss all of the obstacles which stop them from delivering this each iteration/sprint” (Building a Definition of Done).

Some of the common root causes for impediments that I have observed:

a. The team does not have the skill set to incorporate activities into the definition of done for a sprint or a feature.

b. The team does not have the right set of tools (examples: a continuous integration environment, automated builds, servers, etc.).

c. Team members are executing their sprint in mini-waterfalls. Aha! An opportunity to be more cross-functional, sharing responsibilities across functional silos.

DoD is not static

The DoD changes over time. Organizational support and the team’s ability to remove impediments enable the inclusion of more activities into the DoD for a feature or sprint.

DoD is an auditable checklist.

Task breakdown for a feature/story happens during sprint planning and also within a sprint. The DoD is used to validate whether all major tasks are accounted for (hours remaining). Also, after a feature or a sprint is done, the DoD is used as a checklist to verify that all necessary value-added activities were completed. It is important to note that the generic nature of the definition of done has some limitations: since the definition of done is intended to be a comprehensive checklist, not all value-added activities will be applicable to every feature. The team has to consciously decide on the applicability of value-added activities for each feature. For example, following user experience guidelines is not applicable to a feature that provides an integration point (e.g., a web service) to another system; however, other features within the system that interface with a human being do require the user experience guidelines to be followed.
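As a sketch of the auditable-checklist idea: the checklist items and the web-service feature below are invented for illustration, not taken from any real team's DoD.

```python
# Hypothetical DoD checklist audit; item names are invented.

FEATURE_DOD = [
    "code complete",
    "unit tests pass",
    "integration tests pass",
    "UX guidelines followed",    # generic item; not applicable everywhere
    "release notes updated",
]

def audit(completed, not_applicable=frozenset()):
    """Return DoD items still outstanding for a feature.

    The team consciously marks items that do not apply, e.g. UX guidelines
    for a web-service integration point with no human interface.
    """
    return [item for item in FEATURE_DOD
            if item not in completed and item not in not_applicable]

# A web-service feature: no human interface, so UX guidelines don't apply.
outstanding = audit(
    completed={"code complete", "unit tests pass", "integration tests pass"},
    not_applicable={"UX guidelines followed"},
)
# outstanding -> ["release notes updated"]
```

Keeping the "not applicable" decision explicit, rather than silently skipping items, preserves the audit trail the checklist exists to provide.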

Summary:

The definition of done is orthogonal to user acceptance criteria (functional acceptance) for a feature. It is a comprehensive collection of necessary value-added deliverables that assert the quality of a feature, not the functionality of that feature. The definition of done is informed by reality: it captures activities that the team can realistically commit to completing at each level (feature, sprint, release).

Tools & Artifacts