
Writing Good PI Objectives - pt4

The fourth in a short series of articles on crafting effective, well-formed objectives as part of the SAFe® Program Increment (PI) / Big Room Planning activity.

The series will cover:

  1. Why do we need PI Objectives when we have Features?
  2. Writing good PI Objectives
  3. PI Objectives and the PI Planning Process
  4. PI Objectives Beyond PI Planning: Reaffirming and Monitoring Your Commitments

4.1 Abstract

The previous articles have focused on the evolution of good PI Objectives; in this final article we look at how they are utilised during execution of the PI.

We will follow the sequence of activities undertaken by a team during the PI, starting with the role the PI Objectives play in Iteration Planning and concluding with how they help to close the PI as part of the program-level Inspect and Adapt.

4.2 Team Alignment / Iteration Planning

During the PI the teams work in a series of time-boxed iterations. Regardless of whether the team choose to base their team process on Scrum or Kanban, they should summarise their objectives for the iteration as a set of committed Iteration Goals; these are a more generalized version of what Scrum defines as Sprint Goals.

The PI Objectives provide context, and an obvious source, for the team’s Iteration Goals. Typically, a team will take one or more of their PI Objectives to directly act as their Iteration Goals. This gives the team a focus and helps steer them towards selecting the set of stories that will deliver the desired value. Focusing on just one or two of the PI Objectives at a time also helps keep Work-In-Process limits low.

During the Iteration Planning process, the scores assigned to the PI Objectives can be used by the team to help make sensible decisions. It is important to remember that the score assigned to a PI Objective is a relative summary of its business value; it is one of the inputs used by the team when deciding what to work on. To plan the iteration the team also needs to consider many other factors such as effort, risk, architectural impact, and the availability of people and resources. The team’s purpose is to optimize the value delivered, and it may make more sense to deliver a number of smaller, less-valuable items than to attempt a single very large, time-consuming item of equivalent or higher value.

In some cases, a PI Objective will take more than a single iteration to achieve; in this case the Iteration Goal should show some measurable progress towards the objective.

The advice in the earlier blogs on writing good PI Objectives also applies to writing good Iteration Goals. If a PI Objective is being used directly as one of the Iteration Goals, then it is already well-formed and can be used as is.

4.3 Continuous Management Information / Iteration Review

The PI Objectives provide an excellent way of communicating what’s going on to upper management:

  • they’re written in human readable language
  • management have seen them before – they are what was accepted as part of the Final Plan Review
  • they focus on value
  • they represent the tangible commitments made by the team(s)

The best things about them are that they 1) allow us to see what has been achieved, 2) enable us to keep the Business Owners engaged throughout the PI, and 3) allow us to regularly reaffirm the team’s commitment.

Working with a number of our customers we have developed a very simple “commitment tracker” for use by the individual teams. This is usually implemented as part of each team’s public wiki pages.

It involves the publication of each team’s PI Objectives and their tracking across the Iterations of the PI. At the end of PI Planning it would look like Figure 1:

Figure 1: The Initial Commitment Tracker


Here the objectives are listed in the first column – usually these would be the full objective with clear measurable results but here just a tag-line is shown to keep the illustration simple. The second column shows whether the objectives were stretch objectives or fully committed ‘core’ objectives.

The final two columns show 1) the planned Business Value and 2) the actual Business Value achieved. We’ll discuss the scoring of the objectives in more detail in a later section. As Figure 1 shows the table at the end of PI Planning, there are no actual scores to show yet.

The middle six columns are used to show the team’s original and on-going commitments using a simple colour code.

  • No colour – Not Started
  • Green – Committed: the team fully expects to achieve the objective
  • Orange – Planned: the team believes they will be able to achieve this objective (if everything goes to plan), but it is at risk and may not happen
  • Red – Not Happening: the team is no longer committed to the achievement of this objective within the PI
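The tracker’s states and per-objective columns are simple enough to sketch as a small data model. The following Python is purely illustrative; the class and field names are ours, not part of SAFe® or any particular tool:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Commitment(Enum):
    NOT_STARTED = "no colour"
    COMMITTED = "green"        # the team fully expects to achieve the objective
    PLANNED = "orange"         # achievable if all goes to plan, but at risk
    NOT_HAPPENING = "red"      # no longer committed within this PI

@dataclass
class Objective:
    name: str
    stretch: bool                        # stretch vs fully committed 'core'
    planned_value: int                   # Business Value agreed at PI Planning
    commitment: Commitment = Commitment.NOT_STARTED
    achieved: bool = False               # the "thumbs-up" adornment
    actual_value: Optional[int] = None   # scored by the Business Owners later

# A team's tracker is simply its published list of objectives.
tracker = [
    Objective("Speed up search", stretch=False, planned_value=10),
    Objective("Pilot new billing flow", stretch=True, planned_value=8),
]
tracker[0].commitment = Commitment.COMMITTED
```

Updating the tracker at each Iteration Review is then just a matter of changing the `commitment`, `achieved` and (eventually) `actual_value` fields.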

As seen in Figure 1, it is very easy to see which of the objectives the team has fully committed to. Figure 2 shows the same team’s tracker at the end of Iteration 2.

Figure 2: The Commitment Tracker at the end of iteration 2


Here an additional adornment has been used to show when an objective has been achieved and the team has therefore completed its work on it. In this case we have used a thumbs-up, but a simple tick would work just as well.

Now we can clearly see how the team is progressing and how its plans are changing based on what has been learnt from the first two iterations. The team has completed two of its objectives, has realised that the first of the stretch objectives will not be completed during this PI, and that the fourth objective is now at risk. Figure 3 shows the tracker at the end of the PI.

Figure 3: The Commitment Tracker at the end of the PI


The use of the ‘commitment tracker’ clearly communicates to the Business Owners, and everyone else involved, how the team is progressing, providing a great basis for value-focused conversations and decision making.

Teams indicate the state of their own objectives, which can in turn be aggregated to the Program Objectives. Typically, all teams need to have completed their contributing objectives for an aggregate objective to be marked as complete. If a team changes its level of commitment on an objective that contributes to an aggregate objective, then this can be reflected in the commitment shown on the shared objective.
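The roll-up rule described above can be sketched in a few lines. This is a hypothetical illustration, assuming the three commitment levels from the tracker, ordered strongest to weakest:

```python
def aggregate_complete(contributions_done):
    """An aggregate Program Objective is complete only when every
    contributing team objective has been completed."""
    return all(contributions_done)

# Levels ordered strongest-first; the shared objective can only be as
# committed as its weakest contributor.
LEVELS = ["committed", "planned", "not happening"]

def aggregate_commitment(levels):
    return max(levels, key=LEVELS.index)
```

For example, `aggregate_commitment(["committed", "planned", "committed"])` returns `"planned"`: one contributing team downgrading to Planned puts the whole shared objective at risk.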

The commitment tracker is completed as part of a team’s Iteration Review. This reaffirms the team members’ commitment, makes sure they don’t take their eye off the ball and forget the things they committed to in PI Planning, and helps prepare them for the next Iteration or PI Planning meeting.

This information is typically broadcast within the program, upwards to business owners, and across to other teams and trains. If anyone wants more regular updates, or a finer granularity of information than this broadcast provides, then they need to start investing time and effort to attend the team level events and to learn how to utilise the team level tooling.

Useful Trick:
You can also use the tracker to indicate when PI Objectives are pulled to form Iteration Goals and which objectives are being worked on. By adding an arrow symbol to indicate when a PI Objective is being focused on, and a stop sign to show when something is blocked, we can see when work is planned and started on a specific objective. This does require the tracker to be updated as a result of Iteration Planning as well as the Iteration Review.
Figure 4 shows our example extended in this way to show the results of the Iteration Planning for the first and third iterations as well as at the end of the PI.
Figure 4: The Commitment Tracker with progress and blocked added


With the application of a little more discipline around the planning of the objectives, it’s very easy to spot infractions of our Agile Principles; namely SAFe® Principle #6: Visualise and Limit Work-In-Process. If the team plans to start working on all the objectives in the first Iteration then something is wrong.

We have found that using the PI Objectives in this way, as an integral part of the iterative process, helps keep everyone focused, increases business transparency, enables increased business engagement, and ultimately helps fight the entropy that leads to teams becoming un-thinking Feature or Story factories rather than value delivery teams.

4.4 Handling Change / Backlog Refinement

Change is inevitable and Agile Manifesto Principle #2 is “Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.”

As change happens and new work comes in, other work will need to be removed to make space for it. If the new work is large and not related to the existing objectives, then new objectives will need to be put in place and existing objectives down-graded from Committed to Planned, or from Planned to Not Happening. We are dealing with a fixed capacity: we cannot add new objectives to the train without removing some of the existing ones.

Obviously, any new objectives will need to be scored and shown to have a greater urgency and value-to-effort ratio than the ones they are replacing. Remember, you cannot prioritize based on value alone, so, just as with Features, it is not always the most valuable objectives that make the cut.

Any new objectives will be treated as stretch objectives when being added to the table, even if they are replacing one of the team’s original core objectives. We need to retain the initial baseline set of objectives to support the predictability measure at the end of the PI.

Not all new Features and Stories requested will require new objectives. As seen earlier in the series, we will have objectives related to maintenance and support, many of which will not have specific sets of Features or Stories identified up front. The Product Management team will be constantly refining the Program and Team Backlogs based on the current PI Objectives and the team’s focus. The first questions to ask when receiving any new Feature or Story request are “What does this mean for the current PI? How does it affect our current PI Objectives?”. Once again, having the information on the state of the PI Objectives, in the form of the ‘commitment tracker’, is incredibly helpful when managing the backlogs.

Useful Trick:
Whilst we welcome change, too much change is indicative of other problems within our system; namely that the Content Authority (the business) can’t work out what it wants. Whilst we can show that we are still predictable against the evolving plan, a “churn” metric illustrating how much change occurred during the PI is often a useful indicator to reflect back towards the Content Authority. If the churn rate is too high, then the Stakeholders and Business Owners that suggest work need to stop and reflect on why they can’t stabilise their demands for a PI.
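The article doesn’t prescribe a formula for churn, so the definition below is an assumption: one simple option is the fraction of the originally committed Business Value that was affected by mid-PI change (objectives added plus objectives dropped):

```python
def churn_rate(original_committed_value, added_value, dropped_value):
    """One possible churn definition: value added mid-PI plus value
    dropped mid-PI, as a fraction of the value committed at PI Planning."""
    return (added_value + dropped_value) / original_committed_value

# e.g. 50 points committed at PI Planning, 10 added and 8 dropped mid-PI
rate = churn_rate(50, added_value=10, dropped_value=8)  # 0.36, i.e. 36% churn
```

Whatever definition you pick, the point is to track it consistently PI over PI so that a rising trend can be reflected back to the Content Authority.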

4.5 Closing The Loop / Scoring The Objectives

The Objectives and the scores attached to them during PI Planning are used to close the loop on commitment at the end of the Program Increment. This serves two purposes; it acts as a forcing function to generate more conversations between the Business Owners and the Agile Teams and it generates a predictability metric. We’ll cover the Predictability Measure in the next section.

It’s at this point that SAFe®’s phrase “Business Value” can be potentially troublesome. What we don’t want is the Business Owners to come along and say, “thanks for delivering all that we requested; now that we’ve seen it we realise that it’s not valuable; therefore we’re going to score this as 0.” If the team delivered then the team should get the full score. What we are looking at here is whether the teams achieved what they set out to do NOT how good the Business Owners are at ‘guestimating’ the value of the individual objectives. We also want to avoid the unpleasant situation where teams try to up the value of the things they’ve done to cover up for the things that they haven’t done.

It is also important not to revise / re-work the originally assigned value as we want to use the numbers to drive a predictability measure – a measure that won’t work if we fiddle with the inputs.

In the example above (Figures 1 to 4) you may be wondering “Why are the objectives not being scored as and when they are completed?” Well, this is a sad reflection on the lack of involvement from most Business Owners. In an ideal world the Business Owners would be actively involved throughout the PI and could score any objectives completed in an iteration as part of that iteration’s System Demonstration.

Useful Trick:
Don’t organize your system demos around teams or Features – organize them around the shared and individual team objectives. Encourage your Business Owners to attend and score any completed objectives as part of the demonstration. This has many benefits, including increasing Business Owner engagement, more relevant demos and, most importantly, keeping everyone focused on achieving their objectives rather than becoming un-thinking Feature or Story factories.

If the Business Owners are not engaged enough to ‘accept’ the objectives in this way, then have Product Management accept the objectives as they are achieved, and the Business Owners score them after the PI System Demo as part of the Inspect and Adapt.

Having seen the appropriate System Demonstration, the Business Owners discuss with the team how much of each objective was delivered; this should be a two-way discussion. Teams often have as much of an opinion on how much they achieved as the Business Owners do; they were doing the work, after all, so they know how much they’ve done. This discussion also facilitates knowledge transfer: the engineering staff learn what was important to the Business Owners and why, and the Business Owners start to learn what challenges the teams are facing so that they can prioritise future Features and Enablers accordingly. And everyone learns how to write better objectives next time – if there is a lot of debate over whether or not an objective was achieved, then it probably wasn’t very well formed in the first place.

Sometimes the PI Objectives are black or white: there or not-there. In this case, if the objective was achieved it gets the full score that was given to it in PI Planning; otherwise it gets 0. In other cases there is more of a sliding scale, and the team can be given some credit even if all the desired results were not achieved. Take, for example, an objective to speed something up by 50%. This would make a good measurable objective but may be very difficult to achieve, with the team perhaps only achieving an improvement of 30%. It is up to the Business Owners to assess the value of this increase in speed; it is less than they hoped for but still of significant value. If the original score was 10, the Business Owners may well give the team 8 out of 10 in this case. It is up to them to judge the value delivered and score the objective appropriately.
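The two scoring styles can be sketched as follows. Note that the proportional helper is only a mechanical starting point, not part of SAFe®: as the worked example shows, the Business Owners awarded 8/10 where a purely linear scale would give 6/10, and their judgement of value delivered always takes precedence.

```python
def binary_score(planned_value, achieved):
    """All-or-nothing objectives: full score if achieved, otherwise 0."""
    return planned_value if achieved else 0

def proportional_score(planned_value, target, actual):
    """A possible sliding scale: credit in proportion to the result
    achieved, capped at the full planned value."""
    return round(planned_value * min(actual / target, 1.0))

binary_score(10, achieved=False)                  # 0
proportional_score(10, target=0.50, actual=0.30)  # 6 on a linear scale;
                                                  # the Business Owners gave 8
```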

We should also be careful not to overthink or overly quantify what we mean by value. As discussed in the previous article, this is a score that reflects the relative importance of a team’s objectives to the Business Owners at the time the plans were made. It is not an absolute value and is not comparable between teams. And, as noted above, if the team delivered what was asked then the team should get the full score, even if the Business Owners have since revised their opinion of its worth.

Any new objectives should be scored in exactly the same way as the original objectives. This is why it is so important that the Business Owners are involved in the change process and agree the value of any new objectives before the team commits to them. Any objectives that were abandoned because their work was moved out to make room for new objectives will score 0.

What happens if the team comes up with, and delivers on, some of their own objectives during the PI without involving the Business Owners? Well, in this case they provide us with a fantastic learning opportunity. These objectives should definitely be presented to the Business Owners for scoring as part of closing out the PI. Our goal is full transparency, and it is always interesting to see how the involved parties react. ☺

4.6 Closing The Loop / The SAFe® Predictability Measure

The number one metric referenced in all the SAFe® training material is the Program Predictability measure (https://www.scaledagileframework.com/metrics/#P2). This is the only measure that gets specifically called out and illustrated in the Leading SAFe and Implementing SAFe courses.

The reason that it is such an important measure is that predictability is an essential precursor to any experiment-based approach to improvement. If the system is unpredictable, how will you be able to interpret the impact of any changes that you make to the team’s way of working?

The predictability score is calculated as the sum of the actual value achieved, expressed as a percentage of the planned / committed value – the sum of the value assigned to the core objectives by the Business Owners during the planning event. The stretch objectives aren’t included in the calculation of the planned / committed value, but their actual scores are included when calculating the team’s actual score.

This means that teams can score greater than 100% if they manage to deliver their stretch objectives. Now this is not a problem but it is also not the objective. Any team that is consistently delivering over 100% is probably declaring too many of their objectives as stretch objectives; are they trying to make themselves look good by gaming the system rather than highlighting that there is true risk inherent in those objectives? As the predictability score uses the values assigned for each objective (and doesn’t just count them and treat them all equally) it is also naturally weighted towards the PI Objectives that the Business Owners felt were most important – the ones they awarded the highest scores.

Any new objectives that were added once the PI was started are treated in the same way as the stretch objectives when calculating the scores. The added objectives don’t contribute to the planned score because they weren’t part of the committed set, but they do contribute to the actuals at the end. The calculated predictability score shows that the team is still predictably delivering even to an evolving plan.
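Putting the two paragraphs above together, the calculation can be sketched as follows (the field names are ours, for illustration only):

```python
def predictability(objectives):
    """Actual value achieved as a percentage of the value committed at
    PI Planning. Stretch objectives and mid-PI additions count towards
    the actuals but not towards the planned/committed total."""
    planned = sum(o["planned"] for o in objectives if o["kind"] == "core")
    actual = sum(o["actual"] for o in objectives)
    return 100.0 * actual / planned

team = [
    {"kind": "core",    "planned": 10, "actual": 10},
    {"kind": "core",    "planned": 8,  "actual": 8},
    {"kind": "core",    "planned": 5,  "actual": 0},   # not achieved
    {"kind": "stretch", "planned": 6,  "actual": 6},   # stretch delivered
]
predictability(team)  # 24 / 23 * 100, just over 100% thanks to the stretch
```

The program-level measure is then the same calculation over the combined objectives of all the teams on the train.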

This image from the Scaled Agile Framework succinctly sums up how the predictability measure is presented and how the overall program predictability is derived from that of the individual agile teams.

Figure 5: Program predictability measure, showing two of the teams on the train and program (cumulative)


The desired operating band for the predictability measure is deliberately a range, typically 80-100%. This is shown as the green band on the graph in Figure 5. The fact that it is a range is important; it gives the teams some room for manoeuvre. If they were tasked with achieving an absolute value then there would be a tendency to manipulate the numbers to exactly achieve that value, whereas when trying to land within a range the truth can shine through. The range also provides space to absorb the variations inherent in our delivery processes.

The results provide lots of food for thought. If individual teams are out of range, get beyond the numbers and find out what problems have afflicted them: it might not be a bad team; it might be that the team sacrificed themselves (for example, by taking all the incoming defects) to allow other teams to maintain their predictability. The key metric, the one to broadcast outside the Agile Release Train, is the aggregate metric for the train as a whole. If that predictability score is in the 80-100% region, then the Business Owners can start to trust that 4 out of 5 things committed to at PI Planning will come out of the end of the Program Increment; perfection is impossible because teams are dealing with problems that have never been solved before, and some of those problems might not have economically viable solutions.

Measuring Predictability and Commitments Achieved NOT Value Delivered
It’s at this point that SAFe®’s use of the term “Business Value” can be potentially troublesome. The true business value delivered is an emergent property of the system, not something that can be attributed to the delivery of individual Features or the achievement of individual objectives. The metric being calculated is not “Value” but “Predictability” against the committed plan. Value is a separate measurement, typically measured outside of the train; the Benefits Hypotheses of the Epics are a good starting point for measuring value.
Figure 6: The 4 dimensions of ART metrics


4.7 Final Words

Over the course of this blog series we’ve explored all aspects of the generation and use of PI Objectives in SAFe®. We have seen why they are such a vitally important part of the framework, how they are generated, and how they are utilised within execution and closing the loop on commitment to a plan.

We have seen how they are an integral part of every one of the management activities in the framework, from PI and Iteration Planning to Backlog Refinement and Inspect and Adapt. When used properly they prevent us from degenerating into a feature factory, enable the empowerment of the teams, bridge the gap to the sponsors and other Business Owners and, possibly most importantly, provide the predictability that is needed to support our on-going, relentless improvement.

We’ve also tried to share the practical experience gained through many years and many PIs in many different organisations, and to tie everything back to the underlying principles that are the guiding force within agile and SAFe®.