Questions and Answers
Questions and Answers from the webinar on Writing Better Features In SAFe
What about capabilities?
Capabilities are not another level in the work item hierarchy; they are a grouping mechanism for Features, used when a group of Features all need to be done in the same Planning Interval but by different Agile Release Trains within a Solution Train.
By way of example, imagine that you’re an aircraft manufacturer building an aircraft. The Agile Release Trains are all manipulating the same solution, the aircraft, but have been arranged around their skills: there’s a mechanical engineering ART, an electrical engineering ART and a software engineering ART. The challenge is that the aircraft needs to be able to “turn left”, and each ART will have a Feature for doing its part of “turn left”. Those Features all need to happen in the same PI, because you don’t want to be the test pilot up in the sky trying to test “turn left” only to find out that the electrical ART hasn’t done the cabling; most test pilots I’ve spoken to would consider this bad, really bad! The Capability is there to hold the Features together as a group, to ensure that they all go into the same PI. Usually at the pre-planning event the central roles from each ART, the PM, SA and RTE, gather together to ensure that their backlogs are aligned, and any Capabilities are split up. Splitting a Capability doesn’t require additional work; the Feature that goes into the ART backlog is the same as the Capability, it’s just the bit that this ART can do, which is defined by the skills and knowledge of the ART.
Lots of organisations want to use Capabilities to group related Features together, even within the same backlog, but you have to stop and ask yourself: why? Why do you want to group the Features together? There could be many answers. One might be that the Features are badly formed; they aren’t valuable and releasable in their own right, so a set of them is needed to achieve value, and the Capability provides that. The real fix would be to rewrite the Features as valuable and releasable changes, rather than introducing an unnecessary level of aggregation. Another common reason is that the stakeholders want all of the functionality and are worried that if it isn’t grouped together they might lose some of it. Their concern is valid, but grouping the work together means that the less valuable pieces of the group could be pushing out the more valuable pieces of other work. Are the stakeholders still in the mindset of “it’s done when it’s all done” rather than chasing value?
A friend and colleague of mine has been advising the Square Kilometre Array, a massive scientific experiment with at least three very specialised ARTs running. The specialism of the ARTs is around the scientific investigations, i.e. spotting different types of astronomical phenomena in the dataset. They use Capabilities to ensure that any underlying architectural changes are reflected in the work that each ART is doing, but the majority of the Features are local to each ART. The Capabilities ensure that the underlying architecture moves forward at the same time, without breaking each of the local scientific domains that are running fairly independently. It’s also worth pointing out that the Capability doesn’t tell the ARTs what to do; it triggers them to form a cross-ART architectural conclave to work out what to do to make the architectural changes.
Use Capabilities as a grouping mechanism when you absolutely have to, but only when you absolutely have to. Empower ARTs to have their own local features wherever possible.
Any thoughts/hints on how to “build confidence” in teams so they are comfortable with imprecise estimates?
I can feel the worms pushing up the lid of the estimation can…
The first question to pose would be “why are they uncomfortable with imprecise estimates?” What has happened in the past that is causing this lack of confidence? Do some root-cause analysis and drill down; the lack of confidence is a symptom, and there will be some underlying reasons why.
The next step is to make sure that everyone understands what the estimates should be used for and what they’re not used for. The Core Value of transparency is required here.
From a team perspective, estimates are used to help the team make sensible commitments. Have you taken on enough work, but not more than you can do? Commitments are made to objectives; the underlying work can keep evolving and changing, and as long as the commitment to the objective is met, the work and any estimates on it are irrelevant. The team can use insight from doing the work to adjust future estimates, but that’s a closed feedback loop that is entirely contained within the team.
From an organisational perspective, estimates are used for forecasting. Be very clear with the organisation that forecasting will not tell you that you’re going to get a piece of functionality at 4:15pm on the first Tuesday of the fourth month in 2025; forecasting gives you insight into whether the date, the deadline, looks achievable. If the answer is no, or that it’s a little too close for comfort, then that insight is used to affect the current plans: clear irrelevant work out of the way, organise more budget and therefore staff, or move the deadline. There’s lots that can be done now if the organisation knows to do it; do it now, because if you leave it until the deadline it’s too late. Forecasts are not a commitment; commitment comes through PI Planning and only for the immediate timebox. Long-term commitment is foolish because there is too much happening beyond the organisation’s control for the commitment to stick. Instead, it gets replaced with repeatedly forecasting whether the dates are achievable, and using that insight to adjust the plans being made Planning Interval by Planning Interval.
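One common way to produce this kind of “does the deadline look achievable?” insight is Monte Carlo simulation over historical throughput. The sketch below is illustrative only; the item count and weekly throughput figures are invented, and real forecasting tools offer much more, but it shows how a forecast yields percentile outcomes rather than a single date:

```python
import random

def forecast_weeks(remaining_items, weekly_throughputs, runs=10_000):
    """Monte Carlo forecast: repeatedly replay randomly sampled historical
    weekly throughputs until the backlog is empty, then report how many
    weeks it took at the 50th, 85th and 95th percentiles."""
    outcomes = []
    for _ in range(runs):
        todo, weeks = remaining_items, 0
        while todo > 0:
            todo -= random.choice(weekly_throughputs)  # sample a past week
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return {p: outcomes[runs * p // 100 - 1] for p in (50, 85, 95)}

# Hypothetical data: 60 items remaining, recent weekly throughputs.
forecast = forecast_weeks(60, [3, 5, 4, 6, 2, 5, 4])
```

If the 85th-percentile outcome lands beyond the deadline, that is exactly the early insight described above: act now by clearing work out of the way, organising more budget, or moving the date.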
This is behavioural change, and there are no shortcuts to behavioural change; you have to coach the individuals concerned through it, and that can take a considerable amount of elapsed time. It requires considerable empathy from the coach. Those misbehaving managers: why are they misbehaving? What are they worried about? Why are they fixated on perfect estimates? Why is that pushed onto the teams? There’s root-cause analysis to be done there, to properly understand the other side of the story.
Where does Use Case documentation fit in?
Whilst Ivar Jacobson the person is the inventor of Use-Cases, and Ivar Jacobson the company is the go-to place for training and support with Use-Cases, this is one of the places where I have to raise my hands and admit that I’m not an expert in Use-Cases; some of my colleagues have, quite literally, written the book on Use-Cases. Whilst I might not be the expert on how to construct a Use-Case, I can explain how they interact with the Scaled Agile Framework…
Use-Cases are a very good way of describing your solution intent; they can become the permanent record of what the solution should be doing, without getting lost in all the detail that is really the domain of the code and the tests that validate that the code is doing what it’s supposed to be doing. SAFe Features are not a good way of describing what your system is doing. Nobody in their right mind would try to reconstruct the behaviour of the system from the sum of all the changes that have ever been made to it; they either look at the permanent record or, if that doesn’t exist, they go straight to the code or physical design of the system, which is the ultimate description of what it will do, bugs and all!
SAFe Features are tokens that are used to trigger trains and teams to change things. The obvious changes are to the code or designs, but they can also be used to change the Use-Cases. Use-Cases don’t appear fully formed; they evolve over time as part of the evolution of a long-lived solution. Features can trigger trains and teams to update their Use-Cases, and in turn the changes to the Use-Cases trigger the creation of Features that will change the code or designs. Depending on the Use-Case, a Feature could represent a slice through a Use-Case; if the slice is big then the Feature might be just a piece of that slice. The Feature splitting patterns can be applied to break the big things up into smaller, manageable chunks that fit within the timeboxes and can be sensibly prioritised and developed by the Agile Release Train.
Scaled Agile chose to name the work item a Feature because it’s a word that people are familiar with; however, that familiarity can cause problems when people assume that it’s something it’s not. Always remember that a SAFe Feature is a token representing a change to the solution. It is not a description of what the solution will do; that can be elaborated, and ideally added to the permanent record, as part of making the change.
What would you suggest for creating value-driven objectives for features in challenging domains such as analytics work?
I could have sworn that I discussed this in the series on Writing Good Objectives that Ian Spence and I co-authored, however I can’t find the exact quote so I’ll just recount it here!
My assumption is that the analytics work is experimental work, investigations, and the results of the work can’t be known in advance.
In the past I have had problems with teams struggling to write the objectives because they’re trying to predict the outcome of the experiments so that it can be written into the objective, but that is impossible to do because the only way to know the outcomes is to run the experiments! The trick is to write the objective to state that you will run the experiment and you will gain the knowledge, and perhaps describe the decisions that knowledge will influence to show that the knowledge has value; don’t write the results of the experiment into the objective, because they can’t be known upfront at planning.
When it comes to assessing the Objective at the end of the PI you can say that the work has been done, the knowledge has been obtained. Whilst you assess the objective at the end of the PI, the knowledge should have been shared earlier, as soon as it’s been gained, so that it can influence the preparation for the next PI.
Does the functional size of the piece of software you will build matter when making an estimate?
If by functional size you mean the change being made to the software, i.e. a Feature or Story, then yes, the size of the change is going to affect the estimate. Bigger changes will get bigger estimates, smaller changes will get smaller estimates. The bigger the estimate, the greater the error margin; for some big things all the estimate tells us is that it’s too big and will need slicing down into smaller, more manageable parts at some point in the future.
If by functional size you mean the size of the software being changed by the Features, then that shouldn’t affect the estimates for the changes being made; the Features or Stories are the changes that they are. There can be some challenges with creating estimates for Features that manipulate large software systems: Features can cut across areas of knowledge about the software system, so to create an estimate for the Feature you need knowledge and insight into all of the areas being manipulated. I always suggest that Feature estimation should be done by a group of engineers facilitated by the System Architect. The group is drawn from the Agile Release Train and should cover all of the technologies, disciplines, etc.; ideally it’s 5-9 people in size, because we know that that’s a good size for collaborative working and discussions. Collaboratively they come up with the Feature estimates, and the System Architect can carry those estimates, and knowledge from the discussions, into the Weighted Shortest Job First prioritisation (https://www.ivarjacobson.com/publications/blog/wsjf-and-feature-slicing). Centralised estimation, which Feature estimation tends to be, is fine for the purposes of prioritisation and forecasting, but anywhere that commitment is made the estimates need to come from the people making the commitment. Commitment in SAFe is made in PI Planning, and the estimates on the Stories have come from the teams, the people making the commitment.
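For reference, SAFe’s WSJF divides Cost of Delay (user-business value + time criticality + risk reduction/opportunity enablement) by job size. A minimal sketch of the calculation follows; the feature names and scores are purely illustrative, not from the webinar:

```python
def wsjf(business_value, time_criticality, risk_or_opportunity, job_size):
    """Weighted Shortest Job First = Cost of Delay / job size."""
    cost_of_delay = business_value + time_criticality + risk_or_opportunity
    return cost_of_delay / job_size

# Hypothetical backlog: (name, BV, TC, RR/OE, size) in relative units,
# as typically scored with a modified-Fibonacci scale.
backlog = [
    ("Turn left",         8, 13, 5, 8),
    ("Cabin lighting",    5,  3, 2, 3),
    ("Telemetry logging", 3,  2, 8, 5),
]

# Highest WSJF first: small, urgent items beat big, merely valuable ones.
ranked = sorted(backlog, key=lambda f: wsjf(*f[1:]), reverse=True)
```

Note how the small “Cabin lighting” item outranks the bigger “Turn left” despite a lower Cost of Delay; dividing by job size is what rewards the Feature slicing discussed in the linked article.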
SAFe also includes the concept of MMF, but the way you explained features now seems to be the same as MMF. How should we think about it?
I would suggest that Features in the backlog, tokens to get work done, want to be the smallest they possibly can be. Think Lean Startup: what’s the minimum we can do to prove the viability of this product, or this feature idea? Then incrementally grow out from there. Some “product features”, as the customer perceives them, might be comprised of many SAFe Features building up to the desired functionality.
If you’re thinking in small, potentially releasable increments, then you are in a position where they can be released. Perhaps not externally, but internally, to test that it works with adjacent systems. Perhaps released to friends and family to solicit feedback, gaining knowledge about how to steer the teams towards successfully achieving the big dream. Dividing the big dream up into smaller increments is also useful because you might realise that you don’t need everything you dreamt of in order to achieve the value of the dream. The valuable bits of the dream can be released early and can generate value for the organisation, while the nice-to-have pieces may never get done, because the most valuable bits of the next big dream have arrived and win at WSJF. This is a lot easier if the organisation has a mindset of chasing value; if its mindset is “do all the work” then it will struggle with the fact that nice-to-have bits haven’t been done, even if they’re not valuable. Many organisations fall into the “do all the work” mindset because tracking work is easy, whereas tracking value requires real effort to analyse the sales results or parse customer feedback. A significant part of the Epic Owner and Product Manager roles is doing the hard work of tracking value to gain insight into what work should be done next.
Regarding building confidence, is part of it building up an organizational muscle to accept a little ‘waste’ along the way in order to maximize the benefit later? What I mean by that is that many orgs initially view things that may result in learning as ‘waste’. Thoughts?
It’s not about building up acceptance for waste; if anything, the goal is to minimise the amount of waste. That’s Lean, pure and simple.
The acceptance that needs to be built up is that “not all work has direct end-customer value”. Learning is valuable: it positions the organisation for future challenges, and it provides information and insight so that the organisation can do the right thing in the future. When describing work that has internal value, be it Features going into PI Planning or Objectives coming out of PI Planning, explain what the value to the organisation is. What decisions does the knowledge support? Too many teams just describe what they’re doing rather than why they are doing it, and the why is needed to allow the wider organisation to understand the value the work provides.
There always needs to be a balance between internal value and external value; that’s where capacity allocations kick in. It’s often simplistically miscategorised as architecture vs business, but that’s not really the case. The internal capacity is for whatever the train or teams know they need to do to prepare for the future, of which learning and upskilling can be a part. The external capacity is for the business requests, the external, end-customer value. If a business request needs some preparatory work, that doesn’t come out of the internal capacity reservation; it comes out of the business’s capacity reservation, because it’s for them. Regardless of which capacity reservation it comes from, it all runs through the WSJF together first; the capacity reservation is there to allow the train and teams to bring work to the WSJF discussion. This means that the internal work is competing against the external work, so, as described above, it must present itself in terms of the value it brings in order to be ranked against the external work.
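To make the mechanics concrete, here is one possible sketch of a capacity reservation applied to a single WSJF-ranked backlog. The reservation model, the 25% internal share, and the feature list are all assumptions for illustration, not a prescribed SAFe mechanism:

```python
def plan_pi(ranked_backlog, capacity, internal_share=0.25):
    """Fill a Planning Interval from a WSJF-ranked backlog while honouring
    a capacity reservation: internal (preparatory/learning) work may use up
    to internal_share of capacity; the remainder goes to external work.
    ranked_backlog: list of (name, size, is_internal), best WSJF first."""
    budgets = {True: capacity * internal_share,         # internal reservation
               False: capacity * (1 - internal_share)}  # external reservation
    plan = []
    for name, size, is_internal in ranked_backlog:
        if size <= budgets[is_internal]:
            budgets[is_internal] -= size
            plan.append(name)
    return plan

# Hypothetical ranked backlog mixing internal and external work.
plan = plan_pi(
    [("Upskill on telemetry", 5, True),
     ("Customer dashboard", 8, False),
     ("Billing export", 6, False),
     ("Refactor auth module", 4, True)],
    capacity=20)
```

Because both kinds of work flow through one WSJF-ranked list, the internal work only gets picked up when its stated value earns it a high rank; the reservation merely guarantees it a seat at the table.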
Back to Writing Better Features In SAFe