On The Nature Of Portfolios
“Always show your working out!” was the mantra of my maths teacher in senior school. This series of blog posts, “On the Nature of Lean Portfolios”, is exactly that: an exploration of Lean Portfolios. It is the thought processes running through my mind, exploring the possibilities so that I understand why things are happening rather than just doing those things blindly. It is not intended to be a fait accompli presentation of the Solutions within Lean Portfolios but an exploration of the Problems, to understand whether the Solutions make sense. There are no guarantees that these discussions are correct, but I am hopeful that the journey of exploration itself will prove educational as things are learnt along the way.
Epic Flow: does it matter?
As mentioned in the preamble, these posts are experiments and explorations, and because of that they don’t always turn out how I thought they might when I first sketched a title down on paper.
This post came about because a client I was working with had imposed an arbitrary “no more than 3 PIs” rule on the size of their Epics. The rationale behind the rule was that in the past they had been burnt by multi-year projects that ran and ran without delivering; by setting a limit of 3 PIs they would force value delivery to occur more regularly. The challenge, however, is that some of their work is genuinely going to take multiple years to complete, and breaking it into 3-PI sections meant that sight of the overall value was being lost. The fix for that is to instigate a grouping mechanic, as discussed in the previous blog. This creates a new challenge: the grouping mechanic starts to look a lot like the old multi-year projects. They started wanting to attach funding to it, the work wanted to employ its own people; the old world reasserting itself with some new terminology and next to no behaviour change, and a fight to hold them to the transformation that they were after.
What kept returning to my mind was: as long as you have good metrics that are being regularly evaluated to steer the Epic then, even if the Epic is slow moving, it’s not a problem. To that end I started to construct an example…
Arguably the epitome of Lean Flow is the automotive industry: those continually moving production lines and the ballet of hundreds of workers, robots and machines dancing around, assembling the vehicles.
Pure, mesmerising, flow.
However, step back from the production lines and look at the Portfolio: the types of vehicle that each manufacturer is producing. Each manufacturer has a portfolio of just a handful of vehicle types. Each type might be massively customisable in terms of colour, interior fittings, engine, etc., but they’re still the same type, because each type needs its own unique factory. What’s more, that Portfolio of vehicle types changes very, very slowly: a new type appears every few years; another disappears as consumer tastes change.
Does the portfolio have flow? If it does, it’s glacial flow.
Does it matter? Despite the perfect storm of the demise of hydrocarbon fuels, robotisation of driving and the coronavirus pandemic, the car manufacturers are still fairly successful companies.
Except my automotive example is wrong. The Production Lines and their product, the vehicles, are part of the Organisational Hierarchy¹. The R&D department is the set of Development Value Streams that enact change on the Organisational Hierarchy by evolving the solutions that the Organisational Hierarchy uses; in this case the solutions being manipulated are the production lines themselves and the designs of the vehicles those production lines are manufacturing. There can be a flow of Epics across R&D that are continually updating the vehicles, e.g. new interior fittings, upgraded drivetrains or more efficient manufacturing processes, all without affecting the actual Portfolio of vehicles.
A bad example, but, invoking the scientific method, we can learn as much from a failure as we can from a success. That the slow-moving things might be manipulated by the change, rather than being the change itself, is a useful learning point. However, it doesn’t answer the question initially posed.
Flow needs to match Feedback
This blog post almost ended at the line above and was about to be consigned to the waste bin, when I recalled that in the previous post I had written:
> “Epics need to be sizeable enough so that they have the opportunity to have decisions made around them; otherwise the opportunity to decide if this is valuable occurs only at the point where the Epic is approved to consume investment.”
Which set me thinking about flow again; the Flow needs to be appropriate for the level of the system. The rate of Flow needs to match the feedback cycle that level can support.
If the work is flowing through faster than the rate at which the feedback cycle is capable of providing proof that the work has generated value, then it’s just work being done rather than value generation. The information that could steer the work is not being gathered quickly enough to provide that steering. This triggered an old memory about Sampling Theory and the Nyquist Frequency.
I’m not going to pretend that I’ve worked through all the maths in detail, I haven’t, but the short answer is that in a discretely sampled system, frequencies higher than half the sampling rate start to appear as aliases. If Epics become too small, the rate at which they are flowing is greater than the rate at which the Portfolio is sampling and processing information. The Portfolio process starts to fall apart because the Epic has been completed before the Portfolio has received information that could affect its steering.
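To make that concrete, here is a minimal sketch. The function name, the weekly units and the 12-week review cadence are all my own illustrative assumptions, not anything from SAFe: it simply counts how many portfolio review points occur while an Epic is still in flight. Once an Epic’s duration drops below one review interval, the count hits zero and the Portfolio never gets a chance to steer it.

```python
# Illustrative sketch (hypothetical numbers): how many portfolio
# reviews can steer an Epic before it completes?

def steering_opportunities(epic_duration_weeks: float,
                           review_interval_weeks: float) -> int:
    """Number of portfolio review points that fall within the Epic's
    lifetime, i.e. chances to steer, pivot or cancel it."""
    return int(epic_duration_weeks // review_interval_weeks)

# Assume a quarterly (12-week) portfolio review cadence.
REVIEW_INTERVAL = 12

for duration in (6, 12, 36):
    n = steering_opportunities(duration, REVIEW_INTERVAL)
    print(f"{duration}-week epic: {n} steering opportunities")
# A 6-week epic gets 0 opportunities: it is done before anyone can decide.
```

The 6-week Epic is the aliasing case: it flows faster than the Portfolio samples, so from the Portfolio’s point of view it was never steerable at all.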
Why is this a problem?
You can’t cancel anything: the work will have completed before the Portfolio gets a chance to decide. If you can’t cancel anything, you can’t cut your losses and transfer the effort to something else. Every Epic will do all of its work; there is no “official” opportunity to stop early because you’ve already achieved the business outcomes.
So what? Just don’t approve things!
Which puts a lot of onus on the approvals process, which is the point at which there is the least information available, and the information that is available is the least reliable. It’s easy to make the mistake of adding more process to try and improve the information available in the Lean Business Cases, and before long the Lean Business Case has started to lose its lean-ness. What drives the desire for more detail is that the risk has been condensed into the approval point rather than being distributed across a number of feedback cycles that could catch any bad approval decisions.
Just sample more regularly! Increase the frequency at which metrics are gathered and decisions are made.
Yes, you should be gathering metrics more regularly; I’d advocate that an Epic should be updating its metrics every sprint/increment. But not all metrics are within the control of the Portfolio.
Whilst it might be possible to increase the sampling of metrics that are within the Portfolio’s control, some metrics, such as customer responses, will occur at the rate the customer wants to respond. Also, “One sample point does not a statistical analysis make”²: trends may take time to appear within the data, and that time can’t always be condensed. You might want to run faster, but your data might not let you.
Hitting The Right Level
There are two key criteria for work being classed as an Epic:
- Does not fit within a Program Increment; it will take multiple Program Increments to complete
- Cross-cutting; it requires work from more than one Development Value Stream
If the work within an Epic is less than a Program Increment, should the work really have been done as an Epic? Could it have been negotiated into Trains or Teams as a Feature? Epics carry their own burden in terms of preparing the Lean Business Case; if Epics get too small, that burden could outweigh the value being returned. Approvals meetings turn bad due to the sheer volume of little things that need approving.
If all the little changes are cutting across multiple Development Value Streams, then that is an indication that the network of Development Value Streams isn’t aligned with the work, and hence with the generation of value. Rethink the Development Value Stream network.
This wasn’t the post that I expected when I first sketched the title several weeks ago.
What started out as an argument about “Why can’t Epics be big?” changed into “What happens when Epics are too small?” Epics can become too small; when they do, the processes start to fall apart. Trying to game the WSJF means that organisations can end up in a race to the smallest, without realising that they’ve destroyed the Portfolio’s ability to function.
There’s more to explore with Epics, particularly their lifecycle, but to get an appreciation of the lifecycle the next few posts are going to have to detour into budgeting and the murky world of money.
#1 SAFe follows Kotter’s approach to a Dual Operating System for an organisation. My fellow Fellow Ian Spence talked about this at the 2020 SAFe Summit, a recording of which is available here.
#2 From One Swallow Does Not A Summer Make.