
On The Nature Of Portfolios - Portfolio WSJF - Part 5

On The Nature Of Portfolios

“Always show your working out!” was the mantra of my maths teacher in senior school. This series of blog posts, “On the Nature of Lean Portfolios”, is an exploration of Lean Portfolios: the thought processes running through my mind, exploring the possibilities so that I understand why things are happening rather than just doing those things blindly. It is not intended to be a fait accompli presentation of the Solutions within Lean Portfolios but an exploration of the Problems, to understand whether the Solutions make sense. There are no guarantees that these discussions are correct, but I am hopeful that the journey of exploration itself will prove educational as things are learnt along the way.

Portfolio Use of Weighted Shortest Job First

The standard advice within SAFe, when it comes to prioritisation, is to use Weighted Shortest Job First (WSJF). It is a useful simplification that stops Cost-Of-Delay discussions getting unnecessarily complex, but there is more depth to the mechanic than people would assume from reading the SAFe webpage; depth that I’ve previously explored in a blog on The Subtleties of Weighted Shortest Job First.

Weighted Shortest Job First works at the Release Train level because the timeboxes, the Program Increments, exist. The challenge is that at the Portfolio level, where Epics can last for long periods of time, I suspect that the “Shortest Job First” part of “Weighted Shortest Job First” is going to cause some problems. By its very nature it is going to make it difficult to balance Short Term wins against Long Term investment, with the Long Term investment losing. Part of the reason for writing this blog series On The Nature Of Portfolios was to explore this very issue.

What follows over the next few postings is a series of “experiments” to explore the topic and look at the issues that arise.

WSJF, Consider the Risk

It’s all a bit one dimensional. It doesn’t really account for the risk that those profits might not happen!

Ian Spence, SAFe Fellow

Why not explore whether some of the techniques from Risk Management might help? After all, Portfolio Management, at a certain level of abstraction, is effectively risk management: managing the risk that what we’re doing might not get the outcomes we desire. Lean Portfolio Management techniques as promoted by SAFe tend to fare better at this because, when done right, they focus on Outcomes rather than on doing the work, thereby mitigating the risk that doing the work won’t achieve the outcomes.

Risk Factors


Classic risk management considers both Impact and Probability. If there is a high probability that the risk will manifest and a high impact when it does, then it makes sense to actively work to do something about it. If there is a low probability that the risk will manifest and a low impact when it does, then it might not make sense to actively manage the risk; the cost of managing it might outweigh the cost of the damage. For example, a risk with a 10% probability of causing £10,000 of damage has an expected cost of £1,000; spending £5,000 to mitigate it would be hard to justify.

WSJF assumes that the stated outcome will occur; there is no consideration of the probability of it occurring or not occurring. What if the algorithm factored in the probability of an outcome occurring?

WSJF using Total Epic Effort with Outcome Probabilities

A fourth experiment, where the Cost-Of-Delay contributions are adjusted for the Outcome Probabilities and the Total Epic Effort is used as the denominator. The standard table has been supplemented with probability columns for each of the contributors to Cost-Of-Delay. The Cost-Of-Delay is now calculated by multiplying each contribution by its probability and then summing the results.

Epic                 BV   P(BV)   TC   P(TC)   RR|OE   P(RR|OE)   CoD   Effort   WSJF
Product S            13   20%     1    100%    8       100%       5.9   1        5.9
Product L            8    90%     1    100%    1       100%       9.2   5        1.84
Regulatory (Swarm)   21   100%    1    100%    1       100%       23    8        ~3
Enabler Setup        1    100%    1    100%    5       80%        6     5        1.2

Table 1: WSJF using Total Epic Effort & Risk Factors


The numbers going into a WSJF can be very subjective, so some notes on the thinking that led to the above:

  • Product S is risky; it’s got a 20% probability that it will produce its outcomes, and that has reduced its score. It is still winning, but not by such a significant margin as in previous experiments.
  • Product L is already established, so there is high confidence that it will produce its outcomes, but there is still a chance that the chosen functionality doesn’t quite meet customer expectations; hence the 90% probability that it will deliver value.
  • Regulatory will deliver its value if the work is done: 100% probability that value will be delivered.
  • Enabler: there is a slight chance that the chosen architecture won’t work, therefore RR|OE has an 80% probability of delivering.

Even though Product S is risky, which decreases the Business Value contribution, it still wins because its Effort is small compared to the others. This is to be expected; effort is still the dominant factor. The effort of the other items would need to be reduced to bring them in line with Product S’s effort.
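To make the calculation concrete, here is a minimal sketch in Python. This is my own illustration rather than anything prescribed by SAFe, and the function and parameter names are mine; it reproduces the Enabler Setup row from Table 1: 1 × 100% + 1 × 100% + 5 × 80% = 6, and 6 ÷ 5 = 1.2.

    def weighted_cod(bv, p_bv, tc, p_tc, rroe, p_rroe):
        """Cost-Of-Delay with each contribution scaled by its outcome probability."""
        return bv * p_bv + tc * p_tc + rroe * p_rroe

    def wsjf(cod, effort):
        """Weighted Shortest Job First: Cost-Of-Delay divided by job size."""
        return cod / effort

    # Worked example: the Enabler Setup row from Table 1.
    cod = weighted_cod(bv=1, p_bv=1.0, tc=1, p_tc=1.0, rroe=5, p_rroe=0.8)
    print(cod, wsjf(cod, effort=5))  # 6.0 1.2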

Observation: Risk Mitigation

This provides a sensible mechanic for thinking about “What experiments need to be run?” The Epic Owner should be looking at experiments that increase the probability of success for future parts of the Epic; i.e. de-risking the Epic.

Some advice on how to determine the probability scores:

  • Business Value: The probability that the business value will be achieved. The challenge here is that people are notoriously bad at estimating risks and probabilities. I wouldn’t be surprised to hear an executive state “My exceedingly risky idea; I’m 100% certain that it’s going to make all these profits!” A collaborative approach should help because the group will balance out the more irrational individual contributions.
  • Time Criticality: How fixed is the deadline or the urgency? External deadlines imposed by a Regulatory Body that are non-negotiable would be 100% fixed therefore 100% probability that the date is correct. Internal self-imposed deadlines are much more flexible and could score less. Note: always remind people that saying something is 100% fixed isn’t a guarantee that it will be done by this date; it’s just that the date is non-negotiable when it comes to planning.
  • Risk-Reduction | Opportunity Enablement: What’s the probability that the work is going to deliver an outcome? Be careful with experiments; the probability is whether the experiment will complete to produce a result, not a prediction of which result the experiment will produce. A low probability here is an indicator that the experiment can’t be run, typically due to lack of resources. This then becomes recursive; we should do a de-risking exercise to ensure we do have the resources.

Observation: There must be at least one 1 in each column

Weighted Shortest Job First is deliberately set up to give each column equal weighting, and it does this by insisting that each column has at least one 1 in it. Although the Outcome Probabilities may reduce scores still further, as long as there is still at least one 1 in the value parts of the BV, TC and RR|OE columns, they remain equally weighted.
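As an illustration only, here is a hypothetical helper (my own sketch, not part of SAFe) showing one way to honour that convention: rescale each column so that its smallest entry becomes 1, which preserves the equal weighting however large the raw scores are.

    def normalise_column(scores):
        """Rescale a column of relative scores so its minimum value becomes 1."""
        lowest = min(scores)
        return [round(score / lowest, 2) for score in scores]

    # The Business Value column from Table 1 already contains a 1...
    print(normalise_column([13, 8, 21, 1]))   # [13.0, 8.0, 21.0, 1.0]
    # ...and doubling every raw score changes nothing after rescaling.
    print(normalise_column([26, 16, 42, 2]))  # [13.0, 8.0, 21.0, 1.0]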

WSJF using Experimental Effort with Outcome Probabilities

A fifth experiment: within this timebox (Program Increment), what experiment or new functionality does this Epic want to run to provide the metrics that justify its continuation? The Cost-Of-Delay contributions are again adjusted for Outcome Probabilities.

Epic                     BV   P(BV)   TC   P(TC)   RR|OE   P(RR|OE)   CoD   Effort   WSJF
Product S Cust. Survey   1    100%    1    100%    8       100%       10    0.2      50
Product S Prototype      12   30%     1    100%    1       100%       5.6   0.8      7
Product L Feature 1      3    90%     1    100%    1       100%       4.7   1        4.7
Product L Feature 2      2    80%     1    100%    1       100%       3.6   1        3.6
Regulatory (Swarm)       21   100%    1    100%    1       100%       23    2        11.5
Enabler Analysis         1    100%    1    100%    5       80%        7     1        7
Enabler Setup            1    100%    1    100%    1       100%       3     4        0.75

Table 2: WSJF using Experimental Effort & Risk Factors


Compared with Experiment 3’s Experimental Effort approach from the earlier blog, the development of “Risky” Product S has been pushed further down the priority stack, whilst the experiment to de-risk Product S by running a customer survey is the highest priority. The important regulatory work is starting to get prioritised.
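For completeness, here is a short sketch (again my own illustration, using the Cost-Of-Delay and Effort values as stated in Table 2) that recomputes the WSJF scores and sorts the backlog by them:

    items = [
        ("Product S Cust. Survey", 10.0, 0.2),
        ("Product S Prototype", 5.6, 0.8),
        ("Product L Feature 1", 4.7, 1.0),
        ("Product L Feature 2", 3.6, 1.0),
        ("Regulatory (Swarm)", 23.0, 2.0),
        ("Enabler Analysis", 7.0, 1.0),
        ("Enabler Setup", 3.0, 4.0),
    ]

    # Highest WSJF first: the customer survey (50.0) leads, then Regulatory (11.5).
    for name, cod, effort in sorted(items, key=lambda i: i[1] / i[2], reverse=True):
        print(f"{name:24} WSJF = {cod / effort:5.2f}")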

In my mind at least, it’s starting to feel like there is a better balance between Short Term gains and Long Term investment.

WSJF, Conclusions

All models are wrong, but some are useful

George E. P. Box

Weighted Shortest Job First can be improved by reframing what is being Prioritised. This reframing into “which feature set or experiment from an Epic?” still falls in line with our Principles and Lean-Startup thinking.

The reframing does need to be treated with care to avoid falling into bad habits. If Work-In-Process limits are not established and honoured, then the process will lapse into every Epic getting a little bit done and everything being done in parallel, so everything is delivered late.

Weighted Shortest Job First just prioritises; other mechanics are explicitly needed to deal with Cancellations, and those will need to draw upon other metrics within an Epic to justify the Epic’s continued existence.




As we’ve seen with the Risk Factors experiment, the algorithm could be refined, possibly improved, but adding more and more parameters is a case of diminishing returns; the effort involved in calculating more parameters will far outweigh the improvement in the results. Perhaps most importantly, WSJF isn’t “The Answer”; it’s a tool for framing the conversation “What should we prioritise?” Those conversations agreeing priorities are more likely to stop an organisation from imposing an “It’s all got to be done and it all has to be done now!” mentality, which will inevitably result in overloaded teams and eventual failure.

Throughout the last few posts we’ve highlighted that Weighted Shortest Job First is just a prioritisation mechanic and other techniques and activities need to accompany it as part of the set of Portfolio Processes. In the next few posts we’ll look at the feedback cycle of Epics and put the case for and against splitting large Epics.

I revisit the topic of WSJF in Part 6 of this series.
