
Where might the Black Swans be lurking?

Summary: In September last year, I spoke at Anna Royzman’s New Testing Conference “Reinventing Testers Week” in New York about testing in Black Swan domains. The talk’s title refers to Nassim Taleb’s book “The Black Swan”, and the talk concerned testing in contexts where improbable risks can have disproportionate effects. This blog post contains an invitation to a peer conference in New York on the subject on Sunday, April 30th.


Sometimes things happen which appear to be beyond “the possible”.
This awareness haunts us in testing: We aim to get those important bugs to materialize. We want to qualify real and serious risks. Yet, our stakeholders have to accept that no matter how much testing is done, we cannot cover everything.
Logically, testing has to be focused on what is important to the business, and what might go wrong in reality. To me, that is at the core of risk-based testing.
But saying that is one thing; doing it in reality is quite another. Certain risks seem almost impossible to qualify through testing. How should our stakeholders interpret the absence of clear testing results, for example, when we are trying our best to dig out quality information about quality? Could there be a serious problem lurking? The thought may seem paranoid, but experience shows it is not.
Years ago, I read Nassim Taleb’s “The Black Swan – The Impact of the Highly Improbable”. The book blew my mind and set me on a path to find out what we can do in testing about what he writes about.
The book is about “the random events that underlie our lives, from bestsellers to world disasters. Their impact is huge; they’re nearly impossible to predict; yet after they happen we always try to rationalize them.” (from the backcover of the 2010 paperback edition)
As an engineer and a human, I think testers and test managers should not give up and leave it to product owners, project managers, or developers to interpret testing results and take care of potential Black Swans. As a tester, I wish to embrace the possibility of Black Swans and do quality testing with the aim of qualifying them.
I think, however, that we need new models in testing. The problem is that most of our techniques and heuristics tend to support us best on the functional testing level.

Accident Focused Testing?

The first step in solving a problem is accepting it. It sounds basic, but acceptance implies understanding what we are dealing with. Reading Taleb’s book confirmed to me that we have to accept the fact that really bad things can happen in the world. Knowing what I do about information technology, I appreciate that his philosophy can be applied to technology. I also believe that functional testing will not help us much.
Mentally examining what I do as a tester, I understood that the idea of Black Swans is fundamental to the very nature of what we do and the systems we work with.
So much for acceptance.
The problem is that in some contexts – banking, healthcare, industrial plant management, safety systems, the public sector, transportation, etc. – accidents and Black Swans can be of a nature where they cause irrecoverable losses, put lives at stake, or are otherwise fundamentally unacceptable.
Let me give an example:
I recently came across a story of an interesting IT breakdown at a hospital in Sweden. It concerned something most people do on a regular basis: applying the newest updates to our PCs.
As updates were rolled out in the hospital, the performance of PCs started degrading. During the rollout the problems became worse, and before the rollout could be stopped, all computers in the hospital had become useless.
With the computers no longer working, undoing the rollout became extremely difficult and had to be carried out manually, one PC at a time.
In the end it took IT operations several days to get everything back to normal. Meanwhile, the hospital had to be run “on paper”.
The hospital used an uncommon Windows network configuration, not recommended by Microsoft, which in combination with the update triggered a problem in the network. What is interesting here is not the root cause, however: the outcome of a seemingly trivial update in a complex system turned out very badly.
It is easy to imagine how the stress experienced by doctors and nurses due to this situation could have affected patients. Someone could have been hurt.
We can shrug and blame Microsoft or the hospital IT operations. However, as skilled testers, I think we need to be able to provide some kind of answer as to how we can constructively contribute to hospital safety by qualifying even Black Swan types of risks.

Systemic risks

Before moving on, let me dive into the subject of risk. Risk is something we all talk about, but do we really know what it means? I’m not sure – the concept of risk is in no way simple.
There seem to be at least three “risk domains” in software projects:

  • Some risks concern plans and schedules. Will the project be done on time and budget? That’s what we usually call “project risks”.
  • Other risks concern the product or system under development: Will it do what it is conceived to do? Will it do it correctly? These are called “product risks”.
  • Then there is a third class, a class of risks of a different nature: systemic risks. They arise from combinations of systems, users, data, and environments.

Black Swans lurk in all three: even simple products or components can sometimes fail in strange ways with huge impact. Just think of the Galaxy Note 7 battery problem, which was only a manufacturing defect in the battery, but one which caused a lot of harm to Samsung.
Black Swans are sometimes annoyingly simple.
But those kinds of Black Swans can be prevented by stricter quality control and similar traditional measures. Project and product risks are usually relatively easy to deal with using appropriate care in the context.
Systemic risks are different. They seem much more troublesome – and in some ways more interesting.

From simple to complex

Back in the early days of computing, I think systemic risks used to be rather uninteresting. Systems were simply… simple. Developing a new product, we would sometimes work to make sure usability was good, or that the machine the computer system was designed to control would work as a whole.
But that was it. Interfaces and interactions with other systems and contexts could be counted on one hand, and there were usually very few connections to other computer systems.
If you have been interested in risk in software, you may have read about the Therac-25 accident. If not, let me summarize: a difficult-to-find multitasking bug in the control software of a radiation therapy machine turned out to be the root cause of apparently random radiation burns of cancer patients placed in the machine for treatment. Some of these burns were fatal.
Obviously a Black Swan: a difficult-to-find bug in a flawed design.
The system was simple, however, as there were only four components in it: the user, the user interface software, the machine control software, and the machine itself. Of course, there were also the patients, the victims of the accidents, but they were only receivers of the problem. (Some victims did attempt to provide feedback, though.)
The issue turned out to be a simple multitasking problem: experienced operators who were fast on the keyboard used to control the machine could cause the software to enter illegal states. In other words, a software engineering problem.
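To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of the kind of timing assumption that can break when an operator is faster than the software expects. It is not the actual Therac-25 code; the names, structure, and delays are invented for illustration only.

```python
# Hypothetical sketch - NOT the Therac-25 code. It shows a setup routine that
# snapshots shared state once and assumes the operator is too slow to change
# it before the "beam" fires.

import threading
import time

settings = {"mode": "x-ray", "dose": "high"}   # shared, unsynchronised state


def setup_beam():
    planned = dict(settings)   # snapshot taken at the start of a slow setup phase
    time.sleep(0.5)            # magnets moving, screens refreshing...
    # Fires using the stale snapshot without re-checking the live settings.
    print(f"Machine delivered: {planned}")


def fast_operator():
    time.sleep(0.1)            # an experienced operator edits within the window
    settings["mode"] = "electron"
    settings["dose"] = "low"
    print(f"Operator's screen:  {settings}")


t1 = threading.Thread(target=setup_beam)
t2 = threading.Thread(target=fast_operator)
t1.start(); t2.start()
t1.join(); t2.join()
```

Run as written, the “machine” delivers one configuration while the operator’s screen shows another. A careful tester typing at a comfortable pace would probably never trigger the mismatch – which is part of why such bugs qualify as Black Swans.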
Today, however, complexity is increasing. To me at least, it seems our industry has crossed a boundary: the number of components that work together in complex ways to realize important business functionality has grown significantly. While counting can never tell the whole truth, it is worrying that modern systems can comprise tens, even hundreds, of components that are assumed to work seamlessly together on sunny days. Often no one knows what will happen when the sun goes away and the rain comes, so to speak.
Systemic risk in IT systems is no longer something that can be excluded from risk analysis and managed technologically.
So why are we not spending more time testing based on systemic risk analyses?

Explore the whole picture

Some readers might think of the Cynefin framework, and yes, I think it certainly appears promising as Cynefin provides a thought framework for understanding complex and complicated systems.
I took a different path, however, when I explored the situation: I looked at safety engineering and mechanical safety analysis. I can recommend two books in particular:

(In a later blog post, I’ll come back to what I found in these two books, but you can get a peek of it in the presentation recording at the bottom of this blog. I’ll certainly also be coming back to Cynefin, as it seems promising.)
But there might be a bigger problem to address too, as a management problem seems to be worsening the situation: testers very often do not receive sufficient freedom to test the “big picture”.
When did you last hear of a tester tasked with testing a product in complete integration with real users over a long time? I’d like to hear about examples, as very often, when I talk to people, I hear about product owners, project managers, or C-level managers deciding and tightly controlling what should be tested.
And risk reporting to the rest of the organization is filtered through these levels.
Focus is too often only on going live on time, on schedule, no matter what – and too seldom on qualifying complex or systemic risks.
I think testers should be tasked to explore the dynamics of the product in contexts resembling the real world.

Speaking about testing in a Black Swan Domain

I spoke about this for the first time at the first Let’s Test conference in Stockholm in 2012 (slides – PDF) and for the second time in September 2016 at the New Testing Conference during “Reinventing Testers Week” in New York. Scroll down to see a recording of the latter presentation.
The feedback I received at those two events has confirmed to me that this is a subject that needs exploration. Our craft can be advanced to go beyond the functional, performance, and usability perspectives. New models in testing, heuristics, and even new types of testing strategies can be developed, I think.
Going it alone can be difficult, and I’m therefore extremely grateful to have received moral backing from both Michael Bolton and Fiona Charles. Additionally, Anna Royzman has agreed to co-host a peer workshop on the subject in New York with me in connection with her May conference.
I find New York an interesting place for a few reasons:

  • It is where I talked about the subject last time.
  • Nassim Taleb lives in New York.
  • It is a very big city – so big that it is difficult to comprehend for someone like me who comes from a little country with less than half its population. New York seems a complex system beyond imagination.
  • It is the world’s financial centre, and some of the systems running it are extremely complex. I try not to think about what types of systemic risk they manage on a daily basis.

If you are interested, feel you have something to contribute, have time, etc., it would be great to see you at the first WOTBLACK: Workshop on Testing in Black Swan Domains, on Sunday, April 30th in New York.
The objective?
Advance the testing craft by co-developing and sharing models, heuristics, and strategies.
Write me an e-mail if you’re interested in participating, or ping me on Twitter if you feel you have something to share now or wish to start a discussion about the subject.


Are you playing Russian roulette? Learning from failure

I think most (if not all?) testers have witnessed situations like this: a new feature of the system is put into production, only to crash weeks, days, or just hours later.
”Why didn’t anybody think of that?!”
Truth is, quite often, somebody did actually think about the problem, but the issue was not realised, communicated or accepted.
Below is the story about the space shuttle Challenger accident in 1986.
Disaster…
Twenty-nine years ago, the space shuttle Challenger exploded 73 seconds into its flight, killing the seven astronauts aboard.
Theoretical physicist Richard Feynman was a member of the accident commission. During the hearings he commented that the whole decision-making in the shuttle project was ”a kind of Russian roulette”.
The analogy is striking. Russian roulette is only played by someone willing to take the risk of dying.
I don’t know anyone who deliberately wants to play Russian roulette, so why did they play that game?
Feynman explains: ”[The Shuttle] flies [with O-ring erosion] and nothing happens. Then it is suggested, therefore, that the risk is no longer so high for the next flights. We can lower our standards a little bit because we got away with it last time…. You got away with it but it shouldn’t be done over and over again like that.”
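To see why “nothing happened” proves very little, here is a small, purely illustrative calculation; the 1-in-100 per-flight failure probability is an assumption chosen for the example, not an official figure.

```python
# Illustrative only: a clean track record is likely even when per-flight risk is high.
p_failure = 0.01   # assumed probability of catastrophic failure per flight
flights = 24       # roughly the number of shuttle missions flown before Challenger

p_no_disaster = (1 - p_failure) ** flights
print(f"Chance of seeing no disaster in {flights} flights: {p_no_disaster:.0%}")
# ~79% - so "we got away with it so far" is weak evidence that the risk is low.
```

In other words, a string of successful flights is almost exactly what you would expect to see even if the true risk were frighteningly high.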
The problem that caused the explosion was traced down to leaking seals in one of the booster rockets. On this particular launch, ambient temperatures were lower than usual, and for that reason the seals failed. The failed seals allowed very hot exhaust gases to leak out of the rocket combustion chamber, and eventually these hot gases ignited the many thousand litres of highly explosive rocket fuel.
Challenger blew up in a split second. The seven astronauts probably didn’t realise they were dying before their bodies were torn to pieces.
It was a horrible tragedy.
Chapter 6 of the official investigation report is titled: ”An accident rooted in history.”
The accident was made possible by consistent misjudgements, systematically ignored issues, poor post-flight investigations, and ignored technical reports. It was caused by three seals failing on this particular launch, but the problem was known, and the failure happened because the warnings were systematically ignored.
The tester’s fundamental responsibilities
As a tester, I have three fundamental responsibilities:

  1. Perform the best possible testing in the context.
  2. Do the best possible evaluation of what I’ve found and learnt during testing. Identify and qualify bugs and product risks.
  3. Do my best to communicate and advocate these bugs and product risks in the organisation.

The Challenger accident was not caused by a single individual who failed to detect or report a problem.
The accident was made possible by systemic factors, i.e. factors outside the control of any individual in the programme. Eventually, everyone fell into the trap of relying on what seemed to be “good experience”. The facts should have been taken seriously.
A root cause analysis should never identify only individual and concrete factors, but also the systemic factors which enabled the problem to survive into production.
Chapter 6 of the Challenger report reminds me that, when something goes wrong in production, performing a root cause analysis is a bigger task than just finding out the chain of events that led to the problem.
Many thanks to Chi Lieu @SomnaRev for taking the time to comment on early drafts of this post.

Photo of the space shuttle Challenger accident Jan. 28, 1986. Photo credit: NASA