The images of the seemingly generous president confidently and theatrically handing power back to the people at his inauguration were a theme on social media. He was compared to a certain tyrant from a Batman film who used the same words and phrases. It is hilarious and frightening at the same time. Time will tell what happens in the USA now.
When I think about freedom and responsibility, the relationship to my fellow human beings always intrudes and muddies the picture of the wet dream of being closest only to myself. The dream becomes a nightmare. For who wants to be in full control of their own life and thereby of everything around them? Would you not then be entirely alone with it all, and thus utterly, utterly lonely?
So I must immediately return to the thought that my freedom can only live in community with my fellow human beings, and from that seems to arise the necessity of something "giving", not only in the service of a greater cause, or merely for the sake of the collective.
I simply cannot feel free without giving.
The theme of the next conversation salon is generosity, because we are curious about what we might discover by talking about it. It is so beautiful and fine, but it is also conceivable, for example, that limits should sometimes be set for it? We do not know.
Registration is mandatory, and so, by the way, is cancelling if you find you cannot come. If we have empty chairs, we would like to be able to offer them to other interested people.
We look forward to seeing you!
Date: 6 March 2017
Time: 16.00 – 18.30
Place: Gjesing Coaching, Prinsesse Charlottesgade 31, kld, 2200 København N
Registration: karengjesing@privat.dk (mandatory, so we can plan the catering)
Warm regards
Karen and Anders
Summary: Last year, in September, I spoke at Anna Royzman's New Testing Conference "Reinventing Testers Week" in New York about testing in Black Swan domains. The title of the talk refers to Nassim Taleb's book "The Black Swan", and the talk concerned testing in contexts where improbable risks can have disproportionate effects. This blog post contains an invitation to a peer conference on the subject in New York on Sunday, April 30th.
Sometimes things happen which appear to be beyond “the possible”.
This awareness haunts us in testing: We aim to get those important bugs to materialize. We want to qualify real and serious risks. Yet, our stakeholders have to accept that no matter how much testing is done, we cannot cover everything.
Logically, testing has to be focused on what is important to the business and what might go wrong in reality. To me, that is at the core of risk-based testing.
But saying that is one thing; doing it in reality is quite another. Certain risks seem almost impossible to qualify through testing. How, for example, should our stakeholders interpret the absence of clear testing results when we are trying our best to dig out good information about quality? Could there be a serious problem lurking? The thought may seem paranoid, but experience shows it is not.
Years ago, I read Nassim Taleb’s “The Black Swan – The Impact of the Highly Improbable”. The book blew my mind and set me on a path to find out what we can do in testing about what he writes about.
The book is about "the random events that underlie our lives, from bestsellers to world disasters. Their impact is huge; they're nearly impossible to predict; yet after they happen we always try to rationalize them." (from the back cover of the 2010 paperback edition)
As an engineer and a human, I think testers and test managers should not give up and leave it to product owners, project managers, or developers to interpret testing results and take care of potential Black Swans. As a tester, I wish to embrace the possibility of Black Swans and do quality testing with the aim of qualifying them.
I think, however, that we need new models in testing. The problem is that most of our techniques and heuristics tend to support us best on the functional testing level.
Accident Focused Testing?
The first part of solving a problem is accepting it. That sounds basic, but acceptance implies understanding what we are dealing with. Reading Taleb's book convinced me that we have to accept the fact that really bad things can happen in the world. Knowing what I do about information technology, I appreciate that his philosophy can be applied to technology. I also believe that functional testing alone will not help us much.
Mentally examining what I do as a tester, I understood that the idea of Black Swans is fundamental to the very nature of what we do and the systems we work with.
So much for acceptance.
The problem is that in some contexts – banking, healthcare, industrial plant management, safety systems, the public sector, transportation, etc. – accidents and Black Swans can be of a nature where they cause irrecoverable losses, put lives at stake, or are otherwise fundamentally unacceptable.
Let me give an example:
I recently came across a story of an interesting IT breakdown at a hospital in Sweden. It concerned something most people do on a regular basis: applying the newest updates to our PCs.
As updates were rolled out in the hospital, the performance of the PCs started degrading. During the rollout the problems became worse, and before the rollout could be stopped, all computers in the hospital had become useless.
Once the computers had stopped working, undoing the rollout became extremely difficult and had to be carried out manually, one PC at a time.
In the end it took IT operations several days to get everything back to normal. Meanwhile the hospital had to be run "on paper".
The hospital used an uncommon Windows network configuration, not recommended by Microsoft, which in combination with the Microsoft update triggered a problem in the network. What is interesting here is not the root cause, however: the outcome of a seemingly trivial update to a complex system turned out very badly.
It is easy to imagine how the stress experienced by doctors and nurses due to this situation could have affected patients. Someone could have been hurt.
We can shrug and blame Microsoft or the hospital's IT operations. However, I think that as skilled testers we need to be able to provide some kind of answer as to how we can constructively contribute to hospital safety by qualifying even Black Swan types of risks.
Systemic risks
Before moving on, let me dive into the subject of risk. Risk is something we all talk about, but do we really know what it means? I'm not sure; the concept of risk is in no way simple.
There seem to be at least three "risk domains" in software projects:
- Some risks concern plans and schedules. Will the project be done on time and on budget? These are what we usually call "project risks".
- Other risks concern the product or system under development: Will it do what it is conceived to do? Will it do it correctly? These are called “product risks”.
- Then there is a third class, a class of risks of a different nature: systemic risks. They arise from combinations of systems, users, data, and environments (a small sketch follows this list).
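To make the distinction concrete, here is a minimal, hypothetical Python sketch of how risks might be tagged by domain in a simple risk register. The class names and the example risks are my own illustrative assumptions, not part of any particular method.

```python
from dataclasses import dataclass
from enum import Enum

class RiskDomain(Enum):
    PROJECT = "project"    # plans, schedules, budget
    PRODUCT = "product"    # does the system do the right thing, correctly?
    SYSTEMIC = "systemic"  # emerges from systems, users, data and environments combined

@dataclass
class Risk:
    description: str
    domain: RiskDomain

# A hypothetical mini risk register illustrating the three domains.
register = [
    Risk("Integration partner delivers their API two months late", RiskDomain.PROJECT),
    Risk("Interest calculation rounds incorrectly in leap years", RiskDomain.PRODUCT),
    Risk("Nightly batch load saturates the network shared with ward PCs", RiskDomain.SYSTEMIC),
]

for risk in register:
    print(f"[{risk.domain.value:>8}] {risk.description}")
```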
Black Swans lurk in all three: even simple products or components can sometimes fail in strange ways with huge impact. Just think of the Galaxy Note 7 battery problem: merely a manufacturing defect in the battery, yet one which caused a lot of harm to Samsung.
Black Swans are sometimes annoyingly simple.
But those kinds of Black Swans can be prevented by stricter quality control and similar traditional measures. Project and product risks are usually relatively easy to deal with using appropriate care in the context.
Systemic risks are different. They seem much more troublesome – and in some ways more interesting.
From simple to complex
Back in the early days of computing, I think systemic risks used to be rather uninteresting. Systems were simply… simple. Developing a new product, we would sometimes work to make sure that usability was good, or that the machine which the computer system was designed to control would work as a whole.
But that was it. Interfaces and interactions with other systems and contexts could be counted on one hand, and there were usually very few connections to other computer interfaces.
If you have been interested in risk in software, you may have read about the Therac-25 accident. If not, let me summarize: a difficult-to-find multitasking bug in the control software of a radiation therapy machine turned out to be the root cause of apparently random radiation burns of cancer patients placed in the machine for treatment. Some of these burns were fatal.
Obviously a Black Swan: a difficult-to-find bug in a flawed design.
The system was simple, however, as there were only four components in it: the user, the user interface software, the machine control software, and the machine itself. Of course there were also the patients, the victims of the accidents, but they were only victims, receivers of the problem. (Some victims did attempt to provide feedback, though.)
The issue turned out to be a simple multitasking problem: experienced operators, fast on the keyboard used to control the machine, could cause the software to enter illegal states. In other words, a software engineering problem.
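The mechanism of that kind of bug is easier to see in miniature. Below is a minimal, hypothetical Python sketch, not the actual Therac-25 code, of how an operator task and a control task sharing unsynchronized state can race each other into an illegal combination; the variable names, the simplified dictionary state, and the sleep timings are purely my own illustrative assumptions.

```python
import threading
import time

# Shared, unsynchronized machine state: the operator task edits it
# while the control task reads it, with no lock around the two fields.
state = {"mode": "xray", "intensity": 25000}

def operator_edit():
    # A fast operator switches to electron mode; the intensity should
    # be lowered before the beam fires, but for a short window only
    # one of the two fields has been updated.
    state["mode"] = "electron"
    time.sleep(0.001)            # keystroke delay: the race window
    state["intensity"] = 200

def control_task():
    # The control task reads both fields without synchronization, so
    # depending on timing it may observe electron mode combined with
    # x-ray intensity: an illegal state that only appears sometimes.
    time.sleep(0.0005)
    mode, intensity = state["mode"], state["intensity"]
    if mode == "electron" and intensity > 1000:
        print("ILLEGAL STATE: electron mode at x-ray intensity", intensity)
    else:
        print("consistent state:", mode, intensity)

t1 = threading.Thread(target=operator_edit)
t2 = threading.Thread(target=control_task)
t1.start(); t2.start()
t1.join(); t2.join()
```

Run it a few times and the outcome varies with scheduling, which is exactly what makes this class of bug so hard to reproduce through scripted functional testing.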
Today, however, complexity is increasing. To me at least, it seems our industry has crossed a boundary: the number of components that work together in complex ways to realize important business functionality has grown significantly. While counting can never tell the truth, it is worrying that modern systems can comprise tens, even hundreds of components that are assumed to work seamlessly together on sunny days. Often no one knows what will happen when the sun goes away and the rain comes, so to speak.
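A rough feel for why those numbers worry me, even though counting never tells the whole truth: the count of possible pairwise interfaces alone grows quadratically with the number of components, and that is before ordering, timing, data, and environment are taken into account. A back-of-the-envelope sketch in Python, under the optimistic assumption that only pairwise interactions matter:

```python
from math import comb

# Possible pairwise interfaces between n components, ignoring
# ordering, timing, data and environment (all of which make the
# real picture far worse).
for n in (4, 10, 50, 100):
    print(f"{n:>3} components -> {comb(n, 2):>5} possible pairwise interactions")
```

Four components give 6 pairs; a hundred give 4,950, and nobody tests them all.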
Systemic risk in IT systems is no longer something that can be excluded from risk analysis and managed technologically.
So why are we not spending more time testing based on systemic risk analyses?
Explore the whole picture
Some readers might think of the Cynefin framework, and yes, I think it certainly appears promising as Cynefin provides a thought framework for understanding complex and complicated systems.
I went by a different path, however, when I explored the situation: I looked at safety engineering and mechanical safety analysis. I can recommend two books in particular:
- Normal Accidents by sociologist Charles Perrow
- Human Error by psychologist James Reason
(In a later blog post, I'll come back to what I found in these two books, but you can get a peek at it in the presentation recording at the bottom of this blog post. I'll certainly also be coming back to Cynefin, as it seems promising.)
But there might be a bigger problem to address too, as a management problem seems to be worsening the situation: testers very often do not receive sufficient freedom to test the "big picture".
When did you last hear of a tester tasked with testing a product in complete integration with real users over a long period? I'd like to hear about examples of it, because very often, when I talk to people, I hear about product owners, project managers, or C-level managers tightly deciding and controlling what should be tested.
And risk reporting to the rest of the organization is filtered through these levels.
Focus is too often only on going live on time, on schedule, no matter what, and too seldom on qualifying complex or systemic risks.
I think testers should be tasked to explore the dynamics of the product in contexts resembling the real world.
Speaking about testing in a Black Swan Domain
I spoke about this for the first time at the first Let's Test conference in Stockholm in 2012 (slides – PDF), and a second time in September 2016 at the New Testing Conference during "Reinventing Testers Week" in New York. Scroll down to see a recording of the latter presentation.
The feedback I received at those two events has confirmed to me that this is a subject that needs exploration. Our craft can be advanced to go below the functional, performance, or usability perspectives. New models in testing, heuristics and even types of testing strategies can be developed, I think.
Going it alone can be difficult, and I'm therefore extremely grateful to have received moral backing from both Michael Bolton and Fiona Charles. Additionally, Anna Royzman has agreed to co-host a peer workshop on the subject with me in New York in connection with her May conference.
I find New York an interesting place for a few reasons:
- It is where I talked about the subject last time.
- Nassim Taleb lives in New York.
- It is a very big city, so big that it is difficult to comprehend for someone like me who comes from a little country with less than half its population. New York seems a complex system beyond imagination.
- It is the world's financial centre, and some of the systems running it are extremely complex. I try not to think about what types of systemic risk they manage on a daily basis.
If you are interested, feel you have something to contribute, have time, etc., it would be great to see you at the first WOTBLACK: Workshop on Testing in Black Swan Domains on Sunday, April 30th in New York.
The objective?
Advance the testing craft by co-developing and sharing models, heuristics, and strategies.
Write me an e-mail if you're interested in participating, or ping me on Twitter if you feel you have something to share now or wish to start a discussion about the subject.
On Tuesday, January 24th, we will be hosting a meetup on Risk Based Testing Strategy under the Ministry of Testing Copenhagen meetup group in Herlev.
Make sure you register as a member of the Ministry of Testing Copenhagen meetup group to stay tuned when new meetups are announced.
Also, don’t miss the Ministry of Testing home page to learn about other meetups, TestBash, news, and lots of useful testing resources.