The pervasive but intangible nature of Black Swans means that practical testing with the aim of demonstrating actual problems will probably either not yield any useful evidence, or consume so many resources proving a point that one risks missing the point itself completely.
This was the fundamental problem I was facing when I finished reading Taleb's book and wanted to apply his philosophy to actual testing. I realised that I needed a model, and the Skype incident of December 2010 led me in the right direction.
The model that I’ve come up with is that Black Swans are system errors. This may not hold in all circumstances, but it’s a good model, and it’s helping me come up with solutions to the testing problems.
Unfortunately, treating Black Swans as system errors also means that instead of seeing Black Swan Testing as a practical testing activity, I’m moving it to a meta level, where the ‘root causes’ are of a more abstract nature and often not directly observable.
In my speech here at Let’s Test yesterday, I introduced three classes of system attributes and suggested that practical testing, with the aim of learning about potential black swan incidents in a system, should focus on these attributes:
- Complex versus Linear Interactions
- Tight versus Loose Couplings
- Barriers

The first two come from the work of sociologist Charles Perrow, in particular his book Normal Accidents; the third one, barriers, I owe to psychologist James Reason, author of Human Error. I’ll come back to these attributes in later blog posts, but for now you just have to accept them as system attributes that play a part in system errors and Black Swans.
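As a software illustration of the coupling attribute (my own sketch, not something from Perrow or the talk; all names here are hypothetical): in a tightly coupled pipeline, a transient fault in one stage propagates straight to the caller, while a loose coupling such as a retry acts as a barrier that absorbs it.

```python
# Sketch: tight vs. loose coupling between two processing stages.
# A flaky stage fails on its first call; the loosely coupled variant
# inserts a retry "barrier" that absorbs the transient fault.

class FlakyStage:
    """A stage that fails on its first call, then recovers."""
    def __init__(self):
        self.calls = 0

    def run(self, x):
        self.calls += 1
        if self.calls == 1:
            raise RuntimeError("transient fault")
        return x * 2

def tightly_coupled(x):
    # No barrier: the stage's fault propagates straight to the caller.
    return FlakyStage().run(x)

def loosely_coupled(x, retries=2):
    # Retry barrier: transient faults are absorbed, not propagated.
    stage = FlakyStage()
    for attempt in range(retries + 1):
        try:
            return stage.run(x)
        except RuntimeError:
            if attempt == retries:
                raise

try:
    tightly_coupled(21)
except RuntimeError as e:
    print("tight coupling failed:", e)   # the fault reaches the caller

print("loose coupling result:", loosely_coupled(21))  # barrier absorbed it
```

The point of the sketch is that the loose coupling buys time and slack: the same fault occurs in both variants, but only the tightly coupled one turns it into a visible failure.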
But we’re at a conference with all sorts of things going on: My presentation was well received, the discussions afterwards were great, but Let’s not just talk… let’s do it, Let’s Test.
I think James Lyndsay and I got the idea at about the same time yesterday: Let’s take Black Swan Testing into The Test Lab.
So I did, and it was great. I had a great team of very brave testers, and the mission was clear: Find indications of Black Swans, look for tight couplings, complex interactions, and barriers.
Did we succeed? Not really. But it was loads of fun and we learned a lot!
In particular, I learned that while I think I have a very good idea of what Black Swan Testing is, I need to work on the practical aspects: Making useful charters, coaching and teaching testers efficiently on the subject, reporting… Black Swan testing must be communicated and operationalized.
That’s the next problem I’m going to address.
Gosh, I’m feeling a bit used… The schedule is really tight here and the party yesterday was great. Lots of input! Plus, I was really focused on my presentation today, so I spent a bit of extra “cognitive energy” on that ‘output’ process… Here’s what I’ve done today:
6-7: Wake up
7-8: Walk in nature (found some frisbee-throwing creatures…)
9.30: Rob Sabourin keynote (really good!)
11.30-12.30: Presenting Testing in the Black Swan domain
12.30-13.30: Lunch, having a very interesting talk with Rob Sabourin and others
13.30-14.30: Curing our Binary Disease with Rikard Edgren (inspiring, we’re all growing older and maturing, I guess)
14.40-15.40: You are a scientist with Christin Wiedemann (I loved it – I’m a scientist too)
15.40-16: Coffee, cookies, smoothies (Runö is such a great venue!)
16-17: Coaching Testers with Anne-Marie Charret (inspiring!)
My presentation was really well received and we had a great discussion afterwards. I love the discussions we are having here! What better place could I have found to first present my ideas about Black Swan testing than here among people who share the passion for testing, trying things, forming hypotheses and learning?
We’ll be doing experiments in the Test Lab tonight at 20, so discussions aren’t over yet. I love it!
Here are a few selected photos from today. I may do a writeup about the day later today or tomorrow. Maybe.
My head feels a little dizzy, but outside my hotel window the sun is again shining from a clear blue sky, little birds are singing, and it’s looking to be a wonderful day today. Runö, the conference venue, is a really nice place – and so is Let’s Test! If you’re here, you will know what I mean; if not, you just have to believe me: Never have I been with such a friendly and bright bunch of people! I love it!
After the opening keynote yesterday, I went to work on my presentation for today. I was almost done, but then the team (that’s me!) decided to refactor everything! Oh dear! But it had too many slides, and they just didn’t work well together, and that’s a showstopper, right?
The good news is that I got the presentation fixed: The refactoring succeeded! Thanks to friends and colleagues for allowing me to reflect with you on the subject (there’s the friendly thing again!).
The bad news is that I had to skip the tutorial I planned to go to.
I didn’t skip tutorials completely, though, as I was very kindly allowed to jump in on Henrik Andersson and Leo Hepis’ tutorial “Now, what’s your plan?”, which started at 3 pm.
I’m sure all the workshops were excellent, but this was really, really cool: During the 3 hours, we got to develop test strategies for our testing team, incorporate really challenging context changes, learn about what context is, and discuss both our own and other teams’ approaches and solutions to the challenges we were put through.
Normally, context is something which is “just there”. As a team member, I’m often not given all the needed knowledge about context, but I still have to relate to it anyway and develop my own test strategies, or, when I’m given management responsibilities, the strategy of the whole team. Still, the context is shaping my strategy, and it does so in many ways. And then we have context change: Things aren’t static, right? Although we all prefer working in stable environments, things do often change: Sometimes for the better, sometimes for the worse, sometimes just to something different. The point is that we cannot disregard context changes, since they affect us whether we want it or not.
How do teams react to context changes? I observed at least four different “reaction modes” within the teams during the workshop:
- “Ok, what’s this?”: It’s a completely new situation and there are no prejudices or previous context to take into account. This is fun and generally feels good.
- Resistance, chaos, integration, new status quo – i.e. all of the phases of the Satir Change Model. This can be a difficult process, especially if the team resides in the “chaos” phase for long.
- Relief: A context change clears everything up, and the project can go from “problem fixing” to “solution mode”. This feels very good too.
- Panic: The context change is sudden and feels like a bomb has been dropped in the middle of the project, threatening to blow everything up: The team panics. Hopefully, the bomb can be defused and the panic can be cleared.
We also discussed what context actually is. Some of the definitions that came up:
- The variables which significantly influence the task
- Those aspects of the total environment that seem important/relevant
- Context is anything that changes my model
What’s your definition?
As we approached Arlanda, I noticed something I recognised: I had checked out the conference venue on Google Earth at home, and here was something that looked familiar:
“Can I get off here, please?”, I asked.
“Sorry no, we’re out of parachutes”, the air hostess replied.
Ok, I could live with that: Esko Arajärvi was flying in from Finland, and thanks to the excellent travelling page of the conference, he had offered me a ride in the car he rented, and he was waiting for me. I love when things go as planned!
Runö is a very beautiful place. “Is this heaven?” Michael Bolton asked in his opening keynote, but then said no, since his family wasn’t here. Good point!
Michael was as good as ever, telling us that while everyone has a desire for certainty, we’re fundamentally incompetent whenever we do something new. Testing is fundamentally about acceptance: acceptance that we’re fallible, incompetent, etc., but we can learn from failure.
Mature people don’t try to get rid of failure, they manage it. So do we. We know that failure is an important part of the process and the trigger for us is to find out what’s wrong and make it better.
In fact, certainty can be damaging, and “a key part of our service is to reduce unwarranted and potentially damaging certainty about the product.”
I’m taking a break now. I’m still working on my presentation for tomorrow; I work best under pressure, and the pressure is building up! Besides, Michael gave me more good inspiration which I have to process!
One thing you don’t have to worry about if you are NOT going to Let’s Test is how to get there. For family reasons, I had to wait until Monday morning to travel, and getting there from Copenhagen means getting up very early. It was 4 in the morning when I got up. 5 hours later, I’ll be listening to Michael Bolton’s opening keynote 🙂
So what am I expecting?
First of all: Good discussions about issues in testing. Testing requires reflection and thinking, and I need the input to stay razor sharp as a tester 😉
Second: Get great feedback on my ideas and thoughts about how we work with Black Swans in testing. We need better heuristics to sharpen our focus, and I’m hoping – no: expecting – that my ideas coupled with the feedback from conference participants will teach me and everyone else more on the subject.
SK400 is leaving in 30 mins now. I think it’s time for a cup of coffee!
It’s time to try something new for me: I have decided to do live blogging from the Let’s Test conference on context driven testing next week. I’m mentally almost 100% focused on my own presentation on Tuesday morning, but I’ll try to keep up with as much as possible and share it here with those who can’t be there in person – yes, I’m thinking first and foremost of my friend Jesper Ottosen @jlottosen