Categories
Blog posts in English

Playful Software Testing

Back in September I met Jessica Ingrassellino in New York and enjoyed a very good conversation with her. Jessica presented a workshop on playful testing during Reinventing Testers Week (I presented at the conference about “Testing in a Black Swan Domain”, which, unfortunately, I have not had time to write about yet).
We talked mostly about philosophy.
Jessica is quite a multi-talent: She plays the violin virtuosically, is a trained music teacher, has switched careers to testing, taught herself Python, authored a book on Python programming for kids, and teaches both Python classes and music classes at a local community college.
She has a vision of making testing playful and fun.
Structured work governs testing in professional settings, and structure has little to do with play. So why is play important?
Jessica puts it this way:

When the power of play is unleashed in software testing, interesting things happen: The quality of the testing performance becomes noticeably better, and the outcomes of it too. This results in better software systems, higher product quality.

I have a product engineering background and play is important for me too. Engineers have methods, calculations, and procedures, but great engineers know that good solutions to problems are not found by orderly, rational processes. Good solutions depend on creativity and play.
Friday December 9th, I met with Mathias Poulsen in Copenhagen. Mathias is the founder of CounterPlay, a yearly conference and festival on serious play in Aarhus, the second largest city in Denmark.
About three years ago, Mathias got the idea for the conference.
In the first year, 2014, it was an immediate success with more than 20 talks and workshops in 3 tracks on “Playful Culture, Playful Learning, and Playful Business”, and more than 150 participants. This year (2016), the conference had 50 scheduled sessions: keynotes, talks, workshops, mini-concerts and open sessions.
Mathias explains (about 0:30 into the video):

Counterplay is basically an attempt to explore play and being playful across all kinds of domains and areas in society. We are trying to build a community of playful people around the world to figure out, what does it mean to be playful and why do we think it is beneficial?

Professional IT has so far not been represented at the conference, Mathias told me. I found that a bit surprising, as almost everything in IT at the moment seems to be buzzing with concepts promising joy and fun – play.
Sometimes, however, there is an undertone to all the joy. Agile and DevOps have become popular concepts even in large corporations, and to me, both strive to combine productivity with playfulness. That is good.
But is the switch to Agile always done in order to pass power to developers and testers, allowing them to playfully perform, build and test better solutions? No, not always.
Play facilitates change and the breaking of unhelpful patterns, but sometimes play is mostly a cover for micromanagement. There is a word for this. In a recent blog post, Mathias talks about playwashing:

Playwashing describes the situation where a company or organization spends more time and money claiming to be “playful” through advertising and marketing than actually implementing strategies and business practices that cultivate a playful culture in said organization.

The question is therefore how we genuinely support play. Are there methods or processes that better accommodate playfulness at work?
I believe there are. Processes need to leave space for exploring context, sharing knowledge, and actually interacting with customers, stakeholders and team members.
But processes and methods will not do the job alone. In fact, putting play under the lens of psychology or cognitive science will never grasp what play really is.
Play is more like music and poetry, where ideas based on assumptions about order, rational choice, and intention explain nothing.
Philosophy, and especially the dialectical exploration of what it means to be a playful human, is much better at embracing what play means to us and how to support it.
Jessica and I are working on a workshop about playful and artful testing. It will combine ideas of playful testing with philosophy.
We are certain that breaking out of patterns will help testers, and that breaking out of our own patterns by participating in a conference fully devoted to play will teach us a lot.

I took this photo in the local forest on a walk with our dog Terry (the black poodle). It is obvious, when dogs play well, that they have fun and learn a lot through play. Play seems a fundamental capacity for mammals.


Why you should do your job better than you have to

Software testers evaluate quality in order to help others make decisions that improve quality. But it is not up to us to assure quality.
Projects need a culture in which people care for quality and worry about risk, i.e. threats to quality.
Astronaut and first man on the moon Neil Armstrong talked about the reliability of components in the spacecraft in the same interview I quoted from in my last post:

Each of the components of our hardware were designed to certain reliability specifications, and far the majority, to my recollection, had a reliability requirement of 0.99996, which means that you have four failures in 100,000 operations. I’ve been told that if every component met its reliability specifications precisely, that a typical Apollo flight would have about 1,000 separate identifiable failures. In fact, we had more like 150 failures per flight, substantially better than statistical methods would tell you that you might have.

Neil Armstrong not only made it to the moon, he even made it back to Earth. The whole Apollo programme had to deal very carefully with the chance that things would not work as intended in order to make that happen.
In hardware design, avoiding failure depends on hardware not failing. To manage the risk of failure, engineers work with reliability requirements, e.g. in the form of a required MTBF – mean time between failure – for individual components. Components are tested to estimate their reliability in the real system, and a key part of reliability management is then to tediously add all the estimated relibility figures together to get an indication of the reliability of the whole system: In this case a rocket and space craft designed to fly men to the moon and bring them safely back.
But no matter how carefully the calculations and estimations are done, it will always end out with an estimate. There will be surprises.
The Apollo programme turned out to perform better than expected. Why?
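The arithmetic behind Armstrong’s figures is worth sketching. This is a back-of-the-envelope check based only on the numbers in the quote (not official NASA data); the implied number of component operations per flight is derived from his “about 1,000 failures” estimate:

```python
# Back-of-the-envelope check of Armstrong's figures (a sketch based on
# the quoted interview only, not official NASA reliability data).

reliability = 0.99996            # per-operation reliability requirement
p_failure = 1 - reliability      # i.e. 4 failures per 100,000 operations

# Armstrong: meeting spec exactly would give ~1,000 failures per flight,
# which implies roughly this many component operations per flight:
expected_failures = 1000
implied_operations = expected_failures / p_failure
print(f"Implied component operations per flight: {implied_operations:,.0f}")
# roughly 25 million

# Observed: ~150 failures per flight, i.e. components beat their spec.
observed_failures = 150
print(f"Observed failures vs. spec prediction: "
      f"{observed_failures / expected_failures:.0%}")
# about 15% of the predicted failure count
```

In other words, components performed roughly six to seven times better than their specification required – which is exactly the margin Armstrong attributes to people doing their jobs better than they had to.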
When you build a system – whether an IT system or a spacecraft – how do you ensure that things work as intended? Following good engineering practices is always a good idea, but relying on them alone is not enough. It takes more.
Armstrong goes on in the interview (emphasis mine):

I can only attribute that to the fact that every guy in the project, every guy at the bench building something, every assembler, every inspector, every guy that’s setting up the tests, cranking the torque wrench, and so on, is saying, man or woman, “If anything goes wrong here, it’s not going to be my fault, because my part is going to be better than I have to make it.” And when you have hundreds of thousands of people all doing their job a little better than they have to, you get an improvement in performance. And that’s the only reason we could have pulled this whole thing off.


Bohr on testing

When Niels Bohr and his team of geniuses at his institute in Copenhagen developed quantum physics, they fundamentally changed science by disproving an assumption dating very far back: that everything happens for a reason. In science, building and verifying models that predict events based on knowledge of variables is an important activity, and while this is still meaningful in certain situations, quantum mechanics proved that at the microscopic level of atoms and particles, things don’t happen for a reason. Put another way: you can have effects without causes. In fact, at this level effects don’t have causes as such.
This means that in particle physics, you cannot predict precisely what’s going to happen, even if you know all variables of a system. Well in fact you can, but only if you’re prepared to give up knowing anything about when and where the effect will happen.
This is counterintuitive to our everyday understanding of how the world works. But there’s more: according to quantum physics, it is impossible to separate knowledge of the variables of a system from the system itself. The observation of the system is always part of the system, and thus changes the system in an unpredictable way.
If you find this to be confusing, don’t be embarrassed. Even Einstein never accepted this lack of causality.
Bohr was a great scientist, but he was also a great philosopher. He thought a lot about what this lack of causality, and the inseparability of observation from events, could teach us about our understanding of nature. On several occasions he pointed out that even on the macroscopic level, we cannot ignore what is happening at the atomic and particle level: first of all because quantum physics did away with causality as a fundamental principle, but also because quantum effects are in fact visible in our daily macroscopic life. He used the example of the eye being able to react to stimuli as small as a single photon, and argued that the organism very likely contains other such amplification systems, where microscopic events have macroscopic effects. In some of his philosophical essays he points out how psychology and quantum mechanics follow similar patterns of logic.
So does testing. In software testing we are working to find out how a computer system works. Computers are algorithmic machines designed so that randomness is eliminated and data can be read without being affected, but the programs are written by humans and used by humans, so the system in which a computer program is used is both complex and inherently unpredictable.
We’re also affecting what we’re testing. Not by affecting the computer system itself, but by affecting the development of the software by discovering facts about the system and how it works in relation to other systems and the users.
In some schools of software testing, the activity is reduced to a predictable one: some advocate having “a single point of truth” about what is going to be developed in an iteration, and hold that tests should verify that the implementation is correct – nothing more. They believe that it is possible to assemble “all information” about a system before development starts, and that any information not present is not a requirement and as such should not be part of the delivery.
That is an incorrect approach to software engineering, and to testing in particular. Testing is much more than verification of implementation, and the results of testing are as unpredictable as the development process itself. We must also remember that it is fundamentally impossible to collect all requirements for a product: we can increase the probability of getting a well-working system by collecting as much information as possible about how the system should work and how it actually works (by testing), and comparing the two, but the information will always be fundamentally incomplete.
Fortunately, that does not make us stupid: it is consistent with quantum physics.
Studying the fundamental mechanisms of nature can lead to a better understanding of what we are working with as software engineers and as software testers in particular.

My son Jens at Tisvilde beach, where Niels Bohr spent a lot of time with friends, family and physicists


Code is Poetry – But What About Testing?

WordPress used to have a tagline saying “code is poetry”; apparently Microsoft has used it too. I don’t know who came up with it first, but I agree. I wrote my first piece of BASIC code almost 30 years ago, my first machine code program more than 25 years ago, and I learned C about 25 years ago as well. Code can be as meaningful and meaningless as poetry, so the analogy is correct. To me at least.
Now, many years later, I’m a professional tester. I find problems with other people’s business processes, architecture, software designs, and implementations. I don’t write code any more (except when I’m using it to test something).
But where’s the poetry? Is testing essentially a non-poetic activity?
I don’t think so. Yet, it doesn’t make sense to say “tests are poetry”. They aren’t. Then, what are they?