Categories
Posts in Danish

After 15 years as a freelancer, I'm not afraid to doubt myself

These days mark 15 years since I took the leap and went freelance. I haven't regretted it!
I've been hired in as the expert who makes the complicated simple and solves problems. Mostly on long contracts, but always as a free agent. I actually love it!
Consulting work takes plenty of love: love for the problems that need solving, and love for the people who have the problems. Yes, and for the client. Less glamorous things come with it too: contracts, invoicing... that sort of thing. They're part of the game.
The game also includes an expectation of performance: that we can step in quickly and "deliver the goods" without missing a beat.
Humility is actually incredibly important. Because, hand on heart, consultants are far from perfect and certainly not infallible.
The specialist role and the expectation of flawless performance must never lead to walking around with your nose in the air. I get a little embarrassed when I occasionally meet another consultant with an attitude suggesting they are universal experts who always know best.
I think I'm reasonably good at avoiding that attitude myself. It helps that I'm regularly reminded of some of the mistakes I've made. After 15 years in the role, I've lost count of how often I've failed at a task. Embarrassing, but true. And now I've said it!
The classic embarrassing situation for me as a tester is a "bug slip": the client wants to verify that the system we're working on shows exactly a certain result, and I'm hired in to document the quality of the system before we take it into production.
I'm the testing expert with insight into the technology and the project. I carry out the order. It looks fine. We keep to the plan. All is well.
But then a defect is reported in production, and an obvious problem at that, one I simply overlooked when I tested.
In a situation like that, it's not pleasant to be in my shoes. Phew, I remember every single time it has happened, and it's more than once! Unfortunately, it comes with the tester's job. I try to set expectations about it in advance, but it's never fun.
That situation, and other failures I've had a part in, have taught me that while using your experience and expertise certainly matters, being able to doubt yourself matters too. Yes, doubt: knowing that expertise is often far from enough to guarantee success.
Sometimes it's precisely the expertise that stands in the way of doing a good job.
One general thing I've been thinking about (but haven't finished thinking through) is that we should all get better at improvising. That is, improvising honestly and becoming skilled at it: failing in a controlled way, observing what we can learn, then doing better, failing a little less, evaluating, doing much better still.
In other words, getting better at not letting good past experience blind us and thereby missing the obvious.
In any case, I believe it's a quality when I, as a consultant, bring doubt along to work, like a good friend who helps me do my best. And I believe it's a quality when I share that doubt constructively, so that together we can use it to do our best.
Expertise and experience still matter. But we must never forget the doubt.
By the way, I feel ready for 15 more years. Maybe I'll see you out there! And don't be surprised if I turn out to be the expert who doubts.

Categories
News in English

January 24th: Meetup on Risk Based Testing Strategy

On Tuesday January 24th, we will be hosting a meetup on Risk Based Testing Strategy under the Ministry of Testing Copenhagen Meetup group in Herlev.
Make sure you register as a member of the Ministry of Testing Copenhagen meetup group to stay tuned when new meetups are announced.
Also, don’t miss the Ministry of Testing home page to learn about other meetups, TestBash, news, and lots of useful testing resources.

Categories
Blog posts in English

Why you should do your job better than you have to

Software testers evaluate quality in order to help others make decisions to improve quality. But it is not up to us to assure quality.
Projects need a culture in which people care for quality and worry about risk, i.e. threats to quality.
Astronaut and first man on the Moon Neil Armstrong talked about the reliability of components in the spacecraft in the same interview I quoted from in my last post:

Each of the components of our hardware were designed to certain reliability specifications, and far the majority, to my recollection, had a reliability requirement of 0.99996, which means that you have four failures in 100,000 operations. I’ve been told that if every component met its reliability specifications precisely, that a typical Apollo flight would have about 1,000 separate identifiable failures. In fact, we had more like 150 failures per flight, substantially better than statistical methods would tell you that you might have.

Neil Armstrong not only made it to the moon, he even made it back to Earth. The whole Apollo programme had to deal very carefully with the chance that things would not work as intended in order to make that happen.
In hardware design, avoiding failure depends on hardware not failing. To manage the risk of failure, engineers work with reliability requirements, e.g. in the form of a required MTBF (mean time between failures) for individual components. Components are tested to estimate their reliability in the real system, and a key part of reliability management is then to painstakingly combine all the estimated reliability figures to get an indication of the reliability of the whole system: in this case a rocket and spacecraft designed to fly men to the Moon and bring them safely back.
But no matter how carefully the calculations and estimations are done, the result will always be an estimate. There will be surprises.
The Apollo programme turned out to perform better than expected. Why?
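To make the expectation in the quote concrete, here is a small back-of-the-envelope sketch. Note that the number of component-operations per flight is my own assumption, back-solved from Armstrong's 1,000-failure figure; it is not a number from the interview:

```python
# Reliability figure from Armstrong's quote: 0.99996,
# i.e. 4 expected failures per 100,000 operations.
component_reliability = 0.99996
failure_rate = 1 - component_reliability

# Hypothetical assumption: roughly 25 million component-operations
# per flight (back-solved from the quoted 1,000 expected failures).
operations_per_flight = 25_000_000

expected_failures = operations_per_flight * failure_rate
print(round(expected_failures))   # about 1,000 expected failures per flight

# The observed figure from the quote: about 150 actual failures,
# several times better than the statistical expectation.
observed_failures = 150
print(round(expected_failures / observed_failures, 1))
```

The point of the arithmetic is only that even excellent per-component reliability, multiplied across an enormous number of operations, still predicts many failures.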
When you build a system, whether an IT system or a spacecraft, how do you ensure that things work as intended? Following good engineering practices is always a good idea, but relying on them alone is not enough. It takes more.
Armstrong goes on in the interview (emphasized by me):

I can only attribute that to the fact that every guy in the project, every guy at the bench building something, every assembler, every inspector, every guy that’s setting up the tests, cranking the torque wrench, and so on, is saying, man or woman, “If anything goes wrong here, it’s not going to be my fault, because my part is going to be better than I have to make it.” And when you have hundreds of thousands of people all doing their job a little better than they have to, you get an improvement in performance. And that’s the only reason we could have pulled this whole thing off.

Categories
Blog posts in English

Do it right: A value in Context Driven Testing

The problem with me is that I’m really bad at following instructions. When people tell me to do something in a certain way, I do it differently.
It’s a problem when I cook, because I‘m not particularly good at cooking. So I have to follow recipes, and I often mess them up slightly. I’m improving and learning strategies to remember, but this is a fundamental personality trait of mine.
And not one I’m sorry about. Because it’s not a problem when I test!
I always wanted to be a great tester.
I tend to become really annoyed with myself when a bug turns up in production in something I have tested. ”Why did I miss that?!” I feel guilty. You probably recognise the feeling.
The feeling of guilt is ok. The fact that we can feel guilt proves that we have consciousness and empathy and that we do the best we can. People who don’t care don’t feel guilt.
But in testing, finding every bug is fundamentally impossible, so we have to get over it and keep testing. Keep exploring!
Even before I learnt about Context Driven Testing, I knew that great testing could never be about following test scripts and instructions. I noticed that I got bored and lost attention when I executed the same test scripts over and over again, but I also noticed that I missed more bugs when I only followed the instructions.
So I stopped following the instructions. This gave me an explanation problem, however: “Uhh, well I didn’t do quite what I was asked to do…. but hey you know, I found these interesting bugs that I can show you!”

Can you hear it? That won’t impress old-school project managers with spreadsheets to check and plans to follow.

Context Driven Testing has helped me greatly in becoming a better tester. The thing is that CDT teaches me to do a great testing job without instructing me exactly what to do. Instead, the community shares a lot of heuristics I can use to help me do a great testing job, and through thinking-training and inspiration from others, it helps me develop my capacity to do great testing in whatever context I’m working in.
It may be a bit difficult to grasp at first. A little worrying, perhaps. But CDT is a really, really powerful testing approach.

And CDT has helped me explain what I do to my project managers. Even the old-school types!

Social services and testing

A few days ago, I read an article about quality in social services in which the following statement from the vice president of the Danish social workers union caught my attention: ”It’s about doing it right, not about doing the right things.” He was speaking of how the authorities try to help vulnerable children and their families.
The statement resonated with me, and a bit later it occurred to me that it even sums up what CDT is about:
Context Driven Testing is about doing it right, not about doing all the right things.
Note that I’ve added the word ‘all’ here.
There’s more to CDT of course, but this is the core of CDT – to me. Some readers may raise their eyebrows at the ”doing it right” part: ”Does he mean that there is ONE right way to do it?” Read on…
For the past 10 years, I’ve worked in contexts where CDT is not really the thing we do, and if I had been exactly as context driven as the managers in my context asked me to be, I would not really have been context driven. You get the picture, I’m sure.
But with CDT in my briefcase, I can work to make changes and improve things in the specific context in which I’m working. As a consultant and contractor, I’m usually hired to fix a problem, not to revolutionize things, and I’m expected to do a “good job” fixing it.
”Doing it right” is of course about doing a good job, and ”doing a good job” should really not be taken lightly. CDT helps me do a good job, even when I’m not working in contexts that actively support CDT. That’s because CDT is flexible: it emphasizes that testing should never be driven by manuals, standards or instructions, but by the actual context in which it takes place, and that’s actually quite difficult to disagree with, even for old school project managers!

Further, if my context (i.e. my project manager) asks me to do something, I do it, even if it’s in a standard. Sometimes there’s a good reason to use a standard.

Context driven testing is not defined by any methods or even by a certain set of heuristics. Nor is it defined by a book, standard or manual. Neither is there any formal education, certification or ”club” that you have to be a member of in order to call yourself Context Driven.
Instead, Context Driven Testing is about committing oneself to some core values, and to me the most important value in CDT is contained in the sentence:
It’s about doing it right, not about doing all the right things.

Why would anyone want anything else?

(Thanks to @jlottossen for reviewing this post and for suggesting several changes to the original version.)

Categories
Blog posts in English

Speaking to Management: Coverage Reporting

Test coverage is important. In this post, I will reflect on communication issues around test coverage.
The word coverage has a different meaning in testing than in daily language. In daily language, it refers to something that can be covered and hidden completely: if you hide under a cover, it usually means that we can’t see you, and if you put a cover on something, the cover will keep things out.
Test coverage works more like a fishing net. Testing will catch bugs if used properly, but some (small) fish, water, plankton etc. will always pass through. Some nets have holes through which large fish can escape.
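As a hypothetical illustration of the fishing-net point (the function and its bug below are invented for this sketch): a test suite can execute every line of the code under test, report 100% line coverage, and still let an obvious bug swim straight through the net.

```python
def discounted_price(price, rate):
    # Bug: nothing stops rate > 1, which yields a negative price.
    return price * (1 - rate)

# This single check executes every line of discounted_price,
# so a line-coverage tool would happily report 100% coverage...
assert discounted_price(100, 0.25) == 75

# ...yet the obvious bug escapes: a 150% "discount" pays the customer.
print(discounted_price(100, 1.5))   # -50.0
```

The coverage number is true, but it answers a much narrower question than the one the manager is really asking.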
What’s so interesting about coverage?
When your manager asks you about test coverage, she probably does so because she seeks confidence that the software works sufficiently well to proceed to the next iteration or phase in the project.
Seeking confidence about something is a good project management principle. After all: If you’re confident about something, you are so because you don’t need to worry about it. Not having to worry about something means that you don’t have to spend your time on it, and project managers always have a gazillion other things that need their attention.
The word is the bug
So if confidence comes out of test coverage, then why do managers so often misunderstand us when we talk about coverage?
Well, the word actually means something else in daily language than it does when we use it in testing. So the word causes a communication “bug” when it’s misunderstood or misused.
We need to fix that bug, but how? Should we teach project managers the ”right” meaning of the word? We could send them to testing conferences, ask them to take a testing course, or give them books to read.
That might work, but it wouldn’t solve the fundamental communication problem. It would just move it higher up in the organisational hierarchy.
An educated manager will have the same problem, not being able to make her peers and managers understand what ”test coverage” means. After all, not everyone in the organisation can be testing experts!
STOP mentioning coverage
A good rule of thumb in communication is: When your communication is likely to be misinterpreted, don’t communicate.
I, as a tester, know what test coverage means and, more importantly, what it does not mean, but I cannot expect others to understand it. Thus, if I use the word, I will probably be misunderstood. A simple solution is to stop using the word. So I won’t say sentences like: Our testing has covered some functionality.
What I can say is: We have carried out these tests, and this is what we found.
This will work well until someone asks you to relate your testing to the business critical functionality: Ok, then tell me, how much of this important functionality do your tests cover?
Uh oh!
Stay in the Testing Arena – or be careful
American circuses have enormous tents and two, three or even four arenas with different acts happening at the same time. A project is always going on in different arenas as well: For example we might have a product owner arena, a development arena, a test arena, and a business implementation arena.
Some people play in several arenas: I think most testers have at some point in their career made the mistake of telling a developer how to code. Likewise, we can probably all agree that there’s nothing more annoying than a developer telling a tester how to test.
Confidence belongs in the product owner arena, not in testing. This is because testing is about qualifying and identifying business risks, and since confidence does not equal absence of risks, it’s very hard for us to talk about confidence. And coverage.
This doesn’t mean you can’t move to another arena.
You can indeed look at things from the product owner’s perspective; that’s perfectly ok! Just make sure you know that you are doing it and why you are doing it: you are leaving your testing arena to help your product owner make a decision. Use safe language when you do.
Talk facts and feelings
Confidence is fundamentally a feeling, not a measurable artefact. It’s something that you can develop, but it can also be communicated: Look confident, express confidence, talk about the good stuff, and people around you will start feeling confident.
Look unconfident, express worry, talk about problems, and people around you will start feeling worried.
Testers always develop feelings about the product we’re testing, and we can communicate these feelings.
I know two basic strategies in any type of test result communication:

  • Suggest a conclusion first, then tell them what you’ve done
  • Give them all the dirty details first, then help your manager conclude

Which communication strategy you pick should depend on the context, e.g. your relation with the manager. If everything looks pretty much as expected (whether that’s good or bad), your manager trusts you, and you have good knowledge of the business risks, then I wouldn’t worry too much about serving the conclusion first and then offering details later, mostly to make sure you and your manager don’t misunderstand each other. And that nobody will later be able to claim that you kept silent about something.
But if something is way off, or your manager doesn’t trust you (or you don’t trust her), or people’s lives may be at stake, or you just have no idea what’s happening, then stick to the details and do not conclude. And that, I think, implies not using the term ”test coverage”.

Categories
Blog posts in English

A Sustainable Mission for Context Driven Testing?

This image changed the world. It was taken from Apollo 8 in 1968 and shows the blue Earth rising over the grey and deserted Moon. Our world seems fragile. “The vast loneliness is awe-inspiring and it makes you realize just what you have back there on Earth,” astronaut and Command Module Pilot Jim Lovell said. Image credit: NASA.

I have lately become worried about certain developments in society.
For years, scientists, politicians and others have warned us that we’re responsible for irreversible changes to our planet, climate change most notably. They’re telling us we need to change to sustainable energy sources.
Sustainability is about more than energy, and I’m worried that amid the societal changes imposed on us by the combined effects of globalization and the need for serious resource conservation, we are becoming increasingly indifferent to the lives of certain groups of people. I remember how many people used to develop deep feelings of indignation when pictures of hungry or poor children were shown on TV. That has changed, and such pictures don’t have much effect any more. Worse: we generally don’t even care about poverty close to ourselves.
I feel this may be linked to a macroeconomic pattern we’re seeing almost everywhere in the world: the rich are getting richer, but the poor are still as poor as they used to be. In Southern Europe, we have enormous unemployment among young people. Economists are warning that we are about to lose a whole generation.
Does this affect testers too? After all, we’re safe, working in IT, the technology of the future?
Well, inequalities in income and life conditions are growing on our planet, and this is worrying, since inequality has historically been a trigger of wars and revolutions and has always been damaging to democracy and society as a whole. So yes, I think we have very good reasons to be worried about the future for ourselves, our families and our societies.
James Bach recently published a blog post which has inspired me. Testing is a performance, not an artifact, he says. It made me think about how I differentiate the great testing performance from the poor performance. Is it only a subjective measure (aka ”the music performance was good”), or could there be some objective measures in play?
I think we should judge the testing performance by the artefacts it produces: Knowledge artefacts which are valuable in the business context in which we’re testing, income artefacts to me as a tester, and entertainment artefacts (testing is fun).
But I’ve realised that there is something missing: the performance should also be judged by its contribution to society as a whole. Testing should somehow contribute to sustainability in order to be a meaningful profession for me: social sustainability as well as energy and material sustainability.
This can be taken as a strictly political point of view, and I could choose to act on it by only accepting jobs in socially responsible companies and in organisations which make sustainable products.
But it can also be seen as a mission for our craft as a whole. Just as science has had to face the fact that it is not merely a knowledge producing activity but is changing society through the knowledge it produces, we as testers have to face the fact that the knowledge we produce is applied in certain ways. Being a responsible tester does not mean that I’m only responsible for testing.
Therefore, I think we should take on the endeavour of developing our craft from being just a knowledge producing performance into a wisdom producing performance.
Philosopher Nicholas Maxwell is the author of ”From Knowledge to Wisdom” in which he outlines a revolution in science. In the introduction to the second edition he writes (p 14, second ed. 2007):

There is thus, I claim, a major intellectual disaster at the heart of western science, technology, scholarship and education – at the heart of western thought; and this long-standing intellectual disaster has much to do with the human disasters of our age, our incapacity to tackle more humanely and successfully our present world-wide problems. In order to develop a saner, happier, more just and humane world it is certainly not a sufficient condition that we have an influential tradition of rational inquiry devoted to helping us achieve such ends. It is, however, I shall argue, a necessary condition. In the absence of such a tradition of thought, rationally devoted to helping us solve our problems of living, we are not likely to resolve these problems very successfully in the real world. It is this which makes it a matter of such profound intellectual, moral and social urgency, for all those in any way concerned with the academic enterprise, to develop a kind of inquiry more rationally devoted to helping us resolve our problems of living than that which we have at present.

Should this apply to testing, as well as to “science, technology, scholarship and education”? Yes, it certainly should. Will it be easy to adopt this thinking in testing? No, not at all.
First of all, we shouldn’t start throwing away any of the good things we’ve learnt and developed. Just as the ”scientific method” is still a necessary but not sufficient condition for the progress of science, our values and ideas about great testing are still all-important in testing. They are just not sufficient.
I think we who belong to the Context Driven Testing school are far better equipped than other testing schools to accept the sustainability point of view. After all, we’re already successful in developing testing into a sustainable performance. Other testing schools still struggle with their explicit or implicit underlying short-term profit-making ambitions.
And although we’re obviously playing a polyphonic piece of music, speaking with many voices, not saying or meaning exactly the same about testing or CDT, it seems to me that everyone in the CDT school shares the mission of developing the craft of testing as a creative, value producing performance, where value is what matters to the stakeholders of the product under test. Let me call this our shared mission.
This is a wonderful mission, but in the new context it has to give way to a better one: we’re only perceiving the craft of testing in isolation or in its immediate context, and we have to raise our heads and relate our craft to the greater context of society.
So I propose that we in the Context Driven School adopt the mission to develop testing towards being a wisdom enhancing performance, where wisdom is knowledge that helps build a sustainable society.
What do you think?

Categories
Blog posts in English

Testing Hopes for 2014

Christmas is a ”Lichtfest” for us in the North. Daytime, at this time of year, lasts only a few hours, and the sun never really rises above the horizon. Christmas reminds us that lighter days will return and that it’s time to look ahead to the year to come.
I have two hopes for software testing for 2014:

  1. I hope we will stop looking for simple explanations of why something failed: the product, the testing, the development.
  2. We cannot expect all managers to be testing experts, so we need better documented and qualified testing practices (in various contexts) in order to support better top management software testing decisions.

Looking back on 2013…

I had a busy 2013, privately as well as professionally. Let’s Test in May was fantastic! A few weeks later, I gave a successful lecture on Context Driven Testing in IDA-IT.
I have long wanted to link my favorite philosopher Niels Bohr to testing. Denmark celebrated the 100 year anniversary of Niels Bohr’s papers on the atom model this year. Niels Bohr was a Nobel Prize winning physicist, but more than anything, he was a philosopher – my favorite philosopher by far.
My second favorite is Nassim Taleb. Taleb published his new book Antifragile in late 2012, and I read it this year. But it was his previous book Black Swan that made me a fan.
In chapter 12 of The Black Swan, Taleb criticizes historicism: always wanting to find the causes of why things happened. That happens a lot in testing too:

  • ”Why was that bug in the system!?”
  • ”Why didn’t test find it!?”
  • ”Who blundered!?”

Taleb points out that explaining an event is just as difficult as predicting the future. He argues that any logical deductions and computations involved in analysing an event will yield random results.
Good managers know that appreciating and handling a team’s frustration over something not going as planned is important, but too often we commit the error of turning a psychological healing process into a development system, mindlessly making up apparently deterministic explanations for the unexpected: the random.
Randomness and historicism
Randomness can actually be two different things: (1) indeterministic, mathematical randomness, or (2) something that acts chaotically, but still according to deterministic laws.
The ”butterfly in India” is an example of a chaotic but deterministic chain of events: it is said that the beating of a butterfly’s wings in Delhi can cause a thunderstorm in North Carolina.
According to Newtonian and relativistic physics, determinism is a fundamental property of nature, but since most of the events involved in the forming of the thunderstorm are outside our reach, we won’t be able to reconstruct the event completely anyway.
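The deterministic-but-unpredictable case is easy to demonstrate. The sketch below uses the logistic map, a standard toy model of chaos (my own illustration, not anything from Taleb or Bohr): two starting points differing by one part in ten billion follow exactly the same deterministic rule, yet their trajectories soon diverge completely.

```python
def logistic_map(x, r=4.0):
    # A fully deterministic rule: no randomness anywhere.
    return r * x * (1.0 - x)

x_a = 0.2
x_b = 0.2 + 1e-10   # a "butterfly wing" sized difference

max_gap = 0.0
for step in range(60):
    x_a = logistic_map(x_a)
    x_b = logistic_map(x_b)
    max_gap = max(max_gap, abs(x_a - x_b))

# The tiny initial difference is amplified roughly by a factor of two
# per step, until the two trajectories bear no resemblance to each other.
print(round(max_gap, 3))
```

Knowing the rule perfectly does not let us reconstruct or predict the outcome in practice, because we never know the initial conditions to infinite precision.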
This is perhaps where Taleb and Bohr might disagree, since Bohr did not believe in determinism as a fundamental property of nature.
With quantum physics, Bohr, Heisenberg, Pauli and other pioneers were able to show that events on the nuclear level do not follow rules of causality. An electron, for example, moves from one energy level to a lower, releasing a photon, spontaneously.
”So what? We’re not living in the microcosmos. Butterflies don’t move electrons, they set complete molecules in motion. Causality should still apply on any observable level.”
This is a valid counter-argument, but Bohr, in several of his philosophical writings, points out that the lack of causality on the subatomic level does in fact affect the macroscopic level: there are many amplification systems in nature which amplify single quanta of energy into macroscopic effects. One such is the human eye, which can detect a single photon and amplify it into a stream of information sent to the brain, where it can trigger actions. Obviously there are lots of such amplification systems in the brain and our bodies, so maybe there’s no such thing as determinism in people? And in nature in general, for that matter.
Does having a bad childhood make someone bad?
So we’re essentially left with a world of repeatable patterns. Statisticians know that children of poor parents will usually be poor themselves. That is a well known pattern, but does it work the other way too? Does a bad childhood make you a bad person?
Obviously no. The pattern cannot be linked to the individual, per se.
But that doesn’t mean patterns aren’t useful: Patterns simplify reality, and simplification is necessary in all planning and management.
Many projects have contracts which are negotiated several years before the testers start. Such contracts often specify which kinds of testing should take place e.g. how acceptance testing should be carried out.
Now, we can’t expect all IT contract managers to be testing experts, but if we can document research evidence of the usefulness of e.g. exploratory testing, we’re much more likely to be able to convince them to use it constructively, even when they’re working on the early planning phases.
Happy 2014!

Categories
Blog posts in English

Integration Testing and Technology Convergence

I have grown to like my Android smartphone quite a lot. It’s about a year old now, but I’ve had a few smartphones over the last couple of years. This one, however, is the first where I feel it is making my life slightly better. The thing I really like is that it has ‘everything’ inside it, and that it all works reasonably well: in addition to being a phone and a communication device, it’s a torch, a camera, a map, a calculator, a travel booking service, and it allows me to stay in touch with my good friends no matter where I am.
All my previous Android and Windows CE based smartphones sucked at everything they did, except texting, calling and playing the odd game.
Convergence is changing the way we use and perceive technology: Where the selling points of a product used to describe the product itself (e.g. megapixels in a camera), features which allow products to integrate with each other are becoming more important to customers (e.g. wifi in a camera). This is because customers have observed how these ‘meta-features’ make things smarter and allow us greater flexibility of how we use the products.
I’ve been working as a tester on business systems for the past 10 years, and I’ve observed a similar trend: testing is transitioning from a product focus to an integration focus. So the changes we’re seeing due to technology convergence in consumer electronics seem to be happening broadly in IT.
Integration testing plays a much more prominent role in software projects today than it did just a few years ago. Where integration testing used to be regarded as a ‘phase’ in large scale projects, we now more and more carry out integration testing on a continuous basis throughout projects. I’ve seen this change in the projects I’ve been working on, and I have had it described to me by friends and colleagues.
Project managers seem to have realised that system integrations are just too critical to postpone testing until the last days of a project or project cycle.
Niels Bohr said: ”It’s difficult to make predictions – especially about the future.” I’ll try anyway: I think we’re at the beginning of a development which might completely change the nature of testing. In the future, software testing will be predominantly focused on interoperability, system integration, robustness and other factors buried in the structure of the products we’re testing. Functionality will be much less important.

Categories
Blog posts in English

On Antifragility and Robustness

Some drinking glasses are very fragile, but fragility does not have to be a bad thing. I think most people will prefer a fragile but thin and beautiful champagne glass over a heavy, robust one. Thin glass just suits champagne better. A fat, fruity Barolo will go better in a thicker, more robust glass.
But both the thick, more robust glass, and the thin fragile glass share a property: There is no way you can make them stronger once they have been cast or blown. In fact, since glass is an amorphous solid material, it tends to become more fragile over time. Vibrations of the molecules in the material will eventually distort the structure, making it weaker.
The diagram below illustrates how a glass subjected to daily use over a period of time can eventually break when the force it is subjected to exceeds the ”breaking threshold” of the weakest point in the glass. The threshold is the maximum force the glass can take before it breaks, and in the case of the fragile champagne glass, the threshold decreases over time: The glass is becoming more fragile with age.
A fragile class subjected to forces in daily use eventually breaks
Now, let’s imagine that science discovers a way to strengthen molecular bindings in amorphous materials by vibration. Let’s say we can somehow convert the kinetic energy in vibrations of molecules into potential energy in the intramolecular bindings. The effect would be that we could make a type of glass which gets stronger with use.
The breaking threshold will now increase over time: The more the glass is bumped around during normal use, the stronger it will get. The diagram will look like this:
A drinking glass made out of a special antifragile material gains strength when used
Note that this does not mean that the glass has become unbreakable. The only thing that has changed is that the curve illustrating the strength of the glass now goes upwards instead of downwards, and the forces the glass is subjected to during daily use no longer cross that curve.
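The threshold model behind the two diagrams can be sketched in a few lines of code. The following is a minimal, hypothetical simulation (the force range, drift rate and number of days are my own illustrative assumptions, not values from the diagrams): each day a random force strikes the glass, and the breaking threshold drifts downwards for a fragile glass or upwards for an antifragile one.

```python
import random

def day_of_breakage(initial_threshold, drift, days=3650, seed=42):
    """Simulate daily use of a glass.

    Each day a random force hits the glass. The breaking threshold
    drifts over time: negative drift models a fragile glass (weakens
    with age), positive drift an antifragile one (strengthens with use).
    Returns the first day the force exceeds the threshold, or None if
    the glass survives the whole period.
    """
    rng = random.Random(seed)
    threshold = initial_threshold
    for day in range(1, days + 1):
        force = rng.uniform(0.0, 5.0)  # everyday bumps and knocks
        if force > threshold:
            return day  # the glass breaks
        threshold += drift
    return None

fragile = day_of_breakage(initial_threshold=6.0, drift=-0.005)
antifragile = day_of_breakage(initial_threshold=6.0, drift=+0.005)
print("fragile glass breaks on day:", fragile)
print("antifragile glass breaks on day:", antifragile)  # None: it survives
```

With these numbers the fragile glass is safe at first (no daily force reaches the initial threshold), but once the threshold has drifted below the range of everyday forces, breakage becomes a matter of time. The antifragile glass only ever moves further away from the danger zone, which is exactly the shape of the second diagram.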
The term ‘antifragility’ is new, coined by Nassim Taleb and first used in print in the appendix to the second edition of his book The Black Swan.
Taleb says antifragility is a property of the organic and the complex. Human bones are antifragile: Children’s bones are in fact quite fragile, and they grow stronger with use. But if you sit still (or live in weightlessness), they become fragile. That does not mean that bones can’t break – of course they can. But like the imaginary antifragile glass above, the bones of a well trained person are able to take a lot more “beating” before they do so.
Dead things like champagne glasses, washing machines, computers and their software are inherently fragile, unless someone incorporates a mechanism that turns the feedback of use into strength. Similarly, adding complex but still ordinary software to a system cannot make it antifragile, nor will any special kind of tool used in its development make either the project or the product antifragile.
Thanks to Jesper L. Ottosen for reviewing this blog post.

Photo of Two drinking glasses
A robust glass and a fragile glass. Neither of these are antifragile.

Categories
Blog posts in English

Testing is your sensory nerves

The human brain and nervous system are probably the most complex structures in nature. The brain depends on the nervous system to supply information which the brain can then use to make decisions.

Recently, my son Troels broke his arm. I have four boys, and my wife and I more or less expected one of them to break something one day. After all, boys are boys!

Unfortunately, Troels’ fracture turned out not to be as ‘standard’ as I had hoped: Both bones in the lower left arm snapped completely, and he lost sensation in the little finger, indicating damage to the ulnar nerve. Troels’ case, where a damaged nerve causes loss of sensation, has made me think about analogies to testing.

Xray of Troels' arm showing the steel inserts to stabilize the broken bones
Xray of Troels’ arm showing the steel inserts to stabilize the broken bones

Sensory nerves work like testers

Sensory nerves collect information, transform it lightly and transmit it. Just like testers test things, nerves sense things. Science has identified four ‘modalities’: Touch, the sense of location in space, pain, and sense of temperature. Testers have modalities too: We test for performance, usability, functionality etc.

When signals from a nerve stop, areas in the brain previously associated with processing sensory information from this particular nerve are reorganized to process sensory information from other nerves. Some claim that the brain has a lot of spare capacity, but this reorganization indicates that this is actually not the case: Neurons are not allowed to sit idle and are reused as soon as they become ‘available’.

Losing sensation in the hand is considered a severe handicap for many reasons, and one reason is that patients risk burning their hands because their senses no longer warn them to avoid heat.

In the same way, not testing a subsystem at all introduces a risk that serious problems will go unnoticed. This relates to the coverage problem in testing.

Diagram showing sensory nerves in the arm and hand (source: wikipedia)

Nerve cells regenerate very slowly, only about 1 mm a day, but even after the nerve itself has totally regenerated, the brain still needs to do some relearning: It has to relearn its map of the hand. Troels is 8, so his brain is still very flexible and relearns easily, and his senses have almost completely returned to normal by now.

Nature is a fascinating source of new insights!

Troels at the hospital after having the fracture stabilized
Troels on the way home with plaster on and steel in the arm

Thanks to Jesper L Ottosen for reviewing and editing this article.