Blog posts in English

The problem with test documentation

The agile manifesto explicitly values ”working software over comprehensive documentation.” In testing, this means that actual testing is valued over test documentation. I would have put it this way: focus is on the quality of the product, not on the quality of the documentation of the product.
We can probably all agree that it’s more fun to test and explore software than to write documentation. But it would be too radical to skip writing documentation altogether!
I think, however, that we testers should be more critical of the documentation we actually do produce, and that we should look for ways to improve it.
The problem with documentation is not that too much time is spent writing it instead of actually testing the product. The problem is that the quality of the documentation is often just not good enough.
Most organizations have standards for test documentation and require their testers to write specific types of documents based on mandatory templates.
Templates are generally a good thing, but they can be problematic if you limit your writing process to ”filling in the gaps.” A good document contains information that is useful to the reader, so the most important step in the writing process is finding out what information is actually needed.
Not all sections of a template are equally useful in all contexts (or useful at all), and very often you need to document things for which the template has left no space.
But finding out what to document is not trivial. Basically, it requires that you know the context of the project you are working on. Determining context can be difficult if you are new to a project, but asking questions can give you answers which will help you define it.
Here are some questions I find useful:

  • Who will be reading the document? How will they be reading it? The challenge is to include content which the actual readers will find useful and to structure it so that they can find that information.
  • What information are they looking for? Try to get people to answer in concrete terms rather than in abstract vocabulary. The problem with written documentation is that stakeholders will often not read it – most testers seem to prefer exploring systems on their own to reading documents, and managers will often not have time to read everything, but will only check the headline and certain details. But if readers get what they are looking for, chances are that they will read the document.
  • What kind of analysis do I need to carry out on the system to understand it and test it better? Writing a document about the system can often help me understand it. I will discover missing knowledge and knowledge which I didn’t see initially. The writing process is part of the analysis.
  • Are there regulatory requirements which specify that testing should be documented to a certain level of detail and in a particular way? In that case, the test documentation is actually a product of the project.
  • Should the document assist knowledge transfer once I’m no longer on the project? In that case, the big question is what kind of knowledge should be ”transferred.”

I checked the section about documentation in Kaner, Bach and Pettichord’s Lessons Learned in Software Testing a few days ago. They have a better and longer list of context-free questions to ask about documentation which will help you find out what kind of documentation you need. They also list a few references to other lists of useful questions, so I suggest you take a look at that too.
Are there any specific types of documentation which are particularly useful? Indeed, there are:

  • Mind maps are becoming increasingly popular with testers, but ‘old style’ software diagrams like swimlane diagrams, state diagrams, flow charts, etc. still work well. The problem with mind maps is that they are often personal to the author of the map and not very easy for others to read.
  • Test scripts can be very useful in the initial stages of knowledge transfer: a script can help another tester find out how a certain procedure is performed in the system. However, a script will not by itself tell the tester anything about the context of the script, and this is something which is often missed: knowledge about a system under test is much more than knowing how to do things.
  • Checklists are actually much more useful than the name implies: a checklist will list things to verify, but unlike a script it will not specify in detail how to verify them. That information has to be available elsewhere, e.g. in user manuals.
  • I always look for a document describing the system context in a top-down manner: what is the system doing, for whom, and how? If it isn’t there, I don’t mind writing that document myself.
  • A catalogue of tools used in testing is often also very useful. Test tools are often not well documented (or not documented at all), and that can be a hurdle for new testers when they come aboard a project. A well-written ”tips and tricks for testing system X” document will get them up to speed faster and can act as a platform for sharing knowledge about testing specific parts of the system. I like Word documents for their self-containedness, but a wiki could be better in many situations – the important thing is that such a document is actively maintained.

What are your preferred test documents?

It’s fortune Friday

It’s fortune(6) Friday… what my server said when I logged in on the console today:

Welcome to NetBSD!
This machine is subject to breakdowns during periods of critical need.
A special circuit in the machine called “critical detector” senses the operator’s emotional state in terms of how desperate he/she is to use the machine.  The “critical detector” then creates a malfunction proportional to the desperation of the operator.  Threatening the machine with violence only aggravates the situation.  Likewise, attempts to use another machine may cause it to malfunction.  They belong to the same union.  Keep cool and say nice things to the machine.  Nothing else seems to work.
See also: flog(1), tm(1)

Thanks for warning me!

Computer history

As a teenager, I spent hours reading datasheets for CPU components like the AM2901 bit-slice processor. I also designed hypothetical CPUs in block diagrams. A 16 bit CPU was my objective, and it was to run at a clock of 20 MHz. The instruction set was to be small and beautiful. My first computer was based on a National Semiconductor SC/MP processor, which had a very nice and simple instruction set, and I’ve always liked things which are simple, beautiful, and logical.
I never did build my own CPU, but at university, I spent hours playing with the old RC4000 mainframe we had in the electronics club. It was (is!) a 24 bit machine from the late 60’s, a very nice and advanced design. Its concurrent programming system was among the first in the world (appearing at about the same time as Unix was developed in the US), and my father, who used to work at Regnecentralen, gave me one of the original books about the system, which I read with great interest. It was a message passing based system, and I think the design had a few drawbacks, but the beauty of it was fascinating.
Yesterday I saw my old friend again. She now resides in the rooms of the computer history enthusiasts. I visited the club with my 14-year-old son Frederik.

Photo of the RC4000 computer from 1971 in the basement of Dansk Datahistorisk Forening
The RC4000 computer was designed in the 1960’s and was built from the late 60’s to the mid 70’s, when it was replaced by the RC8000. It was a 24 bit architecture with up to 384 Kb of core memory.

The RC4000 still works, but the enthusiasts are now more focused on its predecessor: a GIER computer, which has recently been brought back to life. The first GIER was delivered on July 31st 1961, so the design is turning 50 years old this year. About 50 of these machines were built, and the club owns two: one early model and one late model. It is the late model which they are bringing back to life.
Photo of the now running late model GIER computer
GIER, a 50-year-old computer design, now being brought back to life.

There are very few test programs for GIER, and this makes the repair process a bit more complicated. Poul-Henning Kamp, the author of the Varnish Cache, who is one of the active enthusiasts in the club, mentioned that it was probably because the designers found test program development to be too boring. These guys were innovators! Poul-Henning has unit tests for Varnish with great code coverage, but admits that writing tests is not the fun part of that project.
I used to program for a living and I can agree with this. Unit testing isn’t fun!
The smell of the old computers, their physical size, the noise they make, and their sheer lack of computing power (by today’s standards) suggest that these machines are of a different era. Technology has evolved a lot, but people are still people. And the people matter – the technology is just a means to an end.
I still love working hands-on with technology, but my view on IT has grown a lot wider since I was young.

Acceptance tests are not enough!

Acceptance testing is a key method in Agile. One way of defining acceptance tests is Gojko Adzic’s ”Specification by example” paradigm, which has gained quite a bit of momentum lately. I personally found it both refreshing and nice when I heard him present it at Agile Testing Days 2009, and I also found his book Bridging the Communication Gap a nice read.

Photo of Gojko Adzic demonstrating communication gaps in his keynote presentation at Agile Testing Days 2009
Gojko Adzic demonstrating communication gaps at Agile Testing Days 2009

I’m sceptical of the concept of acceptance testing. Not because verification of agreed functionality is not a good thing, but because it tends to shift attention to verification instead of exploration.
This will shift attention from testing to problem prevention. Is that bad, you ask? Isn’t it better to prevent problems than to discover them?
Well, most people think ”why didn’t I prevent this from happening?” when problems do happen. Feelings of regret are natural in that situation and that feeling can lead you into thinking you should improve your problem prevention. And maybe you should, but more examples aren’t going to do it!
Real testing is still necessary.
To explain why, I’ll consult one of the great early 20th century mathematical philosophers, Kurt Gödel – in particular, his first incompleteness theorem. It says that no consistent system of axioms whose theorems can be listed by an “effective procedure” is capable of proving all truths about the natural numbers.
What does this mean to us?
It means that we will never be able to list everything that can be done with a system and its data.
A specification is a kind of listing of ”valid things to do” with data, so Gödel’s theorem teaches us that there is infinitely more to a system than any list of requirements, however long. This also applies when the requirements are listed as examples.
If you’re in the business of delivering products of only ”agreed quality” to a customer, you may be all right verifying only the things which are explicitly agreed. If something goes wrong you can always claim: ”It wasn’t in the specification!”
But if you’re striving for quality in a broader sense, verifying that the system works according to specifications is never going to be enough.
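A small, hypothetical sketch of this point (the function, its pricing rules, and the numbers are all invented for illustration): code can pass every agreed example and still misbehave on inputs nobody thought to list.

```python
def shipping_cost(weight_kg):
    """Quote a shipping cost: a flat rate of 50 up to 10 kg,
    then 5 per extra kg. (Invented rules, for illustration only.)"""
    if weight_kg <= 10:
        return 50
    return 50 + 5 * (weight_kg - 10)

# The agreed examples -- the "specification by example" -- all pass:
assert shipping_cost(1) == 50    # light parcel: flat rate
assert shipping_cost(10) == 50   # boundary: still flat rate
assert shipping_cost(12) == 60   # 2 kg over: 50 + 2 * 5

# But no example covered impossible input, and the function
# cheerfully quotes the flat rate for a negative weight:
print(shipping_cost(-20))  # prints 50 -- a bug no agreed example reveals
```

Verifying the listed examples proves only that the listed examples work; exploring beyond them is what finds the negative-weight case.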
Gojko has made a good contribution to agile. Examples can be useful and efficient communication tools, and if they are used correctly they can help make users and other interested parties better aware of what’s going on on the other side. His contribution can help bridge a communication gap. It can also produce excellent input for automated unit tests.
Just don’t let it consume your precious testing time. The real testing goes far beyond verification of documented requirements!
If you want to learn more about this, I recommend you sign up for one of the Rapid Software Testing courses offered by James Bach and Michael Bolton.
Photo from Rapid Software Testing course in London November 2010 as offered by Electromind
Michael Bolton with one of the many interesting and challenging test exercises at the RST course