Peter Nairn

The Ideal Tester - Part 5

Posted on Tue 21 May 2013 at 08:29 in Musings


 

Property 5: Critical Thinker

 

The first entry on the Ideal Tester (See http://www.sqablogs.com/petenairn/3309/The+Ideal+Tester.html ) talked about how the concept of the Ideal Tester came about.  This is the fifth property of the Ideal Tester to be analysed.

 

First of all, let us define critical thinking.  It is not being critical in the sense of being negative about something; it is critical in the sense of challenging a belief.  For an introduction to what critical thinking is, you could do worse than to read Wikipedia: http://en.wikipedia.org/wiki/Critical_thinking

 

The Ideal Tester is capable of analysing the outputs from the rest of the project team and considering them from a critical viewpoint.  Let us consider a requirements statement.  The Business Analyst may have spent some time with an end user to come up with the requirements statement.  Both of them fully understand the statement.  Give that statement to the tester and wait for the critique!  A good tester will find where that requirement statement falls down.

 

Using the SMARTERS acronym, the tester will find problems with statements in any document, requirements being the obvious application.  SMARTERS is an adaptation of the SMART acronym used for objectives and stands for:

 

S = Specific

M = Measurable

A = Achievable

R = Relevant

T = Trackable (Usually “time bound” when applied to objectives, but for requirements that doesn’t always work)

E = Evaluatable, i.e. testable

R = Recordable

S = Satisfactory
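
To make this concrete, here is a minimal sketch (in Python) of what a first mechanical pass over a requirement statement might look like.  The vague-word list, the check and the example requirement are all my own illustrative inventions; the real SMARTERS review is human critical thinking, not string matching:

    # A deliberately simplistic first pass at the S, M and E of SMARTERS.
    # The vague-word list is illustrative only; a human reviewer does the
    # real analysis, this just flags the obvious offenders early.

    VAGUE_WORDS = {"fast", "quickly", "user-friendly", "appropriate",
                   "adequate", "robust", "as required"}

    def first_pass_review(requirement: str) -> list[str]:
        """Flag wording that fails the Specific/Measurable/Evaluatable checks."""
        findings = []
        lowered = requirement.lower()
        for word in sorted(VAGUE_WORDS):
            if word in lowered:
                findings.append(f'not Specific or Measurable: "{word}"')
        if not any(ch.isdigit() for ch in requirement):
            findings.append("no quantity given: how would a test Evaluate this?")
        return findings

    print(first_pass_review("The system shall respond quickly to user input."))
    # ['not Specific or Measurable: "quickly"',
    #  'no quantity given: how would a test Evaluate this?']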

 

But, the Ideal Tester thinks further than that.  The Ideal Tester considers possible impacts and designs tests to explore them, so that the project can understand where those impacts will be felt and what risks they carry.

 

 

A lot of thinking done on projects has “holes”.  A “hole” is often the result of assumptions being made, or of facts being implied from what else has been said.  The Ideal Tester will prod and poke these holes to see if there are any interesting consequences.  When designing tests, the tester will surface these assumptions, question them and then challenge them.

 

Objectivity, not emotion.  Don’t let emotion cloud your judgement.  And here we are talking not only about your emotion, we are talking about everyone’s emotion, both those in the project and those outside.  The Ideal Tester is able to shut out the emotion of the user, the project manager, the business sponsor, etc. and focus on being as objective as possible about the work being done.  But, from property 1, we want the Ideal Tester to be passionate, so how can you be passionate without being emotional?  Janet and I had a lovely discussion about this; her view is that you can have passion without emotion.  I have had to think about that.  I still have to think more about that.  I can’t see how you can have the two separate.  I accept that one without the other is desirable, I just can’t see how you can achieve it.  It may be that this is a character flaw in me and that Janet has achieved something I have been unable to do.  I fail miserably on the emotion front; I get too emotional sometimes when arguing my case, when presenting the “facts” as I see them.  But then, I never claimed to be the Ideal Tester either!

 

 

 

Happy to receive any comments to “pete dot nairn at btinternet dot com”.  If I get a number, I will create blog entries with my responses.

 

The Ideal Tester - Part 4

Posted on Fri 12 Apr 2013 at 08:34 in Musings


 

Property 4: IT "Savvy"

 

 

The first entry on the Ideal Tester (See http://www.sqablogs.com/petenairn/3309/The+Ideal+Tester.html ) talked about how the concept of the Ideal Tester came about.  This is the fourth property of the Ideal Tester to be analysed.

 

The arguments about whether a tester should be technical or not, or how technical a tester should be, will no doubt still be raging when I am testing whether the bell on the pearly gates is using the right IP address.  So, this property is, potentially, a controversial one.  It was with Janet and me. 

 

Let me try to explain what I mean by IT Savvy and why I think it is important.  When we are testing, we should be looking at the system, we should be looking at the whole thing.  Anything on these little boxes called computers that we are privileged to play with every day can, in theory, interact with anything else on the little box.  If you, as a tester, do not understand, at some level, what is going on in that box, you will miss looking at some of those interactions and may miss potential bugs as a result.

 

But, it is not just knowing about the computer that is important; it is knowing what its important characteristics are and what those characteristics could mean for our testing.

 

As a way of explaining what I am talking about, let me give a few examples of IT Savvy topics that testers might need to know:

·         What is the difference between a 32-bit and a 64-bit architecture? 

·         What does “normalisation” mean for a database?

·         What are the SOLID principles?

·         How does a file system manage data within a file?

·         What does defragmentation actually do on a disk?

·         What are ACKs and NAKs?

·         What is the practical difference in a system between LIFO, FIFO, FILO, LILO queues?

·         How does a system manage its memory?

·         What is paging used for, and how is it used?

·         What is a CPU cycle?

·         How does parallelism work on a dual/quad CPU machine?

·         What are IP addresses?

·         Why are powers of 2 useful when designing tests?

 

And I could go on….
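
To pick just one item from that list: powers of 2 matter because fixed-width integers roll over at power-of-2 boundaries, so the values either side of those limits are natural test inputs.  A small generic sketch in Python; the widths listed are standard integer sizes, not tied to any particular system:

    # Fixed-width integers roll over at power-of-2 boundaries, so values
    # at, just below and just above those limits are natural test inputs.

    POWER_OF_2_LIMITS = {
        "signed 8-bit max":   2**7 - 1,    # 127
        "unsigned 8-bit max": 2**8 - 1,    # 255
        "signed 16-bit max":  2**15 - 1,   # 32767
        "signed 32-bit max":  2**31 - 1,   # 2147483647
    }

    def boundary_candidates(limit: int) -> list[int]:
        """Values either side of a suspected limit, plus the limit itself."""
        return [limit - 1, limit, limit + 1]

    for name, limit in POWER_OF_2_LIMITS.items():
        print(f"{name}: try {boundary_candidates(limit)}")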

 

There are then specific questions that you can ask about the way your system is being written/implemented, for example:

·         What language are the developers programming in?

·         What framework is being used?

·         What models are being used?

·         What is the architecture of the system?

·         How does that architecture fit together?

 

For each of the topics, the questions might be:

 

·         What does it mean for testing?

·         How does that knowledge help me to test my current system better?

·         What types of bugs might exist as a result of knowing the characteristics of the system I am testing?

·         What do I have to do to expose those bugs if they exist?

 

And the answers to the last set of questions depend on the skill set of the tester asking and the level of help that the tester can get. 

 

How much should the Ideal Tester know?  I can’t answer that definitively, the only answer I can give is “just enough”, where “just enough” is dependent on the company and project being worked on.

 

It is a sad fact that there appear to be comparatively few testers who do understand much about the technicalities of software.  As an example, last year I tried to hire a technical tester, someone who had a lot of knowledge about web services, as it was needed for a project and needed for the test group as a whole.  I got very few CVs through and those that did come through were generally from people who had done a little at college/university.  One really basic question I have asked interviewees is about powers of 2 and why they are important in testing software, and I did not get one knowledgeable answer.  I do wonder sometimes if we are breeding a generation of testers who are excellent at doing really bad testing.

 

The knotty question about programming: should a tester be able to program?  I can tell you that I can program, I spent a number of years as a developer, so you might say I am biased (which, of course, I am).  So, when I say that I believe a tester should be able to program then you need to bear that in mind!  Janet would disagree; she believes that if you have the tester skills, then you don’t need to be able to program.  I am not going to go into all the arguments for and against, except to say that if you can program, you will be able to do things that a non-programming tester cannot do.  The counter argument is that if you know what you want to do, then you can get a developer to do it for you, and probably better and quicker than you could do it. 

 

Something less controversial (I hope) is the question about whether a tester should be able to read code.  The Ideal Tester can read code even if s/he can’t write it.  The analogy here is with a foreign language; I can read a lot of French, I understand a lot of the written word, and I can understand conversational French reasonably well when it is spoken to me.  I have read Albert Camus’ book, “L’Étranger”, in French and in English (as “The Outsider”) and I have to say the French read better to me.  But I have real difficulty in writing or speaking French; I can’t get it right.  Same with code.  I have never written COBOL, as an example, but I understand COBOL code when I am reading it, I understand what it is doing and can follow the structure.  The great advantage of being able to read code is that you can find bugs that you might not otherwise find and/or it may give you ideas for tests that could be done to see if the weakness you thought was there really is and/or you can pair with a developer and understand what they are doing at the point at which they are doing it and critique it and/or you can help the developer by pinpointing where a bug has been introduced and/or you can speak the developer’s language and/or….  There are lots of “and/or”s. 

 

An aspect that complicates the whole issue further is the Agile view that anyone in a team can do anything that is required in any discipline, so a Developer can test, a tester can Project Manage, a BA can write code, etc.  At first sight, this sounds fine, the team principle taken to its logical conclusion, but in practice being _good_ at each discipline takes years of experience and there are not enough years in a career to become good at all of them.  I did ask an Agile coach from a respected Agile company whether they had ever seen this Utopian team and the answer was that they had heard of one company in the world that had achieved it.  So, that would be a “No” then.  I have a lot of empathy with the view, however, and testers should have some knowledge about Business Analysis (sadly lacking in me, I hate to admit), Development, Configuration Management, etc. so that those skills can be brought to bear when designing tests, but that is “some knowledge”, not “be an expert in”.

 

One trend that is fairly recent is that companies are hiring “Developers in Test”.  I see this as a worrying trend for a number of reasons.  My principal concern is that of motivation.  Why would a developer want to go into Test?  It is not an obvious career path.  Most developers I have ever met want to create new products, not support a test group.  The worrying aspect, which I have heard anecdotally, is that developers are being promised that if they spend x amount of time as a Developer in Test they will then be “allowed” to go into Development proper.  A number of us have spent years trying to dispel the notion that Test is only a stepping stone to other areas of IT and I thought we were winning, but maybe not.

 

To touch on automation.  Test automation did not make our top 10 properties of the Ideal Tester and the reasons why might not be clear.  Automation is a valuable tool to the tester and the ability to automate can be important on some projects.  However, automation is rarely testing, it is letting a machine take repetitive actions and testing is not about repetitive actions.  So, whilst I would like the Ideal Tester to be able to automate, that is not a core skill, it is a useful skill. 

 

Final words on being IT Savvy.  There are almost limitless topics you can learn about how the computer works, what it does and how you can make it do things.  I have met some very bright technical people who blow me away with their knowledge, but they don’t know everything, and being technical is their passion as well as their career.  No tester, therefore, can know everything, or even try, but basic knowledge, the ability to learn how software is put together, the ability to understand technical designs and the habit of keeping current on technology are vital to the Ideal Tester doing a good job of testing.  If you ever want to do more than GUI testing and using simple SQL, then you have to know more about IT.

 

Happy to receive any comments to “pete dot nairn at btinternet dot com”.  If I get a number, I will create blog entries with my responses.

 

The Ideal Tester - Part 3

Posted on Fri 22 Mar 2013 at 09:42 in Musings


 

Property 3: Knows the value of the testing performed

 

 

The first entry on the Ideal Tester (See http://www.sqablogs.com/petenairn/3309/The+Ideal+Tester.html ) talked about how the concept of the Ideal Tester came about.  This is the third property of the Ideal Tester to be analysed.

 

This property is the newest of all the 10 properties.  Janet and I decided it needed to be in our top ten because the Ideal Tester knows that the information they hold about the job they are doing, and have done, is of real value to others and understands how that value can be used.  In addition, the Ideal Tester can provide a testing viewpoint and influence the decisions made as a result of the work that has been done. 

 

We talked about value in property 1, having passion for testing, but we felt that knowing the value of the testing deserved to be a property on its own.

 

To some extent this property came about from an argument that Janet and I had.  We were talking about the output of testing and I said that I believe the output of testing is to provide information about the state of the software to the decision makers.  Janet disagreed; her response was that if that was all she was doing, then she was performing the same job as a telephone directory, i.e. only facts, no value judgement.  If that were the case, then her judgement was not being taken into account when decisions were made, and that meant losing valuable insight into the software that only she had.  At first, I disagreed, making the point that our judgement was inevitably going to be included in the information that we gave and that I, as the Test Manager, was one of the decision makers using the information.  Janet didn’t like the passive nature of that; she wants to be an active part of putting her viewpoint across and having her expert knowledge sought, not inferred.  My concern was that this approach was close to crossing the line between software testing and quality gatekeeping, and I don’t like the Test group to be the quality gatekeeper, I don’t see that as our role.  We batted the point around for a while and Janet won the argument.  The clincher for me was her point about us not being passive in providing an expert view; we should be active in the decision making process, and being an equal partner in that process is vital to making the right decisions.

 

[Aside:  One of the great benefits for me, personally, of these discussions with Janet was that she questioned some of my long-held views on testing and made me, in turn, think harder about what had become dogma in my thinking.  To some extent my thinking had become a bit stagnant and Janet stirred things up in my head, for which I am really grateful.]

In the Ideal Tester presentation, I am aware that this is the weakest of the slides.  I had a great deal of difficulty in summarising what we meant by this property and I made a poor job of it.  So, here is the information on the slide, and I hope I make a better job of explaining what we meant this time.

 

The slide said:

Provides information on the state of the software.

·         Outstanding bugs

·         Level of test coverage

·         Risks

 

Participates in the decisions made from that information

 

See what I mean? Weak.

 

My difficulty stems from the fact that it feels like we are discussing, and giving, an abstract answer to a concrete question.  The question is “what value did you give to the development of this software?”  The answer of “We found 100 bugs, of which 10 low severity bugs are still outstanding; we ran tests that exercised all of the requirements; we only tested the high risk requirements to any depth” is not a great answer, but it is the type of answer we give.  [Another aside: any quantitative measure of coverage always makes me uncomfortable as I do not know, and I suspect nobody else does, what 100% coverage means, so how do you know what percentage you have covered if you don’t know what 100% would be?]

 

The Ideal Tester has a really good idea of what has been tested, what has not been tested and what is the risk of what has not been tested.  The Ideal Tester knows what the outcome of the testing performed means to the project, the stakeholders, the business and the end user in terms of the value of the software under test.  Perhaps more importantly, that value can be articulated in such a manner that judgement calls can be made about the current state of the project.  

 

Here is an example of a conversation that I had with a tester:

 

“Tester:  There are 362 requirements, I have executed 489 scenarios, an additional 82 were not executed due to being out of scope and 54 were not executed due to the software in this area not changing.

 

Me: So, did the testing cover the requirements?

 

Tester: Well, you can’t say that because these are _scenarios_, not tests, we are using data driven tests that we run on the software.  I ran 489 scenarios!

 

Me: OK, I (sort of) understand, but did the testing cover what the requirements said the software should do?

 

Tester (getting a bit frustrated):  I told you, these are scenarios, there is not a 1 to 1 mapping between the scenarios and requirements. I have a spreadsheet that shows what I ran that I am going to load into QC.

 

Me (also getting frustrated):  This is a simple question, have you tested all the requirements?

 

Tester: Yes.”

 

There are a number of things wrong with this conversation.  My question, you could argue, was the wrong question; however, in the context of the project it felt like the right question.  [Note:  I am not involved in this project, I was reviewing the testing as part of some governance work I was doing.]  The tester was fixated on the scenarios run, on numbers, not on the testing performed.  Frankly, I don’t care whether the tester ran 1 test or 1 million tests if the testing did the right thing.  This tester, by the way, is considered to be a very good tester by people I respect, so I have no doubt that the testing was good and that when the tester said all requirements had been tested, they really had been; but the message could have been delivered so much better.  To my mind, the focus is all wrong; we should be focusing on doing good testing, not on managing numbers.  If the conversation had gone something like this, I would have been a lot happier:

 

“Tester:  There are 362 requirements, I have executed 489 scenarios, an additional 82 were not executed due to being out of scope and 54 were not executed due to the software in this area not changing.

 

Me: So, did the testing cover the requirements?

 

Tester: Yes, the high risk requirements were tested first, I went into considerable depth to look for where there might be problems and although a few defects were found, I was happy with the results I saw and all the bugs were resolved quickly enabling me to retest them.  My view is that the software is well put together and it is low risk to put it into the Live environment.  I believe that the outstanding risks are manageable, with the mitigating actions that have been put in place.

 

Me:  Sounds like we should ship it!”

 

With this version of the conversation, I have got numbers, which may or may not be useful, and I have got the tester’s viewpoint on the testing performed, which is crucial in making decisions; more importantly, I would have understood the value of that testing.
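
As an aside on the numbers half of that answer: a simple traceability mapping from scenarios to the requirements they exercise would have answered my original question directly.  A minimal sketch, with entirely hypothetical scenario and requirement IDs:

    # Hypothetical IDs throughout. Given a record of which requirements each
    # executed scenario exercises, "did the testing cover the requirements?"
    # has a direct, checkable answer.

    requirements = {f"REQ-{n:03}" for n in range(1, 363)}  # the 362 requirements

    scenario_covers = {
        "SC-001": {"REQ-001", "REQ-002"},
        "SC-002": {"REQ-002", "REQ-005"},
        # ... one entry per executed scenario
    }

    covered = set().union(*scenario_covers.values())
    uncovered = requirements - covered

    print(f"{len(covered)} of {len(requirements)} requirements exercised")
    print(f"not yet exercised: {sorted(uncovered)[:5]} ...")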

 

At this point, it is worth talking about “the tester’s gut”. This is a phrase that Janet and I used quite a lot.  A reasonably common occurrence is that a tester will come to me and say something like “I don’t like the smell of this software, I can’t put my finger on it, I can’t actually find a serious fault but something feels wrong”.  My reaction is usually “Find out where the fault is.  Is there anything you need to help you?”.  Very often they are right, they just need permission to feel uncomfortable and do something about it.   It would be fascinating to analyse this gut feeling and find out where it came from, and maybe someone has, because as a tool it has significant value in a tester’s toolbox and it would be great to be able to call upon it on demand!

 

The other aspect of being aware of the value of testing is that if the Ideal Tester is doing, or about to do, some testing that they cannot articulate the future value of, then they question whether they should be doing it at all.  One of the perennial testing questions is “when do you stop testing?”. One answer could be “When the next test I planned to do has no value (or less value than the test I have just done)”.

 

I would now like to tell you how to calculate the value of your testing….

 

But I can’t.  Only you can determine that in the context of your project, your role in the project, your type of project, the phase you are in, the people you are working with, the type of customer you have, etc, etc.  Is that a cop out?  Maybe, but trying to come up with a calculation/algorithm/rule would give the wrong answer most of the time and that would be worse than copping out.  Working out what makes sense for you and your project is hard, but then who said that testing was easy? (Oh, yes, a few ignorant people have!)  And, just another aside, beware of using KPIs as a measure of value – in my experience they are not one, but that conversation should be the subject of another post.

 

What do you measure value in?  Again, you need to determine that.  Is it pounds sterling, dollars, euros, market share, quality factor, customer satisfaction, or what?

 

To summarise, the Ideal Tester knows the value of the testing done, can articulate it, can analyse it – and stay sane doing that!

 

Happy to receive any comments to “pete dot nairn at btinternet dot com”.  If I get a number, I will create blog entries with my responses.

 

The Ideal Tester - Part 2

Posted on Wed 13 Mar 2013 at 03:56 in Musings


 

Property 2: Knows a variety of test design techniques, how and when to apply test techniques

 

 

I am a fan of techniques, I think they are valuable tools in the tester’s toolbox and every tester should have an ever increasing number to call upon.  There are some real dangers of techniques too and testers should be aware of the dangers as well as the benefits.  But, I get ahead of myself.

 

The first entry on the Ideal Tester (See http://www.sqablogs.com/petenairn/3309/The+Ideal+Tester.html ) talked about how the concept of the Ideal Tester came about.  This is the second property of the Ideal Tester to be analysed.

 

What do we mean by “test design techniques”?  I mean things like Boundary Value Analysis, Equivalence Class Partitioning (or Equivalence Partitioning), Cause-Effect Graphing, All Pairs, etc, etc.  That is, any mechanism which gives you a method of choosing a test from the infinite possibilities available.

 

Some of these are taught for the ISTQB “certifications”, sorry, can’t take that word seriously when talking about ISTQB.  Just to take a small aside, I have ranted about ISEB/ISTQB before and I am not going to have another rant now, but some of the problems with techniques and how they are used, or not, I have to put squarely at the door of the teaching done for ISTQB.  To be clear, this is not just a problem with ISTQB in itself, it is also a problem with a) how it is taught (to pass a multiple-choice exam, not to increase testing skill) and b) how ignorant people view it (as a measure of competency, which it isn’t).  I’ll stop right now, because I feel another rant coming on…

 

Back to techniques!  Junior testers may first start to learn about techniques with an introduction to Boundary Value Analysis and Equivalence Partitioning.  This is OK; it starts them down the path of learning that you can’t test everything, that you have to make choices about the tests you are going to run, and that these are methods for making those choices.

 

More techniques can then be added on, and more, and more.  I don’t know how many ways of testing there are, as I am always hearing of something else someone has found useful.

 

The Ideal Tester is always looking for another technique that they can put into their toolbox.

 

What is more important is that the tester knows how to choose the technique that is appropriate to what they are testing and then applies it properly. 

 

What I mean by that last statement is probably best described by an example and a real story.  When I am interviewing for a tester, I often ask them what test design techniques they know about.  Depressingly few actually know the name of a technique, even those that have passed ISTQB (don’t get me started again!).  I do get Equivalence Partitioning as a name sometimes and I do get Boundary Value Analysis sometimes; rarely do I get both!  So, I ask them to describe what BVA or EP is.  I get the standard example as taught to them in their ISTQB class, which I am very bored of hearing time after time. 

 

Now, let’s talk about BVA just for a moment and look at what the technique is.  The technique is that when you have a range of values for an input field, you test on the boundary, one above the boundary and one below the boundary, yes?  NO, NO, NO!  Look at the name of the technique - Boundary Value ANALYSIS.  It is about analysing boundaries.  What do you have to do before you do the on, -1, +1 test?  You have to find the boundary.  It is finding the boundaries in a system that is the skill, not doing what you do once you have found them.  So, let’s take the standard ISTQB example I hear time after time.  You have a field that takes the values 1 to 100, so you test 0, 1, 2, 99, 100, 101.  Job done!  The Ideal Tester would then say “are those two the only boundaries?  What other boundaries are there on this field?”  Perhaps the field has a limit of 3 digits; what happens if I put 4 digits in?  Do I get a different error than if I put in 101?  What if I put in no digits at all (which is the lower boundary)?  What if I put in 32768?  Is that a boundary?  Now, the Ideal Tester is analysing the boundaries of the field – s/he is doing Boundary Value Analysis.
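
As an illustration, here is a sketch of what the analysed set of boundary tests might look like for that 1 to 100 field.  The validate_quantity function is invented for the example; the point is the second list of cases, which only appears once you have hunted for the less obvious boundaries:

    # The obvious range boundaries plus the ones that boundary *analysis*
    # should uncover. validate_quantity is a stand-in for the real system.

    import pytest

    def validate_quantity(raw: str) -> bool:
        """Hypothetical validator: accepts whole numbers from 1 to 100."""
        return raw.isdigit() and 1 <= int(raw) <= 100

    RANGE_CASES = [("0", False), ("1", True), ("2", True),
                   ("99", True), ("100", True), ("101", False)]

    ANALYSIS_CASES = [
        ("", False),       # no input at all: the lower length boundary
        ("1000", False),   # 4 digits where the field holds 3: a length boundary
        ("32768", False),  # 2**15: a representation boundary for 16-bit ints
        ("007", True),     # leading zeros: same value, different representation
    ]

    @pytest.mark.parametrize("raw,expected", RANGE_CASES + ANALYSIS_CASES)
    def test_quantity_boundaries(raw, expected):
        assert validate_quantity(raw) == expected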

 

Equivalence Partitioning is the same; there is a simple example that is taught about dividing your input into three classes and picking one value from each class.  That is the final, and easy, part of EP; the beginning and difficult part is identifying the classes. 
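
Again purely as an illustration, here is what the output of that identification work might look like for a hypothetical “date of birth” text field.  The partitions are my own, chosen so that each invalid class fails for a different reason:

    # The hard half of EP is this enumeration; picking one representative
    # per class is the easy half. Field and partitions are hypothetical.

    PARTITIONS = {
        # valid classes
        "typical adult":        "1975-06-15",
        "leap day":             "2000-02-29",
        # invalid classes, each failing for a different reason
        "future date":          "2099-01-01",
        "non-leap 29 February": "2013-02-29",
        "month out of range":   "1975-13-01",
        "wrong format":         "15/06/1975",
        "not a date at all":    "hello",
        "empty input":          "",
    }

    for name, representative in PARTITIONS.items():
        # each class should behave uniformly, so one value stands for all
        print(f"{name:22} -> test with {representative!r}")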

 

I could go on with other examples of design techniques, but hopefully the message is clear: the mechanics of a technique are easy; identifying how to get to the point of using the mechanics is difficult.  But the tester is taught only the mechanics.  Is it any wonder that most of them forget it?

 

Here is the story.  I employed an apprentice a couple of years ago.  I ran the apprenticeship program for the company and we took four 18 year olds who had decided not to go to University.  The program was an experiment for us to see if we could get Bright Young Things into the company.  The four that joined went into Development (two of them), Release Management and Test.  My apprentice was sharp; she had a great brain and soaked up information like a sponge.  I started by teaching her the basic concepts of testing and introduced her to Ron Patton’s Software Testing book, which I always think is a gentle introduction for a newbie.  I then got her to learn about techniques.  The way I did that was to give her the name of a technique, tell her to go away and research that technique and come back and tell me what it was and what it did.  She ate up BVA and EP really quickly.  She slowed up a bit on State Transition Tables and struggled with All Pairs and Cause-Effect Graphing (who doesn’t! CEG is a nightmare to understand at first).  After she had got an understanding of about six, which took about six weeks in total, I was really happy she understood the techniques, she understood what they did and understood how to do the mechanics.  She was very impressive.  I then gave her some software to test and asked her to tell me what techniques she had used on the software, why, and what results she had got.  She had a week to do that (a skilled tester would have done it in a day at most, but, remember, she was only 18).  Two days later she was back saying she had finished.  I was sceptical, but I was impressed with her, so we sat down and went through what she had done. 

 

A little background.  The software she was testing was an exercise, not “real” software.  The software had bugs in it, but not obvious ones and you had to work at it to find the bugs.  I had a list of the bugs that I knew about and wanted her to find.

 

My apprentice had used every technique that I had taught her and she had so diligently researched.  This was her first error.  Not every technique was appropriate for this piece of software; for example, she used State Transition Testing when there were no state transitions, it was a straightforward flow.  She had focussed on using the techniques rather than testing the software.  This was her second error.  We must not forget that our aim is to test the software, not to exercise our knowledge of test design techniques.  She had not gone into sufficient depth with the appropriate techniques and had not, therefore, found some of the easier-to-find bugs.  This was her third error.  We don’t stop when we have found x errors using a technique; in fact the reverse, we keep going.

 

On the plus side, and it was a big plus, she had found bugs and she had applied the appropriate techniques well, better than more experienced testers, so I was really pleased with what she had done.  Having gone through her results, I asked her to go away and try again.  What she came back with this time was a superb piece of testing and a list of bugs which included some I didn’t know about, and they were real bugs too.  She had listened to what I had said, taken it on board and done a better job than most testers I have ever met and remember, she was only 18 years old!  Let’s not be fooled, this does not make her an expert tester, she still had a heck of a lot to learn but she had made a flying start to her career.

 

The story of my apprentice shows that using a technique just because you know it is not a good use of testing time if it is not an appropriate technique.  Using a technique is no guarantee that you will find all the bugs it can find, and becoming focussed on the technique rather than the software will reduce the effectiveness of the testing.  Here is an interesting story by Michael Bolton on Pairwise testing, http://www.developsense.com/blog/2007/11/pairwise-testing/, where the technique became more important than testing the software – it is a warning to us all.

 

I recently read a blog entry, http://danashby04.wordpress.com/2013/03/04/the-shoe-test-does-this-test-really-add-value/, which questioned the value of the “shoe test”.  I think it was a good question and we should always ask the question as to the value of any test or test technique.  But, we should only ask that question in the context of the software we are testing now.  That technique may be useless to us now, but on a future project it could be a vital technique, so store it away in your toolbox.

 

Occasionally when I interview testers I get the person who does tell me that they know Boundary Value Analysis and Equivalence Partitioning and State Transition Tables and whatever.  And they can accurately describe them.  Before I get too excited, my next question sorts the people who can remember what they have read or been taught from those who really understand.  I ask them “Tell me what type of bugs you will miss if you only test using BVA and EP”.  The Ideal Tester would ask me what the context was, evaluate that context and give me a very long answer on the relative advantages and disadvantages of the two techniques, how they would mitigate the disadvantages (if necessary), the risks of only using those two techniques and what other techniques they might use to minimise the disadvantages.  Mostly, I get nothing but silence or some garbled sentence that clearly shows they do not know.  Of course, the ISTQB exam does not require you to know any of this, so why would you know it?  Oops, nearly went into a rant again.
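
For what it is worth, here is one concrete illustration of a bug class that BVA and EP applied to single fields will miss: a fault triggered only by a combination of inputs.  The function and its seeded bug are invented for the example; a technique that varies fields together, such as pairwise testing, is what finds this kind of thing:

    # Per-field boundary and partition tests all pass; only the combination
    # of the two inputs exposes the seeded fault.

    def shipping_cost(weight_kg: int, destination: str) -> int:
        """Hypothetical: flat rate of 5, plus 1 per kg over 10."""
        cost = 5 + max(0, weight_kg - 10)
        if destination == "overseas" and weight_kg > 10:
            cost -= 100  # seeded interaction bug
        return cost

    # Single-field tests, each varying ONE input, all pass:
    assert shipping_cost(10, "domestic") == 5
    assert shipping_cost(11, "domestic") == 6
    assert shipping_cost(10, "overseas") == 5

    # Only the combination exposes the fault:
    print(shipping_cost(11, "overseas"))  # -94: clearly wrong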

 

 

Happy to receive any comments to “pete dot nairn at btinternet dot com”.  If I get a number, I will create blog entries with my responses.

 

 

 

How do you encourage passion in testers?

Posted on Wed 6 Mar 2013 at 02:28 in Musings

Phil Kirkham is someone who I have known for a long time, mostly electronically, but we have met once – in a pub!  He read my post in the Ideal Tester series (See http://www.sqablogs.com/petenairn/3309/The+Ideal+Tester.html ) about passion in testing and asked the reasonable question about passion – “is this something that a good test lead/manager can cultivate.  Is it something the leaders should be encouraging (and if so how) rather than hope that people come along and find their passion?”

Deep question, Phil.  This was something that Janet and I talked about and it is a difficult question to answer.  It is difficult because every person is different, they have different beliefs, desires, and wants, so what turns them on is different.  So, I am going to talk about me, about my passion, how I became passionate and how I try to instil passion in others, why I think it works and why I think it doesn’t.

I started my software career as a developer and quite enjoyed it.  I did the traditional route of junior, developer, senior, team lead.  I then went into Project Management and quite enjoyed it.  It was while I was project managing that I met a Quality Manager (and I mean QA, not Testing).  He was passionate about quality, he lived for the subject and could, and did, talk for hours on the subject.  Some people would be boring, but he was so enthusiastic and so obviously immersed in the subject that he managed not to be boring.  I got interested and was lucky enough to land a job as a QA and Test Manager and quite enjoyed it.  Note:  so far, I have said “quite enjoyed it” three times and that was deliberate, I was not passionate about anything I had done so far. 

The turning point for me was meeting a chap called Boris Beizer.  I attended his training course on testing and spent a lot of time with him in the frequent breaks.  He has written books on software testing: “Black Box Testing: Techniques for Functional Testing of Software and Systems” and “Software Testing Techniques” are available on Amazon, and “Quality Assurance and Software Testing” is sadly now out of print.  Let me tell you a little about Boris.  He was irascible, confrontational, maddening, egotistical and one of the most inspiring characters I have ever met.  Oh, and before his lawyers come after me, I liked him enormously for his up-front manner, for telling things as they are, for his way of challenging what you thought you knew and his passion for testing.  What he did for me was to make me see that Testing is a vast subject, that you never can know all there is to know, that questioning is a vital skill, that testers make a big difference to software and lastly, and most importantly, that I was good at it.  At the end of that training course with Boris, I was hooked; this was what I wanted to do for the rest of my career and I was going to be the best I could be at it.  I had started down the road of being passionate.  Oh, and for a while Boris and I corresponded over email; I am not sure why we stopped, I think it was because I started looking for alternative views on the testing world.  I hear Boris has now retired, which makes me feel old!

Over the years, my passion has continued and grown as I keep trying to get better at what I do.  I have, periodically, gone down a different path in order to see what I might be missing, I spent a while as a Project Management Consultant, I spent a while as a Business Change Manager, I spent a while as a Technical Support Manager, none of them thrilled me and I came to the conclusion that the only career path that really excited me was in Software Testing.  So here I am, and here I expect to stay.

I have worked with many testers in different companies and tried to communicate my passion to them, with reactions varying from hostility, to indifference, to joining me in that passion.  I have to say that the majority (and I have no actual figures on this) fall into the indifferent category.  I have a number of theories about why that is, including that maybe I am rubbish at communicating my passion, or that there is a large number of people who only want a 9 to 5 job and don’t really care about what they do.  The truth is, I really don’t know why. 

As a leader or manager, I think it is vital not to kill any passion that is shown.  It is too easy to kill that spark (Phil says it happened to him; fortunately he is made of sterner stuff and he rekindled that spark).  You can kill it by criticising (“Don’t ask questions, just do your job”), you can kill it by not listening (“I’m too busy to listen to you”), you can kill it by ridicule or belittling (“What do you know, you have only been in testing for x years, wait until you have been in as long as I have”).  Encouraging passion, or the start of passion, is a skill that a lot of managers do not have, although most leaders do (and the subject of leader vs manager is an interesting one too) and maybe that is part of the problem.

So what can the passionate do to encourage others?  I think that is a hard question and I am sure there are cleverer people than me who have a better answer.  All I can say from my experience is that the person has to have at least an interest in growing their passion to start with; if that is not there, then you are flogging a dead horse.  If the interest is there, you encourage it by doing the reverse of killing it, e.g. listen, praise, give constructive feedback.  In addition, try to give that person the time to go off on their own to experiment, to explore, to research, and reward them when they do.  A little technique I have found useful is to not disagree, even when you think the person is way off target, but to say something like “that is interesting, have you looked at any opposite views to that?”.  And lastly, find someone they can use as a role model.  If that is you, then great; if you cannot be, for whatever reason, find them a mentor, someone to talk to who will talk as an equal rather than a manager.

I go back to the relationship that Janet and I had. It was not manager and staff (at least I don’t think it was), it was two testers shooting the breeze on a mutually interesting topic.  I think we fed off each other, we encouraged and enthused each other and maybe most important of all, we had fun doing it.  If you can find someone that you can have that sort of relationship with, it is worth its weight in gold.

It would be really interesting to hear other people’s views as this is an area I wish I knew more about.

Happy to receive any comments to “pete dot nairn at btinternet dot com”. 

By the way, sorry for not opening up the blog for anyone to comment, last time I did that I got spammed and spent hours cleaning it up.  Until I can spend the time doing something to protect against spam, I have to resort to email comments or SQABlogs members only.

The Ideal Tester - Part 1

Posted on Mon 4 Mar 2013 at 02:22 in Musings


 

Property 1: A Passion for Testing

 

 

The first entry on the Ideal Tester (See http://www.sqablogs.com/petenairn/3309/The+Ideal+Tester.html ) talked about how the concept of the Ideal Tester came about.  This is the first property of the Ideal Tester to be analysed.

 

The best testers I have ever met have a passion for testing.   But, what does “a passion for testing” mean?  I introduced Janet in the first entry as my partner in crime and she and I discussed this topic probably more than any other.

 

One of the problems we had in defining this passion was that we asked ourselves “why do we do this job?”, “what is it about testing that makes us passionate?” and “what value does a tester feel they add to software development?”.  These are really difficult questions to answer because you are getting into the psyche of the tester.  This morning, I was listening to a radio programme where a lady was describing her passion for cowpats: how they were exciting to her, how she was fascinated by what goes on when a cow defecates and what happens to the pat after that.  She tried really hard to explain why she had this passion but I have to say I would have great difficulty getting passionate about cow sh*t.  However, I understood how you could get absorbed in a subject, as this lady quite clearly was.  I have been absorbed in this profession of testing for more years than I care to recall, and explaining that passion to others I find really difficult, much as the lady had difficulty explaining her passion for cowpats.  And developers often have the same reaction to testing as I had to cowpats!

 

So, Janet and I struggled for a while in trying to define what the passion for testing was all about.  We eventually came up with a list of the things that make up a tester’s passion.

 

1.      Tester wants to do a job which has easily visible value to at least one other person plus the tester

This statement was possibly the most profound statement that we talked about.  It was Janet’s, as most of the really profound statements to do with the Ideal Tester are.  You need to really pick apart the statement to understand how deep it is.  So, here goes:

      “Easily visible value” – what we meant by this is that if the job the tester does has no value, then why do it?  OK, that bit is easy.  Is that value visible?  Visibility of the value is important, as lack of visibility means that people question why you are doing something and, therefore, you have to justify why you are doing it, and that just takes time away from adding the value.  If we are doing a valuable job, we do not want to be distracted from that job, so we need to show, as early as possible, that we are adding value.  How we do that would be the subject of another long post!

      “Value to at least one other person” – This is where we need to identify that the value we are adding is of value to a person.  Going slightly off track for a moment, I like Jerry Weinberg’s definition of quality as “Of value to someone” and then James Bach’s addition “who matters”.  The key aspect of “value” is that it relates to a person and identifying the person who matters is a key thing to do in testing.  It is important to the passionate tester to have someone who is positively affected by the value of the work that they are doing, otherwise the passion will not exist.

      “Value …. plus the tester” – if there is not value to the tester, then there is no incentive to do the job.  Or, more importantly, to do the job well.  The value to the tester can be a variety of different values, for example: learning new technology, new technique, better delivery.  Identification of what value the job is to the tester is what makes the tester passionate about what they are doing.  It is important that the value is seen by the tester, not told to them!  I want the tester to identify that value themselves.  And the best testers do.

2.      Has a bent and desire to ask questions and keep asking, call it “persistent curiosity”.

Janet has some wonderful, short, pithy phrases that require a lot of thought to really understand them.  “Persistent curiosity” is one of her brilliant phrases.  There is a great cartoon I saw where a small child is saying to its mother “Why do I ask so many questions?”  That is the key aspect of a passionate tester: we don’t stop asking questions.  [Aside:  One of the subjects that Janet and I talked about a lot was the tester discovering his/her inner child.  I will discuss this further in other entries.]  If you are satisfied with an answer, then you need to question yourself as to whether you are asking the right questions; you need to keep asking questions to find out what happens.  Please Note:  I do not mean continually asking the designer what he meant by “copy data from central database to Data mining cubes”, I mean that you keep asking questions and, from each answer, ask a better, more important question.  The question may be asked of the designer, the developer or of the software under test, to discover what it does, until the questioning comes to a conclusion.  The analogy Janet used was meerkats continually harassing a puff adder until it strikes back or goes away.  The tester meerkat is trying to discover what the break point of the puff adder is.

3.      Wants to solve puzzles

I have a heuristic that testers are great puzzle solvers.  A lot of testers I know enjoy chess, Sudoku, crosswords, jigsaw puzzles, brain training puzzles, etc.  [OK, I know a lot of non-testers who also enjoy them, but ….]  The passionate tester recognises that testing software is trying to solve a puzzle.  One of the questions James Bach says we should ask is “if there were a bug in this part of the system, how would I find it?”  That is the puzzle, and the passionate tester then enthusiastically goes about trying to find that bug.

4.      Quest for learning; wants the feedback loop of understanding, questioning to learn more, getting answers

We will come back to learning in part 9 where it is discussed in more detail to do with how the tester learns and what s/he does to learn.  The passionate tester *wants* to learn, wants to become more and more knowledgeable about their subject, their product, their system.  And, they are never satisfied with what they know.

5.      Happy in a fluid environment

It doesn’t matter how structured the environment, how mature the organisation, how “Agile” or otherwise the methodology.  Software development is always fluid.  What you planned today will probably be different to what you actually do.  It is a fact of life for a tester.  The passionate tester not only accepts that fact of life but thrives on it and enjoys it.

6.      Is a realistic perfectionist! (wants perfection but is realistic in terms of what can be achieved on the road to perfection)

Another of Janet’s wonderful phrases was “realistic perfectionist” and we discussed what that meant for some time.  We all know that software will never be “perfect”.  However you define “perfect” for software (in terms of outstanding bugs, coverage achieved, requirements met, etc.), the software will never be 100% perfect.  I think it is very difficult to actually define what “perfect” means, but the point is that we won’t achieve it.  A passionate tester wants the software to be perfect, the testing to be perfect, the customer experience to be perfect, EVERYTHING to be perfect.  The passionate tester knows that this is not a destination but a journey, and the realism is that you will only ever get “good enough” (http://www.satisfice.com/articles/good_enough_testing.pdf).  Knowing when to stop is a key skill and the passionate tester will not feel aggrieved about stopping before perfection, but rather be happy that they have recognised that stopping now is OK.

 

7.      Bug junkie! A good tester gets a buzz out of finding a bug, particularly those that are difficult to find, i.e. has solved the puzzle

 

You hear the following a lot when talking to Testers and Test Managers “Our job is to provide information so that the decision makers can make decisions”, or, “Our job is to mitigate risk”.  I have one word to say to that – “baloney”.

Ok, maybe not entirely baloney. When we are talking to stakeholders, Project managers and the like we may well use such phrases, partly because they are true, but mostly because that is politically acceptable.

The real truth with a passionate tester is that they love finding bugs.  There is a thrill about finding a bug, a joy that is difficult to describe.  It is not (and never should be) about beating the developer or “getting one over” on the developer, it is about finding a problem in the software.  The bugs that give the most thrills are the ones that took effort to find.  At one company, I gave an award for the most creative bug, where the tester had to work particularly hard to find it.  If we solve the puzzle and find the bug, that gives us such a boost – it makes the whole job worthwhile.

 

An ideal tester has passion for the subject.  We need more passionate testers in our profession; my observation is that there aren’t enough people who have a passion for testing as a subject.  I see testers passionate about test automation, about agile as a methodology, about tools, about domains; much less so about testing as a discipline.  I don’t know why that is, I try to transmit my passion to others and some people respond, but I find precious few who have it in the first place.

 

Happy to receive any comments to “pete dot nairn at btinternet dot com”.  If I get a number, I will create blog entries with my responses.

The Ideal Tester

Posted on Thu 28 Feb 2013 at 08:34 in Musings

The Ideal Tester - Part 0

 

What is the Ideal Tester, how did it come about?

 

Sometimes the job of a Test Manager is to do paperwork.  I have not met many Test Managers who actually LIKE paperwork, but it is a job we have to do.  Mostly, doing that paperwork is boring; sometimes it gives you something useful; rarely, it gives you something amazing.  This is an example of something amazing.

 

As part of my job at a past company, I had to review all the annual appraisals that were done in my team.  I had a number of Test Leads who did the annual appraisals for their testers.  My job was to review each appraisal to ensure that it had been done properly and to give my view of the person.  The key parts of the appraisal for me were the appraisee’s comments on themselves and their career aspirations.  That gave me a view of where they saw themselves going.  Now, to be 100% honest, I did see this part of my job as a chore; mostly what I saw on the forms was “I thought I had a good year and I want more training in X” and mostly what I wrote as my comments was “This tester has had a good/challenging year and I look forward to more of the same/an improvement over the next year”. 

 

Then Janet’s appraisal came across my desk.  I had a good view of Janet’s abilities and thought she was an excellent tester.  Her comments on her appraisal included the statement “I am interested in the psychology of testers”.  Now, this is something that has fascinated me for years – I have studied it in a really amateurish way and just enjoy analysing it when I get chance to think about it.

 

So, I dropped Janet an email saying “Me too, fancy getting together over lunch to discuss?”  [Aside:  You might wonder why I emailed her rather than went to her desk to talk.  The reasoning I use is that if a Manager comes to talk to you then it is difficult to say “No” when you really want to.  It is easier to say “No” to an email.]

She said “Yes” and we arranged to meet for lunch.  This started an 18 month exercise of us getting together once a week for lunch to have a chat.  Holidays and “too busy” happened on some weeks, but most weeks we did get together over that period. 

 

The weekly chats started out as just that, a chat. There was no agenda, no purpose other than to just talk about things that were of mutual interest in the world of testing.  Our only “constraint” was that we were not to talk about work, only about testing.  We broke that rule occasionally, but mostly we didn’t.  As the time went on we realised that what we were talking about was potentially interesting and maybe even useful!  I started taking notes.  Some of those notes were totally useless but some of them started to be interesting and slowly I saw some structure appearing out of the random nature of our discussions.

 

An interesting facet of our discussion is the way that Janet and I think.  I am a very structured thinker, I think about something and then I think of the consequences of that and then the next step in that, so I think in a structured manner.  Janet thinks in abstracts, she has apparently random thoughts (they aren’t random, by the way) and then pulls them together into something coherent, somehow managing to not lose any of the seemingly unconnected thoughts.  I can’t think the same way as Janet and, I suspect, she couldn’t think the same way as I can.  This led to some frustration on both of our parts as it is quite difficult to understand the way the other was thinking when we think so differently.  But, I think we both recognised that our different ways of thinking enhanced the conversation, debate and discussion from the very fact that we both DID think in a different manner.  If we had both thought in the same way, I doubt that our conversations would have been so lively, so useful, so diverse or so much fun.  I sometimes wish I could think more free-form, as Janet does, but when I have tried it just fails miserably.

 

A number of interesting things came out of our discussions and one of them was the concept of determining what makes a tester a good tester.  Like many, probably all Test Managers, I had put together skills matrices, lists of essential and desirable skills for testers, but what I hadn’t done was try to picture in my mind’s eye what a great tester would be like.  Janet and I realised we were doing just that and we dubbed this “The Ideal Tester” and used that as a handle on which to base some discussions. 

 

We decided to describe the properties of the “Ideal” Tester.  We set ourselves one rule – we would only allow ourselves 10 properties.  Why 10?  We knew that if we allowed an infinite number of properties, we would end up with a large number that would become unmanageable; we would have to try to group them and analyse each of them, and it would become a logistical and organisational exercise rather than a sapient exercise, which is what we wanted it to be.  So, we agreed that the rule was only 10 properties.  The mechanism for adding to the list of 10 was that if either of us wanted to add a property, we had to nominate another property for removal.  This kept us focussed on our preferred top ten.

 

Over the months, we refined, we argued, we thought and ended up with the list you will see in this blog entry. 

 

I presented the Ideal Tester to my team and it was well received, with a number of people really interested.  I presented the Ideal Tester to the British Computer Society Special Interest Group in Software Testing in September 2012 (slides here: http://www.bcs.org/upload/pdf/pnairn-131212.pdf) and got some very nice feedback.  A presentation cannot cover all that Janet and I thought about and discussed, so I decided to put together a series of blog entries to put the meat on the bones. 

 

So, what follows, as time allows, is a set of blog entries describing each of the properties of an Ideal Tester, what Janet and I meant by each property and some thoughts about each property.  What I write will be my interpretation of the discussions that Janet and I had.  I know that Janet would phrase some things differently, would stress different attributes and sometimes just downright disagree with me.  Apologies, Janet, if I misrepresent your views, it is not intentional, but as you know very well, I sometimes get things wrong.

 

Let me say, straight off that this list is not my list, it is not Janet’s list, it is our list.  I would be willing to put money on the fact that neither she nor I would have come up with the same list on our own, nor would anyone reading this entry.

 

Let me also say, I don’t believe that the Ideal Tester exists, or at least, I haven’t met him or her yet.

 

The blog entries that follow, therefore, will look at each of the 10 properties in the following list:

 

10 Properties of an Ideal Tester

 

1. Passion for Testing (http://www.sqablogs.com/petenairn/3310/The+Ideal+Tester+-+Part+1.html)

2. Knows a variety of test design techniques, and how and when to apply them (http://www.sqablogs.com/petenairn/3318/The+Ideal+Tester+-+Part+2.html)

3. Knows the value of the testing performed (http://www.sqablogs.com/petenairn/3319/The+Ideal+Tester+-+Part+3.html)

4. IT “Savvy” (http://www.sqablogs.com/petenairn/3326/The+Ideal+Tester+-+Part+4.html)

5. Critical thinker

6. Ability to ask “What if?”

7. Subscriber to tester blogs and testing web sites

8. People interaction

9. Willingness to learn

10. Has analytical/problem-solving skills

 

 

Happy to receive any comments to “pete dot nairn at btinternet dot com”.  If I get a number, I will create blog entries with my responses.

Testing and tools - why the beef?

Posted on Fri 20 Jul 2012 at 01:28 in Testing

I seem to be seeing more and more comments about how tools are encouraging poor testing, or tools are discouraging good testing. 

There was a discussion on Twitter between James Bach and Simon Knight about Session Based Test Management and Quality Center, which Simon blogged about here http://sjpknight.com/sbtmvsqc/

There was a statement by someone on Twitter (can’t find the reference) that QTP encouraged poor testing. 

I spoke to a tester recently who complained that the tool he is forced to use means his testing is not as good as it could be.

Can I make one statement about all of this?  A tool does not determine what testing you do.  You, yes you, dear tester, determine what testing you do, NOT the tool.  And, before anyone thinks “the testing I do is determined by my Test Lead/Test Manager/Scrum Master/Project Manager/vendor tool/home grown tool/someone else”, please, please remember that you do the testing; if anyone or anything else is telling you what to do, you aren’t testing.

If you believe that testing is an intellectual activity whose success depends on how well you are thinking, how well you are learning as you test and how well you react to the results you get then the tool is irrelevant. 

I accept, and have seen, that you can do poor testing when using a tool.  Also, you can do poor testing if you do not have a tool. 

Accept then, that you can do good testing when using a tool.  Also accept that you can do good testing if you do not have a tool.

See!  A tool does not do the testing, a human being does. 

If, truly, a tool affects the quality of the testing, then I have to say that the fault is with the tester, not the tool. 

Would William Shakespeare, I wonder, have blamed the type of pen if he had written a bad play? 

 

Sometimes I just want to scream...

Posted on Mon 16 Jul 2012 at 01:49 in Test Management

Had a long meeting with a test manager last week.  He has spent a lot of time on tools with his team, invested a lot of effort in making his automated test suite work for him and, from what I could see, has done a pretty good job.  He had a number of performance tests done which, again, looked pretty good: good use of the tool, good results coming back.

 

He then spoilt it for me.  His test case management tool work was equally extensive, and he had taken it to the level where every test case was in the tool, test case execution was done through the tool, and the tester had to execute every step at the lowest level possible (move mouse to field, enter the value) and then record whether the actual result met the expected result, for every step.  He was proud of the fact that his tool gave him all of his traceability and coverage metrics because of this.

 

For reasons I can't go into, I was unable to say "do you realise how dreadful this testing is?"; my tongue was bleeding from having to bite so hard on it. 

 

His assessment of the effectiveness of his testing was based on metrics coming from his tool.  I have never seen such metrics reflect reality.  The effectiveness of testing has little or no correlation with how well the test management tool works - why can't people see this?

 

Where have you been, Mr Nairn?

Posted on Thu 5 Apr 2012 at 02:35

I noticed with shock that it is over 2 years since I last posted a blog entry.  Why?

 

It is interesting that I have not stopped writing, just stopped posting here.  I looked at what I have written, and each of the potential entries is:

- Not complete, either in thinking, writing or both

- Internal to the company

- Dreadful!

 

My enthusiasm for posting went away due to a number of factors.  In recent months I have re-discovered that enthusiasm and have started using my Twitter account.  I think I will start blogging again. 

 

So, I will make this a short entry and hopefully get posting again more regularly.

 

One of the things that has got me more enthused recently (and I will probably blog on why my enthusiasm waned) is that I have been seeing more and more enthusiasm within my own test team in my new company, and that has been a long, hard slog.

 

I have been having some very interesting testing discussions with one tester in particular that have made me think hard, and I always like that.  Some of the results of those discussions will make great blog entries!

 

It is only a one line fix

Posted on Mon 21 Dec 2009 at 07:08 in Testing
 



We had a problem report outstanding for a long time, a simple little problem. When a user gets a message in their inbox, a yellow marker shows up on their screen to indicate they have a message waiting. Some messages are important messages that the user has to action, and these get a red marker. Some users receive the important messages only as an FYI; they were still getting the red marker even though they can't action the message, and in this circumstance they should receive neither a red nor a yellow marker. A low severity, low priority bug.

The fix comes in from Development: remove the marker for these users. A simple one-line change.


We test it. It is a low severity/low priority bug amongst all the other bug fixes we have to retest, and doing a lot of testing in this area requires a lot of data set-up, so we just do a quick check. All is well.


The release goes to Live.


Everyone reading this knows what is going to happen next, otherwise there would be no point to this post! Yes, something was broken. True, the users who were not supposed to get their marker didn't get it; however, no other users were getting any marker either. This marker is relied on for important messages, as the users need to take action - no action means the user is prevented from using the system. Not getting the marker meant some users were not actioning messages and were getting locked out of the system.


Big “oops”.
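We never saw the code, so the sketch below is entirely hypothetical - the names, the shape of the rule and the nature of the fix are all invented - but it shows how a genuine one-line change can pass the narrow check and still break everyone else:

from dataclasses import dataclass

@dataclass
class Message:
    important: bool

@dataclass
class User:
    can_action: bool   # False for users who get important messages only as an FYI

def marker_intended(user, msg):
    # The rule as it should be after the fix.
    if msg.important:
        return "red" if user.can_action else None   # FYI users: no marker at all
    return "yellow"   # ordinary message waiting

def marker_as_shipped(user, msg):
    # A plausible "one line fix" gone wrong: the check on the user was
    # dropped, so nobody gets the red marker.
    if msg.important:
        return None
    return "yellow"

# The quick check we did - an FYI user no longer gets a marker. Looks fine:
print(marker_as_shipped(User(False), Message(True)))   # None, as requested
# The check we skipped - a user who must action should still get red:
print(marker_as_shipped(User(True), Message(True)))    # None - the regression

The quick check confirms exactly what the bug report asked for, and nothing else; the regression lives in the case nobody re-ran.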


OK, so all test groups make this type of error sooner or later and we learn from them (sort of, sometimes we don't). But the whole episode made me think about Michael Bolton's distinction between “checking” and “testing”. What we tend to do on low severity bug retests is “checking”. Heck, we have already tested this area once, let's just check the fix. We might do some regression testing, maybe by running an automated suite, but we rarely test a bug fix that is low severity. Maybe other test groups do more testing on this type of bug fix, but mostly my group doesn't.


So, I asked myself, SHOULD we have done more testing? In order to answer that, I looked back at all the low severity bugs we had found, over 3000 of them. I looked at how many of these fixes had failed retest and/or caused a problem in Live. Few had failed retest, fewer than 1 percent. I could only find one other low severity fix that had caused a problem in Live, and that problem was also low severity. I also estimated what it would have taken to test these fixes rather than check them. The answer was: a lot. The rate of return for putting testing effort into these bug fixes would have been incredibly low.


So, my conclusion was that, yes, it was bad that the bug got through, but I don't want to start testing low severity bug fixes; it would cost too much effort that would be better targeted at more important areas, finding more important bugs. I will just have to take the risk that another low severity bug fix will cause a problem in Live.


Testing is all about assessing risks and acting accordingly.



Problem with hiring testers Pt2

Posted on Fri 9 Oct 2009 at 07:09 in Test Management
 

We are in a recession and the unemployment rate is increasing, so why, oh why, am I finding it so difficult to find a good tester to hire?


I have some theories as to why there are so few people applying for my vacancy, but no real facts.


And the CVs I have received are, quite frankly, dreadful; and with those that aren't dreadful, when I interview it becomes clear that the CV lied. An example: one CV said that the person was experienced in SQL, and in the interview the person could not describe how to do a simple SELECT statement! OK, SQL experience is not vital to be a good tester, but if you say you know some SQL, then please, please, be able to describe what you use SQL for and how to do simple operations.
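For the avoidance of doubt, this is the level I mean by “simple” - a single SELECT with a WHERE clause. It is shown here driven from Python's sqlite3 module purely so the example is self-contained; the table and data are invented:

import sqlite3

con = sqlite3.connect(":memory:")   # throwaway in-memory database
con.execute("CREATE TABLE testers (name TEXT, years_experience INTEGER)")
con.execute("INSERT INTO testers VALUES ('Alice', 5), ('Bob', 2)")

# The sort of statement any "experienced in SQL" candidate should manage:
for row in con.execute("SELECT name FROM testers WHERE years_experience > 3"):
    print(row)   # ('Alice',)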


Do applicants really think that the person interviewing them knows less than they do and that they won't be caught out? I guess they must, and it makes me wonder whether such applicants do get through some interviews and start work doing a poor job. Probably yes, especially as these people are in work now.


It all got me thinking about the state of the testing world (again, I sometimes get thinking this way!). I am surprised at the lack of skill in testers who have 3, 4, 5 years' “experience”, and yet there are a lot of testers in the software world. I am baffled. Does this mean that there are a number of companies who have test teams made up of poorly skilled people? Does this mean that there are a number of companies who are paying people relatively high salaries to do a bad job?


Now, maybe I am lucky: the company I work for values testing as an integral part of the development of software and is keen to employ good testers, and every company I have ever worked for in a testing capacity has had the same view. So maybe I just haven't worked for a company where this isn't so, and there really are test teams out there who are useless? Thinking about it, I did go for a job once with a company that clearly had no clue about testing – they wanted to hire me as a Test Manager, the IT Director interviewed me (very badly, as I recall) and it was clear that I would be a bit-part player in the decision making process. I turned the job down. Maybe I should have taken it, just to see how I could have changed things.


I know I have been accused of being picky in the past when hiring testers, and I would agree, I am picky – because I want someone who is skilled in the craft of testing or someone who has the potential to be skilled. Surely that should not be too much to ask?


This week I have, finally, filled the vacancy. Assuming he accepts the offer. It has taken me about 6 months to find him.


The problem with hiring testers

Posted on Thu 17 Sep 2009 at 06:52 in Test Management
 

Tony Bruce asked a question on LinkedIn that I just had to blog about.


Tony was asking why job adverts are so specific in their requirements for testers instead of asking for a good tester.


I think this is a good question and one that I have been struggling with for years. Let me give an example of what has been happening to me recently when trying to recruit – the same issue, seen from the other side of where Tony was looking.


What I have been looking to recruit is someone who can test. Full stop, that is what I want. I don't care if they have a background in banking, health care, games, whatever. I don't care if they have used QTP, Selenium, Robot or any other tool. I don't care if they have been using Agile, Waterfall or any other methodology. I don't care if they have Unix, Windows, mainframe or any other knowledge. I WANT SOMEONE WHO CAN TEST!!


So, I go to HR and say that. I get blank looks at first, then I explain a bit further and the conversation goes something like this:


“Oh, you want a junior tester, then?”

“No, I want an experienced tester who knows how to test.”

“So what technologies do they need to have?”

“It doesn't matter, I want someone who can test”

“What tools do they need to have used?”

“It doesn't matter, I want someone who can test”

A pause ensues.......

“OK, so what qualities do you need?”

(Methinks: Ah, we are getting somewhere!)

So I explain that I want someone who has the ability to think creatively, be able to apply testing techniques to a testing problem, be a good bug finder, etc, etc.

“Oh, you want a junior tester, then?”

“Aaaarrrgggghhh”


I know what the real problem is with this conversation: HR don't understand what testing is all about. They understand the need for technologies, tools, etc., that classify the “type of person”. They can't understand the qualities required of a tester – or I can't explain them well enough.


End result is that HR put the job spec that I have put together into their infernal machine that determines what the salary should be for this new hire, and it comes out ridiculously low for an experienced tester; in fact it is at a junior tester level.


So, what do I do? I put together another job spec that has the requirements stated in terms of tools, technologies, methodologies, etc., re-submit that and lo and behold I get a salary level that looks about right.


The end result is that I have an advert out that states things I don't need as the only way I can get a decent salary level for the person.


That then perpetuates into the job agencies, who filter out anyone who doesn't fit my “requirements” and, yes, I know I am potentially missing out on someone who would be a good tester.


And then, when I interview people, I am sure they wonder why I am not very interested in the skills I asked for in the job advert.


Wish I knew how to resolve this problem....

Secrets of a Buccaneer Scholar - book review

Posted on Thu 3 Sep 2009 at 05:50 in Musings
 

I have just finished reading “Secrets of a Buccaneer Scholar”.


I ordered the book in May and it was delivered last Thursday. I put it onto my pile of unread and part-read books and there it was destined to stay for a few weeks until I got round to it. Then something happened on Monday, which was a public holiday in the UK. I got sick. Nothing serious, just a bad cold. Unable to do the jobs I wanted to do in the garden, I looked around for something to do and naturally gravitated to my pile of books. I had just started “How We Test Software at Microsoft”, but couldn’t get enthused about picking that up again. I almost started having another attempt at reading a book on how to speak Hindi, but that seemed too hard, and my book on ballroom dancing techniques just requires too much concentration (I am learning to teach dancing). Buccaneer looked to be just about the right size, so I started it. I finished it in two chunks of time, which is unusual for me as I often start a book and finish it weeks later (I am reading Weinberg on Writing: The Fieldstone Method – which is a great book, but it is taking me time to get through it in small nibbles).


Getting back to Buccaneer. It is written by James Bach and is not a testing book; it is a book about how he learns things. He learns things differently to how I was taught to learn: he learns through his own method, and it works for him. I was fascinated, partly because it works for him but also because some of the things he does and says struck a chord with me. For example, there is a great story in the book about how clams helped him to write more of the book (read the book, it will make sense!) because of his use of procrastination. I procrastinate too: I start writing or doing something, then stop, do something else and come back to the first task and do it better and/or finish it. But I always feel guilty about the procrastination - “Finish what you started” was a mantra drilled into me from an early age. James does it almost by design, and maybe in the future I won't feel so guilty.


The book has lots of stories in it from James' life and he uses the stories to good effect to make his points. He also uses mnemonics, SACKED SCOWS will remain with me now as something to work through. He uses heuristics which make sense, although I can see that they will require practice to make them work for me.


The very personal nature of some of the stories made the book come alive for me and it is easy to see how the messages can be used in a practical way.


Having read some of Jerry Weinberg's books, I can see his influence a great deal in James' book and that is not a bad thing nor is it surprising when you know that Jerry is James' mentor.


Not all of the tips and techniques in the book will work for me, I believe, but I am going to give at least some of them a try. I need to think about the book for a little while and then I will re-read it.


In summary, I loved the book and would heartily recommend it to anyone who is passionate about learning.


Well done, James, and thanks for a) taking my mind off feeling ill and b) giving me something I can use – I can't think of higher praise for a book than saying I will use what is in it.

I got a new Netbook!

Posted on Thu 3 Sep 2009 at 05:46 in Musings
 

I have just bought myself a new Netbook computer! A Samsung N120, if anyone is interested. Why? Well, I do a lot of travelling and a laptop is just too heavy, the battery life is insufficient and it is too big to use on your lap on the train. And I just don't seem to have the time to write for fun (as I am doing now) when I am at work or at home. This little beauty is just right: up to 11 hours of battery life (or so Which? said) and nice and light for carting about.


Like any good tester, I didn't bother reading the manual before starting it up; I just booted it and made a start. Keyboard nice and responsive; touchpad really nice, with a scroll ability like a mouse wheel (never seen that before on a touchpad).


All good so far. Connection to my wireless network at home was simple, as it should be; I set up user accounts for my wife and me, configured McAfee, etc., etc. I then downloaded OpenOffice. I had used it before, but only briefly, and decided to try it out as I had heard some good things about it. It installed quickly and easily and I was soon able to write my first document – this one. I have to say, I am impressed, especially for something that is free. I have played with Calc, Draw and Base and will play, sorry, work with the others.


One major gripe with my purchase. I bought the machine based on the Which? report on netbooks and that is fine. I also bought mobile broadband, also based on a Which? report. I bought this from British Telecom and every day now I receive an email from them saying that delivery has slipped another day. I am starting to get fed up and am considering going to another supplier.


Well, not really a testing blog entry, but I had to start somewhere. Hopefully this means I will be able to write a bit more often on my blog now!


Oh yes, and the machine has a Webcam! No use whatsoever for me!

 

I did a bad thing last week

Posted on Thu 25 Jun 2009 at 08:11 in Testing

Last week I attended the BCS SIGIST (no, that isn’t the bad thing!).  As usual, the day was well spent and enjoyable – Michael Bolton was excellent, as expected.

 

One of the presenters was Lloyd Roden.  Lloyd is a very good speaker; I have heard him speak a few times and I like to listen to him.  Lloyd belongs to Grove Consultants and I have enormous respect for Grove and the people in it, having recommended them to a number of people, and I will continue to recommend them.  His presentation this time was on his top ten controversies in software testing and it was very entertaining; Lloyd is good at that.  I agreed with his view on 8.5 of his top ten.  The one I had a major disagreement with was certifications.  Lloyd’s view is that they are useful; I disagree.  That I disagree is not a bad thing; however, I became incensed by his argument, interrupted rudely and argued with him whilst he was doing the presentation – that was the bad thing.  I should have talked to him off-line, not interrupted and argued during his presentation, and especially not argued with him whilst annoyed, as I was.  Lloyd did a very good job of shutting me up politely; well done to him.

 

I have emailed Lloyd with an apology, and let this be a public apology.

 

What got me annoyed?

 

Lloyd argued that other professions use certifications, and he gave examples; therefore, he said, we should too.  He also said that ISEB/ISTQB certifications are good because they give common terminology.  There are fallacies in this argument, as I see it, as follows:

 

Other professions do use certifications, it is true.  There is a big difference, however, between the examples given and what we have in ISEB.  The big difference is that professional certifications, like CORGI (now the Gas Safe Register), which was one of Lloyd’s examples, require the person who is certified not only to undergo training, but to show their competency.  They can also have their work inspected to show that they do good, safe work.  From my own life, as I have mentioned before in this blog, I enjoy Ballroom and Latin American dancing (see http://www.sqablogs.com/petenairn/783/Dancing+is+like+Testing.html).  I am currently training to be a dance teacher.  I will (hopefully) get certified, and to get that certification I need to know the theory but also to demonstrate that I can actually dance.  I cannot even study to become a teacher until I have passed other tests that show I can dance to a reasonable level.  These are measures of competency and knowledge.

 

What do you have to do with ISEB?  Turn up to a testing station, answer some multiple choice questions and, hey presto, you are a certified tester!  Yes, you can go on a training course, and even if you don’t, you will have to do some self-study.  But you do not have to show you can test.  This certification is, therefore, a measure of knowledge, not competency.  See my previous entry for where the two can become confused: http://www.sqablogs.com/petenairn/2296/Certifications+don%26%2339%3Bt+give+competency.html

 

To be fair to Lloyd, he made it clear that you can be a good tester with no certification and a bad tester with certification, which I agree with.  So I have to ask: what is the value of having the certification in that case?  I find it difficult to respect a certificate if someone who has never tested software can pass the exam – and I personally know of two people who have done just that.  If I know of two, how many other people who have never tested have also passed the exam?  How is that certificate valuable? 

 

Is there a way of measuring the competency of a tester by the use of an independent body?  This is where it is very difficult for a national or international certificate to be devised.  I don’t have an answer, except to say that when I was at IBM we had an internal certification programme whereby you had to show competency before getting the certificate.  It worked reasonably well; I was on the board that examined testers’ requests for certification and we were very stringent in making sure we understood whether the person was competent enough to get that certificate.  Certification wasn’t done by the use of a test, but by examining the actual work that the tester had done on one or more projects.  That has its flaws too, and I am not convinced in my own mind that the benefits outweighed the flaws, but it is a better measure than only sitting a test.

 

The real problem with the certification of testers is the view that companies and recruiters take of the certificate.  I see a number of job adverts with certification as a requirement to even apply.  This tells me that the industry ascribes more weight to the certificate than it deserves. 

 

Being able to apply for jobs is the only reason I got the certificate!  Does that make me a hypocrite?  Possibly, probably, but feeding my family is more important to me than principles.  I wonder how many other testers took the exam for the same reason?  Is that a good reason for having a certificate?  It appears to me to be the only valid reason I can find, but it isn’t a good one.

 

Turning to the point about common terminology, which is another argument for certification: this is also fallacious.  Common terms exist, but what people mean by them differs with the context of the company, project, test manager involved, culture, etc., etc.  I don’t believe we will ever all agree on what is “common” because it always varies, even if slightly, depending on what you are doing at the time.  In some professions common terminology is vital, e.g. if one surgeon calls an organ a “heart”, it is useful if the assistants at the operating table know what is meant.  If we were all doing the same project at the same time, then common terminology would be important, but software development isn’t like that.

 

One other point I would like to make is that there is a difference between getting the certification and getting the training.  I have heard the argument that certifications are a good way for consultancies and training companies to make money out of training courses.  This may be true for some organisations, where they only teach people to pass the exam.  Some organisations, however, provide education on testing and also teach people how to pass the exam.  The former are doing the industry no good; the latter are definitely performing a good service.  I put Grove in the latter category – I have not been on one of their training courses, but people I know and respect have, and from what they tell me and from the course materials I have seen, the courses are good.  And, no, I do not have any affiliation with Grove whatsoever!

 

The arguments both for and against certifications can get emotional (as mine did at SIGIST); I suspect the arguments will continue for some time and there will not be a successful conclusion for either viewpoint.

 

Last point from Lloyd’s presentation: the other 0.5 I disagreed with.  He stated that Test Managers should set aside time each week to test.  I think this is something we managers should do, but only if it makes sense on the project.  When I have over 50 testers to manage, my time is better spent on managing than doing.  By “better” I mean better for the project, better for the company, better for me and better for the testers – I am paid to manage, not to execute tests.  I will expand on this in a future blog entry.

 

So, once again, sorry, Lloyd, for my behaviour – I won’t do it again. Shall we agree to disagree?

Certifications don't give competency

Posted on Wed 3 Jun 2009 at 09:50 in Testing

There are a lot of people arguing for and against certifications for testing.  Personally, I don’t like certifications, as I don’t believe they tell me anything about a tester’s ability to do the job.  Here is an example of where a certification can give a tester a false sense of security.

 

My customer has sent all of the User Acceptance Team onto the ISEB foundation course and they have all taken the exam.  Some have passed with no problem and some have failed, a couple more than once.

 

I was doing some fairly basic testing with one of the UAT testers the other day.  We were testing a change to some data rules whereby the data was shown to the user, or not, depending on dates and the class of data.  I drove the system for a while and then he did.  When he was testing, I noticed that he was using dates that were way earlier or later than the boundary at which the rule kicked in.  I asked him why he had chosen those dates and he said that he was using Equivalence Partitioning, such that any date would do.  OK (ish), except that the rule was very boundary dependent, and the better test would be to choose dates on, just before and just after the boundary.  He was also testing every class of the data.  Again, I asked why, and he said he needed to check every one.  Here was a good case where Equivalence Partitioning made sense, as the groups of classes behaved the same way.  I had some difficulty persuading him that different values would be a more effective way of testing the changes, as he had convinced himself that what he was doing was right – after all, he had been on the course, had been trained in how to use these techniques and had passed the exam, therefore he knew what he was doing.

 

So, although the tester had understood the concepts of Boundary Value Analysis and Equivalence Partitioning, he did not understand their applicability in a real-world situation. 
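To make the point concrete, here is a toy sketch of the two techniques side by side.  The rule and the cutoff date are invented, not the customer's actual system:

from datetime import date, timedelta

CUTOFF = date(2009, 6, 1)   # hypothetical boundary at which the rule kicks in

def visible(record_date):
    # The rule under test: data is shown from the cutoff date onwards.
    return record_date >= CUTOFF

# Equivalence partitioning alone says "any date in the partition will do",
# which is how the tester ended up with dates far from the boundary:
weak_tests = [date(2005, 1, 1), date(2015, 1, 1)]

# Boundary value analysis targets the dates where off-by-one mistakes live:
strong_tests = [CUTOFF - timedelta(days=1),   # just before: expect hidden
                CUTOFF,                       # on the boundary: expect visible
                CUTOFF + timedelta(days=1)]   # just after: expect visible

for d in weak_tests + strong_tests:
    print(d, visible(d))

An implementation that mistakenly used > instead of >= would pass both weak tests and fail only on the cutoff date itself, which is exactly why the three dates around the boundary earn their keep.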

 

It confirmed, in my mind, that the value of doing the training and taking the certification was low.

 

This is the problem with a knowledge-based exam/certification: it has no bearing on the competency of the person who has taken the exam.  Competency can be trained, but not by the use of certification training.

Embarrassing moment...

Posted on Tue 28 Apr 2009 at 11:07 in Test Management

One of the things I try to impress upon my testers is that when testing it is important not to “prove” that the system works correctly, but to find the weaknesses, faults, bugs.  If a tester tests looking for the system to do what it is supposed to do, then that is what they will find – nothing (or very little) in the way of bugs – whereas if they are testing to find faults they are more likely to find them.

 

It is too easy to think “that is what the specification says, therefore, that is what I will test against”.

 

I fell into that trap this week.  We have a lot of faults that have come back from Development and a short timeframe in which to re-test them.  All of the faults are allocated out to team members to retest.  One of my testers needed to complete some other work before starting on the re-tests, so I agreed that work needed finishing first; then I said, “And then you can close off those faults”.  After a short pause, he said, “Don’t you mean that I should find the faults in the fixes?”.  It was said in a jokey sort of way, but it was a good point!

 

Two things hit me.  First, of course, that is what I should have said, and it was embarrassing that I had fallen into the trap.  Secondly, I was very pleased that the tester had pulled me up on what I said; it means the right mindset is taking hold in the testers.

 

Isn’t it odd, though, how even seasoned professionals like me can make that type of mistake?  

 

Some FACTS about testing

Posted on Thu 9 Apr 2009 at 04:42 in Stories

FACTS:

- The customer is always right

- Automated testing always gives the right answer

- I don’t believe in coincidence

 

Maybe these facts aren’t correct?

 

Here is a cautionary tale of something that happened to us over the last two weeks.

 

We have a process with our customer to update some reference data on the database periodically.  This data gets updated maybe four or five times a year; data is amended or added to (never deleted).  This is their data, specialised information that, quite frankly, we don’t claim to understand, nor do we need to (hold that thought!).  Over the years we have made this a pretty slick process: the customer provides a spreadsheet with the additions and the amendments, which gets folded into the data on the database.  We, the test team, have an automated script which compares what the customer wanted with what we have on the database.  The automated script takes minutes to run (for those interested, it is a VB script in Excel).  If all matches, we sign it off.  Up until last week, we had had no problems with this in over 4 years.
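Ours is a VB script in Excel; the idea, redrawn as a hypothetical Python sketch (the table and column names are invented), is nothing more than this:

import csv
import sqlite3

def compare(spreadsheet_csv, db_path):
    # Report rows where the customer's spreadsheet and the database disagree.
    con = sqlite3.connect(db_path)
    mismatches = []
    with open(spreadsheet_csv, newline="") as f:
        for row in csv.DictReader(f):   # assumed columns: key, value
            found = con.execute(
                "SELECT value FROM reference_data WHERE key = ?",
                (row["key"],)).fetchone()
            actual = found[0] if found else None
            if actual != row["value"]:
                mismatches.append((row["key"], row["value"], actual))
    return mismatches

Note the assumption baked into any script of this shape: the spreadsheet is treated as the oracle.  When a mismatch appears, the script cannot tell you which side is wrong.  Remember that for what follows.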

 

Last week, we found a discrepancy when the automated script was run: two values did not match.  No big deal; a bug report was written, the database team corrected the data in the database, and we closed off the bug report as the automated script now said that the two matched.

 

End of story…..

 

Well, not exactly.  The changes went into the UAT environment, the users started using it and the system started behaving very strangely in a key application.  By coincidence, there was a fault raised in Live at the same time in this same area.  Aha!  A duplicate of the UAT fault in Live – not a problem, not too serious, don’t panic, chaps!  Upon investigation, the Live fault was found to be a user error, so that’s OK then….

 

Well, not exactly.  To their credit, the UAT team stuck to their guns and insisted that the problem in their environment was analysed.  We sighed, grumbled about the picky customer, you know the sort of thing, but we started investigating.  It quickly became apparent that the problem that the automated script had seen was not a problem in the database at all, but a problem with the spreadsheet that the customer had given us.  The database had, therefore, been incorrectly “fixed” to be equally incorrect, causing the problems in the UAT environment.  This also showed that the problem was not the same as the one in Live.

 

No problem!  Uncorrect the database and everything will be fine…

 

Well, not exactly.  The change was backed out of the database and the UAT environment was still behaving strangely.  There was much scratching of heads, tut-tutting, and sage experts poring over complicated SQL and even more complicated COBOL programs.  After some considerable, stressful hours, the problem was found.  The misbehaviour caused by the incorrect values in the database had caused flags to be set incorrectly, so that when the correct values were restored, these flags were still wrong.  We corrected these and everything was OK – with 5 minutes to spare before the customer pulled the plug on an important release.
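That last step is the one worth remembering, because it generalises: state derived from bad data does not heal itself when the data is repaired.  A toy illustration, with everything invented:

reference = {"class": "A"}      # the reference data itself
flags = {"suspended": False}    # state derived from it as a processing side effect

def process(value):
    # Hypothetical processing: an out-of-range value trips a persistent flag.
    if value not in ("A", "B"):
        flags["suspended"] = True

process("Z")                  # the incorrect value arrives and gets processed
reference["class"] = "A"      # the bad data is later backed out...
print(flags)                  # ...but the derived flag stays set: {'suspended': True}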

 

Debunking the facts!

 

- The customer is always right.  Not really; the customer can make mistakes too.  The real fault is believing that the customer has NOT made a mistake and jumping to the conclusion that the system is at fault.  More thought is required in diagnosis.

 

- Automated testing always gives the right answer.  I never, ever, believed this one.  Automation without thinking about the problem is not testing.  It is, however, very easy to get complacent about an automated test that keeps giving you answers that look correct (discrepancy or no discrepancy).  We got complacent about this process of updating data.  We won’t make that mistake again AND we need to understand more about what this data does so that we don’t rely on UAT to determine whether it is right or not!

 

- I don’t believe in coincidence.  Here, there was a real coincidence that threw us off the scent for some time.  Coincidences DO happen!

 

Some interesting lessons learned (or, maybe, re-learned).

 

 

King Midas' ears - a rant

Posted on Tue 11 Nov 2008 at 01:08 in Test Management

You know, sometimes you just need to rant.  I was reminded of King Midas’ barber and Midas’ donkey ears today.

From Wikipedia:

“Once Pan had the audacity to compare his music with that of Apollo, and to challenge Apollo, the god of the lyre, to a trial of skill (also see Marsyas). Tmolus, the mountain-god, was chosen as umpire. Pan blew on his pipes, and with his rustic melody gave great satisfaction to himself and his faithful follower, Midas, who happened to be present.

Then Apollo struck the strings of his lyre. Tmolus at once awarded the victory to Apollo, and all but Midas agreed with the judgment. He dissented, and questioned the justice of the award.

Apollo would not suffer such a depraved pair of ears any longer, and caused them to become the ears of a donkey. The myth is illustrated by two paintings "Apollo and Marsyas" by Palma il Giovane (1544-1628), one depicting the scene before, and one after, the punishment.

Midas was mortified at this mishap. He attempted to hide his misfortune with an ample turban or headdress. But his hairdresser of course knew the secret. He was told not to mention it. He could not keep the secret; so he went out into the meadow, dug a hole in the ground, whispered the story into it, and covered the hole up. A thick bed of reeds sprang up in the meadow, and began whispering the story and saying "King Midas has a donkey's ears."

Today I feel like King Midas’ barber – I can’t keep this in any longer.

 

Something that really makes me cross is when so-called industry experts believe that they know better than you how to solve your problems.  You see it in response to questions in forums, you see it in people’s blogs, you hear it at conferences and you hear it in conversations.

 

An example: I was at a conference and in the evening got into a conversation with one of the speakers.  This speaker (whom I will not name) is well known in the testing world, has written articles and books, teaches and speaks, and is well respected.  We talked a bit about generalities of testing and then got talking about what I was doing, and his only retort was “I’ll give you my card, I would sort it out”.  How could he possibly know he could sort it out?  Why did he think he could do a better job than I could?  He didn’t know the context, the constraints or anything about the project.  The arrogance just stunned me and I couldn’t think of a suitable response. 

 

My particular problems are split into a number of categories:

1) Those that are imposed as constraints on me from higher up – things like the project’s budget and company policy.  I can’t solve those; I have to live with them and make the best of them.

2) Those that are caused by the project methodology.  The project uses a Prince 2-like methodology, which means that there are some things you just have to do.  Whether you think Prince 2 is good, bad or indifferent is irrelevant; as with any methodology, problems are caused by it.  Some I can work around, some I can’t.

3) Those that are caused by people working on the project but not in the test team.  They have their own priorities, ways of working, constraints, etc.  Some of those may cause me problems.  Some I can resolve, some I can’t.

4) Those that are caused by people within the test team.  These are totally within my control and I can work towards resolving them (e.g. skills gaps).

5) Those that are caused by me.  Much as it pains me to admit it, I am not perfect and I cause some of my own problems.  I try to work towards self-improvement, but perfection is probably about 300 years away.

 

I am sure that I am not the only one with this set of problems.  I am glad there are problems; otherwise I would be out of a job!

 

I don’t believe that any one person can solve all of these problems for me.  I most definitely do not believe that anyone could come into the project with no knowledge of its history, culture and methods and sort out my problems.  If I had a free hand, a lot of time and a lot of money, I might, just might, be able to solve some of the problems this project has, but that is not going to happen.

 

I ask for help and advice, like most people.  I am not afraid to listen to people who have tried things and found them to work.  I have tried some of these methods; some have worked, some haven’t.  But no-one, I repeat, no-one, could tell me how to sort out the problems I have.  

 

So my plea to the experts is: “Don’t tell me you would know how to solve a problem; give me pointers to how I can solve the problem.”

 

Finally, here is a counter-example to the one I gave above.

 

I spoke to Johanna Rothman at a conference and asked her a question related to the talk she had just given.  She did not give me the answer, nor tell me how to solve the problem.  What she did was ask me questions about the problem and make a couple of suggestions as to how I might approach solving it.  It was wonderful; I was able to take that away and work on solving the problem myself.

 

Rant over. The reeds have finished their story and can now die.

