Friday, October 21, 2016

Move to Wordpress

Hello!

If you're the sort of person who enjoys peaceful silence then you can continue to follow my Cage-esque experiment in self-publishing over at https://secondsignofmadness.wordpress.com/.

Monday, February 22, 2016

A Partial Defence of "Apathy"

Hello!

So far I know of one reply to my Apathy post that contains useful critique. You can find it here, written by a name you may know: Matt Heusser! First I have to say that I'm grateful to have a considered response. Now I'll expand a little on the points raised, and hopefully draw an interesting conclusion.

When you critique someone in public, there is an audience. That observer effect, to borrow a phrase, changes the nature of the conversation.

This is a useful insight, and it shows how dangerous my post could be. I don't know who's reading, or how they might employ the idea that they should shrug off apathy and argue the point - maybe I'm empowering people with immoral aims, or people who misunderstood my intent! I didn't expect anyone to question me on it. Now they have, and I have to offer some clarifications and try to be honest about what I learned and where I was wrong.

Sometimes, it can be tempting to play to the audience. If you think you’re talking to the other person, and they are playing to the audience, you’ll see a bunch of bizarre behaviors that don’t seem to make sense and you won’t be able to figure out from your position.

Wait, did I just play to the audience? Of course I did! But why? It's quite simple: I want to express and emphasise the idea that critique can be constructive for both parties and doesn't have to be a social nightmare or just to further personal ends. We can place the pursuit of further learning and conversational exploration above our need to seem clever... but it does depend on a hidden intent. You have to believe that my intent is to make us more honest with things that matter in our industry, although if you were following my advice you'd be slightly suspicious of my intent and make up your own mind.

So how do I salvage the point? There's not much I can do about intent. What I can do is say that with a critical mind you'll always be better poised to find something closer to an objective truth than without one. Just look at scientific progress. I can also say that the critical tools you use in discussions about a subject are ones you can use in testing software.


Instead of “yes, but…” say “yes, and.” Add, ways to do things, or think about that. “Here’s what has worked for me”. Consider the environment.

This is another critical point, the partial answer to which I hid in a footnote. People are emotional creatures and no amount of intent to find the truth will help you convince someone who thinks that your intention is to attack them instead of seeking a better idea or the limitations of one. Of course some people abuse this and deliberately overreact with anger or hurt feelings to make you look cruel, which can be a smokescreen to hide their bad ideas. It's not as simple as I made it seem, far from it.

That's not to say I regret typing it, though. I'd rather people asked the questions (nicely, generally speaking) and discovered what the reaction is. I'd rather people reading publicly-posted advice had the opportunity to look deeper into the subject. If the person posting the advice refuses to defend it (no matter the reason) then we can make a useful judgement about how valid that advice might be, although not necessarily about how invalid it might be. If the person posting the advice cannot defend it then we can make a similar judgement. It's not that the advice is intrinsically bad. Even if you're right, that doesn't necessarily make their point wrong, and making them seem wrong doesn't make their point wrong either. But it does mean that you, and anyone else reading it, are reminded to question what you're reading. Maybe someone else could take up their cause... that happens to me all the time and I'm often glad of it.


Consider getting to know the speaker. Find out if feedback is warranted. Ask if feedback is wanted. Ask for the best way to give feedback. [...] Sometimes, the right choice is to say nothing.

If we're talking about feedback this is great advice. If we're talking about tackling bad ideas and keeping us honest it's... a reasonable heuristic. Part of my intent is to get the professional testing community to tactfully request reasons. In most scenarios it doesn't have to look like feedback and it doesn't have to look like critique. When someone says "I had a great time at the testing meetup" you can say "sounds great! Did you learn anything good?". When someone says "I thought that lecture on testing was really good" you can say "sounds like you enjoyed it! Do you think it will change the way you test or think about testing?". When someone says "Bob's a really awful tester" you can say "what did you see that makes you think that?".

There are a lot of little cognitive tools and heuristics we can use to do this (I'm writing something on some of them now). That last one I used is called the data question [1].

That's a great start. It's positive. It can often be just asking for clarification so that you're on better ground to ask more questions, even if those questions are of yourself! We must create somewhat safe spaces in which to find better solutions, whilst not, in our friendly way, opening ourselves up to nonsense and snake oil.


If the idea is bad, if it is actively doing harm to the community, let’s separate that from “you are a bad person.” Character attacks might not be fit for public consumption, but you can attack bad ideas, often by the consequences of those ideas.

Please, let's! Character attacks aren't usually very useful for proving a point... unless the point you're arguing is about their character. Making character attacks to prove a point that has nothing to do with character won't help you express the idea that you're a professional skeptic (to other professional skeptics), because every professional skeptic should know about the misuse of ad hominem attacks.


Do it too much, though, and things can change; you’ll find your reputation is built on criticism. That can become a very dangerous business. Better to be known for what you stand for.

It's true, it can get to be excessive. It's also very tiring, and can be emotionally challenging, and we need to know that it's probably the same or possibly worse for others. It's possible to lose sight of the learning and focus on making an argument. If you've ever seen two people argue who don't listen to each other you'll have seen how it can get. If you've ever seen an argument where one or more parties have no intention of ever changing their opinion you'll know how pointless it seems. But it does show something important about that party - if they aren't willing to defend their idea, or consider that they might be wrong, or be open to learning something new, then how reliable are they with such matters? That doesn't mean that they're bad people, of course, just that they might be a marketer or salesperson rather than a professional skeptic. Perhaps it's not their job to seek truth in confusion. We can judge the value of their advice for ourselves when we know more about them.

However, I do want to be known as a professional skeptic. I also want to be part of a community that holds me to account if I say something confusing or disagreeable, so I want to be known as someone who is willing to critique and be critiqued, preferably in a somewhat humane and professional way.

I'd like to invoke the words of someone else to help me here.




Thanks Kino. If I had to critque, I might say the post was a little naive. That’s cool tho, man.

This is useful to mention, because it was naive. I did not construct a particularly well-formed argument in my post. I did not expand on the details or predict possible important counterarguments. I think my post was important, and served a message, and promoted thought, but it didn't question itself and it didn't urge caution or handle its own weaknesses. Moreover, I knew that I was doing it, which is why I put any defensive detail in a footnote. Mr Heusser spotted this and called me out on it.

Now look where we are. How did I find these additional insights? How is it that we dug deeper on this issue? Because one person (of a very select few) took it upon themselves to shrug off the apathy and question me. Because of that we've found some of the limitations of my heuristics. I've had to go deeper in my explanations. I've had to express more of myself for you to make a judgement about the nature of my intent. I've had to acknowledge weaknesses in my over-generalised argument, concerning caution and context.

See how good that is? If you're a professional tester you do this sort of thing as a job with software already! Now we can apply some of the same rules with each other.

Carefully.


PS. Many thanks to Matt Heusser for both taking my advice, and improving it at the same time.

[1] Gause, D. and Weinberg, G. (1989). Exploring requirements. New York: Dorset House.

Thursday, January 28, 2016

Apathy

How do you keep up to date with the test industry?

It's a question you've heard before at job interviews. You may even have a stock reply. You read books (you have one in mind in case they ask you what you last read) but you can't really remember the content, just the concept. You attend conferences, and while you had a great time at the after-party you forgot everything you heard because the content pandered to your emotions and to what you already believe anyway. You go to meetups and you have a simply lovely time agreeing with each other for two hours over a few drinks.

Can you defend this? Can you say that you keep yourself fresh and in-touch and up-to-date and you live in a state of constant self-improvement for these reasons?

We are a core of craftspeople struggling to improve ourselves and the world around us; we question everything, including ourselves. Yet it seems perfectly okay to state things like "I enjoyed that talk" without being asked to defend it. When did we get into a state where we refuse to question the value of a talk or book or article? It's very nice for us to have an opinion, but talks and books and articles absorb our time and attention and they should have some value, particularly and especially if we're going to recommend them to someone else.

So next time you hear someone tell you that they enjoyed a talk ask them why it was enjoyable and how it improved them or their testing. If we keep each other honest maybe we'll create a culture of self-questioning and improvement. If you're just interested in getting out of the office and enjoying the after-parties then could I recommend a holiday instead?

Moreover if you give out advice in public, especially if you're at a paid conference, be ready to defend it. If you're reading advice, especially if you've paid to hear it, why not question what you're hearing? Preferably publicly, so everyone can benefit from your question. If someone came for an interview and told you that they're awesome would you hire them at once, or probe for evidence? You need to ask questions. Even if you don't believe in your question you owe it to yourself, to them, and to the testing industry to pose that question - and if you're being questioned you need to understand that the questioning is for your benefit, as well as everyone else's.

We've all had a quick rant at certification or factory testing or misunderstanding of testing in our careers, so let's keep ourselves to a higher standard. Let's practice what we keep telling everyone we do and question things, including anyone who claims to speak with authority on subjects that matter. Let's not sit in the Church of Testing while we hear the good word from the pulpit, let's sit in the lecture hall after the presentation of data at a science conference and strongly question and debate what we're told in search of something better. Then let's question the questioners. And any reasonable target* of questioning should not hide behind outrage or social norms, but understand that we're all just searching for what's best.

Apathy is the dark shadows where confidence tricksters hide their lies. It's the cover of night that lets nonsense dance and play in the piazza of our industry. I now wonder if we can ever turn the lights on.


*Reasonable targets include anyone who speaks or writes with apparent authority or even strong opinion on a subject, including in response to anyone who speaks or writes with apparent authority or even strong opinion on a subject. The way that person should be questioned will depend on who they are. An industry leader should probably be more resilient than a testing newbie. Special attention should be given to anyone selling something. I'm giving the advice "be human, be an adult, have tact and try to communicate"; mainly to stem the flow of questions and comments on how we should all be nice to each other in case people become frightened to be involved.

Wednesday, July 1, 2015

Testers, Developers and Coders

What's the difference between a tester and a developer?

I originally looked at testing and development and the people that do it, like this:

I then migrated to believe in an overlap. Testers can sometimes develop, and developers can sometimes test.

Then came an important realisation. That testers are developers, insofar as they are part of the development of the software. Hopefully a well-used and important part.


However, there's a secret truth about this diagram. The strange thing about testing is that it doesn't require any particular skill, it just requires some sense of purpose: the desire to find things out about the product. When a coder compiles their code and sees a compiler error they've arguably just performed some testing. They have learned something about the product and can use that information to act on it. When they check their own work locally to see if it seems to behave sensibly before they put it in a build they have done testing. Informal testing, exploring the product for problems. This necessarily happens, it can't be avoided, even if they try! So where does that leave us, if coders are testers? Well, in some places they've decided that if testing can be done by anyone (which is pretty much true), then why not fire all of the testers and simply rename the coders "developers"? Test is dead! Except that it's not, it's just being done by someone else.

Think of coders doing testing as being like bleach: potentially dangerous, but usable for good. Now think of a view of testing in which "testing can be automated" and "acceptance criteria" are a suitable substitute for actual requirements and non-shallow testing as being like an acid: brought close to a sensitive surface like software development, it eats away at it. If we mix the bleach of tester-coders with the acid of testing ignorance we get a cloud of chlorine gas, choking the life out of coders, testers, software users, and anyone else who comes close enough. It's not likely to kill anyone in small enough doses, which is why it's possible to carry on with fake or bad testing, especially in a big company where there's enough room for the gas to dissipate a little.



That's where the test expert lives. We are wardens of the health and safety of the people who design, build and use software. We ensure that the bleach is used responsibly in a well-ventilated area, and we keep dangerous acids locked away in a cupboard. We do this not just by testing, or by concentrating on testing, but by being skilled and tenacious in our pursuit of the truth, and by making that truth palatable enough to be accepted by those blinded by their fantasies about their software. Don't strive to have testing achieved, strive to achieve good testing.

Tuesday, May 12, 2015

Improving Your Test Language - Automation

We think in language, and we communicate using language. Well, we perform a set of string translations that give the affordance of some meaning to someone else. I often look to Michael Bolton for clarity in the way we speak and write about testing, and his recent TestBash video is directly on the subject. I thought, as I hadn't updated my blog in months, that I'd post a little about the disjointed way some people in the industry talk and write about testing and how you might avoid doing the same - or, if you want to continue doing the same, at least understand the pitfalls of the language you're using and what you might be communicating to other people. A fair warning: this won't be news to many of you.

Automation

There is no automated testing. Automated / Manual Testing is a long-standing and extremely adhesive set of terms, but they should still be treated with due care. Testing is a human performance and cannot (currently) be automated. Manual Testing really just means "testing with your hands". In terms of software testing it cannot mean "testing without tools", in the sense that tools are required to interact with the product. One might describe hands and eyes as tools, or the computer itself, the screen, the hardware, the operating system, peripherals, the browser, and so on. These are tools that help you (enable you) to test the software.

I think that to most people automation means having the computer do a check and report the findings (automatic checking). At some point someone needs to design the checks and the reports. Also at some point someone has to read the results, interpret them, and assign them meaning and significance. These are points of access for human interaction with the tools being used in an example of tool-assisted testing.

That point is important - tool-assisted testing. All software testing is tool-assisted to some degree - and once one realises this it no longer splits testing neatly into two boxes where one is manual testing (executing test cases and some of "The Exploratory Testing") and the other is automated testing (a suite of automatic checks). Unfortunately it's a bit more complex and complicated than that. We need to look at testing as humans trying to find information. From that starting point we introduce tools because their benefits outweigh their costs. Well-written and maintainable automatic check tools can be really useful, but we must remember their costs - not just the up-front financial costs, time costs, opportunity costs, maintenance costs, training costs and so on, but also the fact that a tool is an abstraction layer between the user and the software, and that layer introduces abstraction leaks. Automation (automatic checking) does not "notice" things it hasn't been strictly programmed to notice. Nor does it use the same interface to the software as a human does (although hopefully it's similar in ways that matter). Nor can it find a problem and deep-dive - meaning that it cannot interpret a false positive or false negative result. Nor can it assess risk and make meaningful decisions. Nor can it do much of anything else a tester does. Nor can it do much that a tester cannot do with "manual tools" (tools that aren't automatic check execution systems) given enough time. An automatic check suite "passing" just means that the coded checks didn't report a failure, not that suitable coverage was achieved.
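To make that concrete, here's a minimal sketch of an automatic check in Ruby. The "product" here is a deliberately trivial stand-in method, invented so the example runs on its own; the point is how little of the check is left for the computer to do once a human has designed it.

def add(a, b)
  a + b                                   # stand-in for the product under test
end

def check_addition
  expected = 4
  actual = add(2, 2)
  verdict = (actual == expected) ? "PASS" : "FAIL"
  # Everything outside this single comparison - timing, logs, layout,
  # anything a human would notice in passing - is invisible to the check.
  puts "check_addition => #{verdict}"
end

check_addition

A human decided what was worth comparing, a human wrote the comparison, and a human still has to decide what the printed verdict is worth.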

This should empower you. It breaks a dichotomy of monolithic terms into a rainbow of possibilities. It should give you free rein to consider tools in your testing to make your testing more powerful, more fun, more efficient, even to make testing possible... at a cost that you've thought about. Some of these costs can be reduced in the same way that testing itself can be improved - through modularity and simplicity. Choosing simple, small or established tools like PerlClip, LICEcap, logs and log alerting systems, and one of my favourites, Excel, can give you flexibility and power. You can pick the tools that suit your context - your favourite tools for your testing will differ greatly from mine, and from one testing session to the next. I might use Fiddler today, Wireshark tomorrow, and Excel the next.
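To illustrate the "simple, small tools" end of the spectrum, here's a rough Ruby sketch in the spirit of PerlClip's counterstrings - an illustration, not the actual tool. It builds a string in which the digits before each * tell you that character's position, so if a field silently truncates your input you can read off exactly how much of it survived.

# Hypothetical counterstring generator, inspired by PerlClip (not the real tool).
def counterstring(length, marker = "*")
  result = ""
  position = length
  while position > 0
    chunk = "#{position}#{marker}"                        # e.g. "30*" marks position 30
    chunk = marker * position if chunk.length > position  # no room left for the digits
    result = chunk + result                               # build from the end backwards
    position -= chunk.length
  end
  result
end

puts counterstring(30)
# => *3*5*7*9*12*15*18*21*24*27*30*   (exactly 30 characters)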

You don't have to choose between "Automated Testing" and "Manual Testing". They don't really exist. Put yourself in charge of your own testing and leverage the tools that get you what you want. Remember, though, that the tools you've chosen to help your processes will affect the processes you choose.

Monday, February 9, 2015

The Paucity of Pass/Fail

Pass and Fail seem to be ubiquitous terms in the testing industry. "Did the test pass?" and "How many failures?" seem to be innocent questions. But what, exactly, does pass or fail mean?

What does "pass" mean to you?



Think about that question for a second, before you move on.

Pass/Fail Reports

Checks

Let's say we're talking about a check done by a tool (sometimes called "automation"). We look at the results of that check, and it reports "PASS". What does that mean?

Let's look at this:

  Running Test Suite...

  Test: 'Invalid Login - Wrong Password' => PASS

  1/1 Tests run. 1 passes, 0 failures.

That's the result of a custom computer program I wrote for this automation example.

I think most people would be forgiven for thinking that this means there is a coded test that checks the login page with a wrong password to ensure that it doesn't let the user into the system, and that it passed. Most people would be forgiven for then thinking "well, we've tested logging in with an incorrect password".

If you (yes you) are a tester, then you are not the "most people" I was referring to. You should be highly suspicious of this.

What does pass mean to you now?


Tests

Let's look at this:

Tester | Test                               | Result
Chris  | Logging in with the wrong password | PASS

This is a report given by me, a tester, for the sake of this example. I'll tell you that a human wrote all of this. Look at that PASS. Surely anything that green couldn't be wrong? If you were most people, you could be forgiven for thinking that the "logging in with the wrong password" test has passed. The coverage has been achieved. The testing has been done.

If you (yes, still you) are a tester, then you are not most people, and you should be highly suspicious of this.


So What?

Okay, testers, let's focus on those passes.

Checks


What does pass mean to you now?

To most people "pass" often means "as far as that test is concerned, everything's okay". Sometimes it means "that's been fully tested and is okay". Often it means "we don't have to worry about that thing any more".

But let's take a closer look. That automated test suite I showed you, the one with one test in it? It says that the "Invalid Login - Wrong Password" test passed. But it's not a full test, it's a check. What is it actually doing? I mean what does the "Invalid Login - Wrong Password" test do?

Well we know what it doesn't do. It doesn't investigate problems, evaluate risk, take context into consideration, interpret implicature, or do anything except what it was told to do. Maybe we investigate further and find out from the developer that what it does is enter an invalid login by having the computer enter "test1" into the username field and "password123" (which isn't test1's password) into the password field. Let's say that if, after clicking the "Login" button, the "Invalid Password" text appears, then it reports a pass, otherwise it reports a fail.
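For illustration, here's a sketch of what that check might look like if it were written with the Watir browser-automation gem. The URL, element names and credentials are invented for this example, and a real check could look quite different.

require 'watir'   # browser-automation gem; needs a driver such as chromedriver

browser = Watir::Browser.new :chrome
browser.goto 'http://example.test/login'                  # hypothetical address
browser.text_field(name: 'username').set 'test1'
browser.text_field(name: 'password').set 'password123'    # not test1's password
browser.button(name: 'login').click

# The entire verdict hangs on one string search of the page source,
# whether or not that text is visible to a human.
verdict = browser.html.include?('Invalid Password') ? 'PASS' : 'FAIL'
puts "Test: 'Invalid Login - Wrong Password' => #{verdict}"
browser.close

A dozen literal instructions and one string comparison. Keep that in mind as you read on.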

What does pass mean to you now?

Well, given that explanation, it means that the code returned a particular value (PASS) based on a specific check of a specific fact at a specific time on a specific platform, on this occasion.

Can we still have that happy feeling of "coverage" or "not having to worry" or "the testing being done"? Well, of course not. Here are some other things that would cause the code to return a "PASS" value and invalidate the test:

  • The test data is wrong, or failed to load properly, and doesn't include a test1 user at all
  • The "Invalid Password" text ALWAYS appears for any password
  • The "Invalid Password" text appears for every first attempted login
  • The text "Invalid Password" is hidden on the screen, but the checking system finds it in the DOM and reports it as found
  • The text that appears for a valid password entry has been copy-pasted and is also "Invalid Password"
  • The text "Invalid Password" appears elsewhere on the page after a valid login

These scenarios are isomorphic as far as the check is concerned. That is to say, what the check "sees" appears the same in all of these cases, which means that the check doesn't actually check for a wrong-password login; it only checks for specific text after a specific event on a system with an unknown platform and data.

This means that for all of these cases the check reported a pass and there's a serious problem with the functionality.
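Here's a toy demonstration of that point, with two invented page fragments: the same string search our hypothetical check relies on gives the same happy verdict for a genuine rejection and for a page that actually let the user straight in.

# Both page fragments are made up for illustration.
rejected_login = '<p class="error">Invalid Password</p>'
broken_login   = '<p style="display:none">Invalid Password</p><h1>Welcome, test1!</h1>'

[rejected_login, broken_login].each do |page|
  puts page.include?('Invalid Password') ? 'PASS' : 'FAIL'
end
# => PASS
# => PASS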

We might think that because a computer said "pass" there is no problem. However, there may be problems that the check is not coded to detect, or there may be a problem that the check does not describe because it's badly written, or something unexpected may have happened.

What does pass mean to you now?

Here's the actual real Ruby code I wrote for this automation example:

puts "Running Test Suite..."
puts ""
puts "Test: 'Invalid Login - Wrong Password' => PASS"
puts ""
puts "1/1 Tests run. 1 passes, 0 failures."
puts "\n\n\n\n"

What does pass mean to you now?


Tests

Okay, let's move on to that tester's report. A tester did the testing this time, and it passed! But while testers are intelligent where computers are not, they are also frequently less predictable. What did the tester ACTUALLY do? Well, let's ask them.

Well, let's say that they said that they first checked the test data for a test1 user, and tried a valid login to confirm this. The system displayed "valid password" and gave them access.

Let's say that they said that they logged out then tried a bad password and found that it prevented them from logging in, and gave a useful, expected error message.

Let's say that they said that they tried to repeat the test a few times, and found the same behaviour.

What does pass mean to you now?

Feel better? I kind of feel better. But I think we know enough to question this by now. What scenarios can you think of where the testing didn't find a problem related to this simple test idea? Here are a few that represent false positives or false negatives:
  • The message doesn't display (or some other problem) on certain browsers
  • The message doesn't display if you click the Login button twice
  • When trying to log in with different passwords the system was just presenting the same screen instead of trying to log in
  • The tester is testing an old version of the software that works, but the functionality is broken in the latest version.
Of more interest here are some that represent problems that weren't found:
  • Every time the tester fails to log in it increments a "failed logins" value in the database. It's stored as a byte, so when it reaches 255 it throws a database error.
  • The value mentioned above is responsible for locking out the user after 10 tries, so after 10 tries it ALWAYS displays "Invalid Login" on the screen, even with a valid login.
It's a fun experiment thinking of all the ways the tester didn't find existing problems while reporting a pass.

Guess what I (the tester) actually did to test the login page? That's right, nothing. I made it up. You should never have trusted me.

And what wasn't I testing? What system was I trying to test? How important is it that this works? Is it a small website to sell shoes, or a government site that needs to withstand foreign attacks?

A Passing Interlude

So let's review our meaning of "pass".

It seems to give the impression of confidence, coverage and a lack of problems.

It should give the impression of no found problems - which by itself is of exceedingly little value. Unless you know what happened, and why that matters as far as "no found problems" is concerned, you can't tell the difference between "good coverage and risk assessment" and "I didn't actually do any testing". Remember my test report, and my big green PASS? I didn't do any testing. The "PASS" by itself has no value. A non-tester might try one happy-path test and write PASS on their report having done no real investigation of the system.

If you're interested in a better way to report your testing then I recommend this Michael Bolton post as a jumping off point.

"PASS" is closer to meaning "Whatever we did to whatever we did it to, with however much we understand the system and however much we understand what it's supposed to do and whatever capability we have to detect problems we did not, on this occasion, with this version of this software on this platform find any problems."

I've focused on "pass" here, to show how weak a consideration it can be, and how much complexity it can obscure, but I'm going to leave it as homework to consider your version of "fail". What does "fail" mean to you? Do you use it as a jumping off point for investigations? Why don't you use "pass" as a similar jumping off point? How are your coded checks written - hard to pass or hard to fail? Why?

What Do I Do?

Remember what you're interested in. Don't chase the result of checks, chase down potential problems. Investigate, consider risk, empathise with users and learn the product. Consider problems for humans, not pass/fail on a test report. Use your sense and skill (and ethics) to avoid the pitfalls of dangerous miscommunication.

One of our jobs as testers is to dispel illusions people have about the software. People have illusions about the software because of various levels of fear and confidence about the product. Let's not create false fear and false confidence in our reporting - understand what pass and fail really are and communicate your testing responsibly.

What does pass mean to you now?

Monday, July 28, 2014

It's Just Semantics

Why is it so important to say exactly what we mean? Checking vs testing, test cases aren't artefacts, you can't write down a test, best practice vs good practice in context - isn't it a lot of effort for nothing? Who cares? 

You should. If you care about the state of testing as an intellectual craft then you should care. If you want to do good testing then you should care. If you want to be able to call out snake oil salesmen and con artists who cheapen your craft then you should care. If you want to be taken seriously then you should care. If you want to be treated like a professional person and not a fungible data entry temp then you should care.

Brian needs to make sure that his software is okay. The company doesn't want any problems in it. So they hire someone to look at it to find all the problems. Brian does not understand the infinite phase space of the testing problem. Brian wants to know how it's all going - whether the money the tester is getting is going somewhere useful. He demands a record of what the tester is doing in the form of test cases, because Brian doesn't understand that test cases don't represent testing. He demands pass/fail rates to represent the quality of the software, because Brian doesn't understand that pass/fail does not mean problem/no problem. The test cases pile up, and writing down the testing is becoming expensive, so Brian gets someone to write automated checks to save time and money, because Brian doesn't understand that automated checks aren't the same as executing a test case. So now Brian has a set of speedy, repeatable, expensive automated checks that don't represent some test cases that don't represent software testing, and they can only pretend to fulfil the original mission of finding important problems. I won't make you sit through what happens in 6 months when the maintenance costs cripple the project and tool vendors get involved.

When we separate "checking" and "testing", it's to enforce a demarcation to prevent dangerous confusion and to act as a heuristic trigger in the mind. When we say "checking" to differentiate it from "testing" we bring up the associations with its limitations. It protects us from pitfalls such as treating automated checks as a replacement for testing. If you agree that such pitfalls are important then you can see the value in this separation of meaning. It's a case of pragmatics - understanding the difference in the context where the difference is important. Getting into the "checking vs testing" habit in your thought and language will help you test smarter, and teach others to test smarter. Arguing over the details (e.g. "can humans really check?") is important so we understand the limitations of the heuristics we're teaching ourselves, so that not only do we understand that testing and checking are different in terms of meaning, but we know the weaknesses of the definitions we use.

So, we argue over the meaning of what we say and what we do for a few reasons. It helps us talk pragmatically about our work, it helps to dispel myths that lead to bad practices, it helps us understand what we do better, it helps us challenge bad thinking when we see it, it helps us shape practical testing into something intellectual, fun and engaging, and it helps to promote an intellectual, fun and engaging image of the craft of testing. It also keeps you from being a bored script jockey treated like a fungible asset quickly off-shored to somewhere cheaper.

So why are people resistant to engaging with these arguments?

It's Only Semantics

When we talk about semantics let's first know the difference between meaning and terminology. The words we use contain explicature and implicature, operate within context and have connotation. 

Let's use this phrase: "She has gotten into bed with many men".

First there's the problem of semantic meaning. Maybe I'm talking about bed, the place where many people traditionally go in order to sleep or have sex, and maybe you think I'm talking about an exclusive nightclub called Bed.

Then there's the problem of linguistic pragmatic meaning (the effect of context on meaning). We might disagree on the implication that this woman had sex with these men. Maybe it's an extract from a conversation about a woman working with a bed testing team, where she's consistently the only female, and the resultant effects of male-oriented bed and mattress design.

Then there's the problem of subjective pragmatic meaning. We might agree that she's had sex with men, but disagree on what "many" is, or on what it says about her character. Or we might agree that I'm implying something negative about her character, but you reject that implication because you've known her for years and she's very moral, ethical, charitable, helpful and kind.

So when we communicate we must understand the terms we use (in some way that permits us to transmit information, so that when I say "tennis ball" you picture a tennis ball and not a penguin), and we must appreciate the pragmatic context of the term (so that when I say "tennis ball" you picture a tennis ball and not a tennis-themed party).

When we say "it's only a semantic problem" we're implying that the denotation (the literal meaning of the words we use) is not as important as the connotation (the inference we share from the use of the words).

Firstly, to deal with the denotation of our terminology. How do we know that we know what we mean unless we challenge the use of words that seem to mean something completely different? Why should we teach new people to the craft the wrong words? Why do we need to create a secret language of meaning? Why permit weasel words for con artists to leverage?

Secondly, connotation. The words we use convey meaning by their context and associations. Choosing the wrong words can lead to misunderstandings. Understanding and conveying the meaning IS the important thing, but words and phrases and sentences are more than the sum of their literal meaning, and intent and domain knowledge do not always translate into effective communication. Don't forget that meaning is in the mind of the beholder - an inference based on observation, processed by mental models shaped by culture and experience. It's arguments about the use of words that lead to better examination of what we all believe is the case. For example, some people say that whatever same-sex marriage is it should be called something else (say, "civil union") because the word marriage carries religious or traditional meaning implying that it is between a man and a woman. The legal document issued is still the marriage licence. We interpret the implicature given the context of the use of these words, and that affects the nature of the debate we have about it. We can argue about "marriage" - the traditional religious union, and we can argue about "marriage" - the relationship embodied in national law and the rights it confers. We can also talk about "marriage" - the practical, day-to-day problems and benefits and trials and goings-on of being married.


Things Will Never Change

Things are changing now. The work of Jerry Weinberg, James Bach, Michael Bolton and many others is rife with challenges to the status quo, and now we have a growing CDT community with increasing support from people who want to make the testing industry respected and useful. There's work in India to change the face of the testing industry (Pradeep Soundararajan's name comes to mind). There's STC, MoT, AST and ISST. The idea that a problem won't be resolved is a very poor excuse not to be part of moving towards a solution. Things are always changing. We are at the forefront of a new age of testing, mining the coalface of knowledge for new insight! It's an incredibly exciting time to be in a changing industry. Make sure it changes for the better.


It's The Way It's Always Been

Yes, that's the problem.


I'll Just Deal With Other Challenges

Also known as "someone else will sort that out". Fine, but what about your own understanding? If you don't know WHY there's a testing vs checking problem then you're blinding yourself to the problems behind factory testing. If you don't investigate the testing vs checking problem then you won't know why the difference between the two is important to the nature of any solution to the testing problem, and you're leaving yourself open to be hoodwinked by liars. Understand the value of what you're doing, and make sure that you're not doing something harmful. At the very least keep yourself up to date with these debates and controversies.

There's another point to be made here about moral duty in regard to such debates and controversies. If you don't challenge bad thinking then you are permitting bad thinking to permeate your industry. If we don't stand up to those who want to sell us testing solutions that don't work then they will continue to do so with complete impunity. I think that this moral duty goes hand-in-hand with a passion for the craft of testing, just as challenging racist or sexist remarks goes hand-in-hand with an interest in the betterment of society as we each see it. It's okay to say "I'm not going to get into that", but if you claim to support the betterment of the testing industry and its reputation as an intellectual, skilled craft then we could really use your voice when someone suggests something ridiculous, and I'd love to see more innovative approaches to challenging bad or lazy thinking.

"Bad men need nothing more to compass their ends, than that good men should look on and do nothing." - John Stuart Mill


Sounds Like A Lot Of Effort

Testing is an enormously fun, engaging, intellectual activity, but it requires a lot of effort. Life is the same way. For both of these things what you get out of it depends on what you put in.


Arguments Are Just Making Things Less Clear/Simple

This is a stance I simply do not understand. The whole point of debate is to bash out the ideas and let reason run free. Clear definitions give us a starting point for debate, and debate gives our ideas rigour. I fail to see how good argument doesn't make things better, in any field.

As for simplicity, KISS is a heuristic that fails when it comes to the progress of human knowledge. We are concerned here with the forefront of knowledge and understanding, not with implementing a process. Why are we so eager to keep things simple at the expense of being correct? I've talked to many testers and I've rarely thought that they couldn't handle the complexity of testing, any more than they couldn't handle the complexity of life. I don't know who we're protecting with the idea that we must keep things simple, especially when the cost is in knowledge, understanding and the progression of our industry.


P.S.
I've been reluctant to release this post. It represents my current thinking (I think!), but I think it's uninformed, which may mean that it's not even-handed. Linguistics and semantics are fairly new to me. I'm reading "Tacit and Explicit Knowledge" at the moment, so I may come back and refine my ideas. Please still argue with me if you think I'm wrong; I have a lot to learn here.