
6 Testing Mistakes to Avoid Like the Plague


“It’s okay to make mistakes,” they’ll tell you. “As long as you learn from your mistakes.” Not the worst advice you could get, but a wiser person would tell you that it’s far better to learn from someone else’s mistakes. Wouldn’t you agree?

If so, you’re in luck, because organizations all over the world continue to make the same software testing mistakes over and over again – all so that you don’t have to. While there are far too many to list here, we wanted to share a few of the more common gaffes. Here they are, in no particular order:

Mistake #1: Testing too late

How do you know if you’ve waited too long to test your apps? One sign would be if users are publicly complaining via social media about defects they found on their own. When it comes to quality, the sooner you can involve the test team, the better off you are going to be. As you’ll see throughout this post, many of the great testing blunders occur for this very reason. Teams that see testing as the “last line of defense” not only misunderstand the role of QA, but they put themselves and their business in dangerous territory. Brian Marick explains the advantages of testing early:

Tests designed before coding begins can improve quality. They inform the developer of the kinds of tests that will be run, including the special cases that will be checked. The developer can use that information while thinking about the design, during design inspections, and in his own developer testing. Early test design can do more than prevent coding bugs.
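To make that concrete, here’s a rough sketch (in Python with pytest, purely for illustration) of what “tests before code” can look like. The pricing module, the apply_discount function, and its rules are invented for the example; the point is that the developer sees the special cases before writing a line of production code.

    # test_pricing.py -- written before the implementation exists
    # (the pricing module and apply_discount function are hypothetical)
    import pytest
    from pricing import apply_discount  # to be written after these tests

    def test_standard_discount():
        # 10% off a $100 order
        assert apply_discount(100.00, rate=0.10) == pytest.approx(90.00)

    def test_discount_never_goes_negative():
        # special case agreed on up front: an oversized discount floors at zero
        assert apply_discount(40.00, rate=1.50) == 0.00

    def test_rejects_negative_price():
        # another special case the developer now knows will be checked
        with pytest.raises(ValueError):
            apply_discount(-5.00, rate=0.10)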

Mistake #2: Testing with amateurs

In an ideal world, every testing project would have two types of experts: a testing expert and a domain expert. The reasons for having the testing expert on hand should be obvious (to readers of this blog, anyway). Great testers understand, regardless of the product, where flaws and vulnerabilities are likely to be found. They are, as James Bach has said, professional skeptics. Domain experts, on the other hand, might not know much about testing per se, but they will immediately pick up on inconsistencies and shortcomings in the product from a feature/functionality perspective. Unfortunately, many companies make the mistake – the BIG mistake – of having neither a testing expert nor a domain expert on their team. Don’t be one of them.

Mistake #3: Testing without a scope

“Just look for bugs and tell me what you find.” Famous last words! While this type of “improv” testing can often yield quality results, it should never be standard operating procedure. Unfortunately, it is standard operating procedure inside many companies. A note to testers: if this is the direction you’re asked to go in, here is some good advice from Jon Bach:

There are *always* requirements, but they aren’t always written. They are both implicit and explicit.  Some are hidden, some may be obvious.  Some take time to emerge, others hit you right away.  You find them and they find you. You’ll know you found one when you sense a user has an unmet need.

Techniques for testing without requirements:

  • Ask stupid questions
  • Read the FAQ
  • Look at competing products
  • Read out-of-date documents
  • Invite a developer to the whiteboard, then diagram your assumptions of the architecture or some states

Testing is an active search, not just for bugs, but for requirements.  Approach your requirements-gathering the same way you approach testing.  You have to go find the bugs – they often won’t reveal themselves to you on their own.

Mistake #4: Testing “one and done”

A lot of time and energy goes into a software launch. So it’s only natural to relax a bit when the application is released to users. It may seem as though all is said and done from a testing perspective, but it ain’t. As we’ve seen time and again, many of the more serious bugs and issues crop up well after a product launches, often through no fault of the engineering team. Maybe the application isn’t syncing with a recently updated third-party app. Maybe a browser setting changed. The list is endless. Whatever the circumstance, companies would do well to avoid the mistake of failing to run regression tests.
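One way to guard against this is to pin every serious post-launch defect with an automated regression test that runs on every build. Here’s a minimal sketch (Python/pytest again; the session_for helper and the scenario it covers are made up for the example):

    # test_regression_auth.py -- pins a previously fixed post-launch defect
    # (the session_for helper and the scenario are hypothetical)
    from myapp.auth import session_for

    def test_session_survives_external_token_refresh():
        # Hypothetical bug: an earlier release dropped user sessions whenever a
        # third-party provider rotated its tokens. Re-running this check on every
        # build keeps the bug from quietly coming back after launch.
        session = session_for("demo-user")
        session.refresh_external_token()
        assert session.is_active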

Mistake #5: Testing in a controlled environment

These days, users are likely to consume your application under all sorts of conditions: an endless combination of devices, operating systems, browsers, carriers and locations. Why, then, do so many companies only test their applications within the confines of a highly controlled test lab? Why are they not moving a portion of their testing into the real world? There was a time when companies could get away with 100% on-site testing – and many did. Unfortunately, many continue this practice (old habits die hard, after all), resulting in production bugs that could easily have been found by testers in the wild.
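Even a small step helps: parametrize your smoke tests over a matrix of real-world configurations instead of a single lab setup. A rough sketch follows (the launch_app helper and the environment list are invented; in practice they might map to a device cloud or a remote browser grid):

    # test_environments.py -- run one smoke test across several client setups
    # (launch_app and the ENVIRONMENTS list are hypothetical placeholders)
    import pytest

    ENVIRONMENTS = [
        {"browser": "chrome",  "os": "windows", "network": "wifi"},
        {"browser": "safari",  "os": "ios",     "network": "3g"},
        {"browser": "firefox", "os": "android", "network": "lte"},
    ]

    @pytest.mark.parametrize("env", ENVIRONMENTS,
                             ids=lambda e: f"{e['browser']}-{e['os']}")
    def test_checkout_page_loads(env):
        app = launch_app(env)  # hypothetical helper that provisions the environment
        assert app.open("/checkout").status_code == 200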

Mistake #6: Testing too fast/slow

The Agile testers can’t seem to catch their breath, while the waterfall testers look like they’re about to die of boredom. Our last testing mistake has to do with the pace of testing – specifically, why many organizations can’t seem to keep it consistent. Much of the problem (again) boils down to the misperception of QA. When testing is seen as the last step in a long process (usually with a looming deadline) it’s natural for testers to feel rushed. When the role of QA is properly defined and agreed upon, testing will never be rushed and it will never slow to a crawl. Instead, it will become an integral part of the organization, not merely a department being waited on.

Of course these mistakes are all matters of opinion, and so we’d love to hear yours. What are the biggest testing mistakes in your view? Please share your thoughts in the comments section below.

Note: This post was inspired by the timeless Classic Testing Mistakes by Brian Marick. Give it a read if you haven’t already.

