Potentially contentious statement: Software Testing Management is wrong.
That’s the idea presented in a recent post on the Lessons Learned by a Software Tester blog. The author, Paul, argues that a test management process does not, in and of itself, lead to high-quality software. He writes:
Test Management systems manage and measure some of the testing activities performed on a project. Completing your testing activities tells me nothing about the overall “Quality” of the product. By definition, it is an incomplete part of the picture.
By putting your faith in Test Management plans or systems you are effectively saying “I don’t know how to measure what you want (i.e. Quality), so I will measure what I can do (i.e. Test).”
Does this mean it’s pointless to track the testing you’ve done? NO! I am not saying that. If you do something that you believe contributes to the value of the project, then please track what you are doing so that others can see what you have done. Preferably, make your progress visible in some way.
What we need to focus on is what the customer needs. Ask yourself: What problem are they trying to solve? What are we trying to deliver that is of value to them? …
So, if you somehow create a super Test Management Plan or system that tracks all of these activities for a given project, then would you have a Quality product? That would be really cool, by the way, but no, not necessarily. And don’t fool yourself into thinking that it will.
In short, don’t get so bogged down in test management (the process) that you lose sight of actual quality. Checking boxes on your testing to-do list isn’t the same as working to make sure the software is genuinely high quality – (mostly) bug free, efficient, effective and achieving its goal.
Periodically, people remember that quality is the desired result of quality assurance. In between those moments, though, they get bogged down in once-good practices and processes. Waterfall development made sense at one point in time – when the SDLC ran on a much more drawn-out time frame. But Waterfall development became inefficient and too process-laden. The answer was Agile development, which allowed teams to move faster and be more flexible. As Alister Scott wrote in response to the original post, Agile largely did away with the clunkiness of test management and streamlined the process – tying testing more closely to the desired end product and the process of getting there.
Now, each agile team is responsible for its own quality, the tester advocates quality and encourages activities that build quality in such as accurate acceptance criteria, unit testing, automated acceptance testing, story testing and exploratory testing. These activities aren’t managed in a test management tool, but against each user story in a lightweight story management tool (such as Trello or Mingle). The tester is responsible for managing his/her own testing. …
Step-by-step test cases (such as those in Quality Center) are no longer needed, as each user story has acceptance criteria, and each team writes automated acceptance tests for the functionality they develop, which act as both automated regression tests and living documentation.
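To make the idea of acceptance tests doubling as living documentation concrete, here is a minimal illustrative sketch. The user story, the discount-code behavior and all function names below are invented for this example – they are not from Alister’s post – but they show how each test maps directly onto a story’s acceptance criteria:

```python
# Hypothetical user story: "As a shopper, I can apply a discount code at checkout."
# Acceptance criteria: a recognized code reduces the total by its percentage;
# an unknown code leaves the total unchanged.

DISCOUNT_CODES = {"SAVE10": 0.10}  # illustrative data only


def apply_discount(total, code):
    """Return the total after applying a discount code, if it is recognized."""
    return round(total * (1 - DISCOUNT_CODES.get(code, 0.0)), 2)


# Acceptance tests: each test name and assertion restates one acceptance
# criterion, so the test file doubles as living documentation of the story
# and as an automated regression suite for the behavior.
def test_valid_code_reduces_total():
    assert apply_discount(100.00, "SAVE10") == 90.00


def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100.00, "BOGUS") == 100.00
```

Run under any test runner (e.g. pytest), these tests describe the story’s expected behavior in a form that stays in sync with the code – exactly the “living documentation” role the quote describes.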
But even Agile is beginning to present problems. Teams are getting too caught up in the notion that Agile development doesn’t have long meetings, demanding master plans or much (if any) documentation. By holding onto these concepts so tightly, teams forget the rationale behind the methodology – it’s supposed to be about making better software faster. Do short meetings, loose planning and no documentation automatically result in better quality? No. A dedication to understanding what users want and a commitment to quality assurance (not just testing) are the path to good-quality software. Paul echoes this sentiment in a call to testers:
Testers: if you want to do a great job, focus on the testing activities that explore the systems from the perspectives of the people who matter. Never hide or bury your work (i.e. in documents, spreadsheets, test management systems, etc.). Make it visible – use dashboards, mind maps or other visual mediums because they assist with team collaboration and understanding. Keep detailed records archived somewhere if you need them, but don’t worry about “managing” to those fiddly bits as an indicator of “quality” – because they’re not.
No matter what development methodology you use, part of working toward a quality product is taking testing out of its isolated box. Treating testing as an independent entity, or leaving it until the last minute, leaves you few options beyond a checklist or traditional test management practices and constraints. Instead, work testing in throughout the planning, development and deployment processes.
Like all other members of your staff, your testers are specialized in what they do, and part of what they do is make sure software works and will meet the needs and standards of your consumers. Take advantage of this expertise by allowing testers to work with developers, planners or even stakeholders. (James Bach recently came up with an interesting way to incorporate testers into the development process that ends up benefiting both parties.) The more testers understand what the software is intended to do and what the company wants out of its end product, the more they’ll be able to help the organization achieve a quality product that will please everyone.