Testing is an area of software engineering that has not historically drawn the widespread respect accorded to "normal" software engineers. QA and test automation have historically been seen as less-skilled, more junior roles, perhaps even drudge work that could be replaced with adequately sophisticated automation.
Industry standard: a QA Manager generally gets paid the same as a Senior Software Engineer.
It says something that it’s considered a perk to be hired into a position where, as a software engineer, one can work on testing tools but not be publicly identified as “in test.” Believe me, in my travels as a consultant this is a theme I have heard often when engineers explain why they do or don’t choose to work with a given organization.
It is demonstrably impossible to build and run a reliable
system without engineers who excel at “maintenance,” also known as
“keeping the site up.” Testing (aka debugging) makes up the better
part of maintenance.
However, for historical reasons, the software industry suffers from a persistent condition in which QA does not get the respect it deserves in most organizations. As a corollary, the organizations that do value and reward testing professionals wind up recruiting all the (already rare) competent people in the field.
And this is the state of the (Web and mobile) software industry right
now: everyone who is good at being an engineer “in test” already
works for a company that is willing to pay above the industry mean
for testing expertise.
Escaping the stigma associated with being a tester
Once hired, these “engineers who test” have no incentive to publish or to be publicly seen as a voice in the testing community; remember, the industry standard is to pay testers less than other kinds of engineers.
The result is a stagnating industry where almost no one with deep
technical testing knowledge is motivated to share it.
So… does your organization ignore industry-standard salaries and pay testers as if their expertise were a rare and valuable software engineering specialty?
Recently I was sent a job posting that embodied what I will call the Pernicious Myth of the QA Automation Engineer. Here’s just one line:
Own automated testing capabilities across Web, mobile and production processes.
An engineer cannot, by definition, take responsibility (that is what “ownership” means, right?) for features that cut across multiple groups. In all but the smallest companies, Web and mobile would be managed by different teams. With different MANAGERS.
Engineers don’t take responsibility for the behavior of MULTIPLE managers across different teams.
This is a Director’s job.
Managing software capabilities across multiple teams is the job of a mid-level engineering manager such as a director or, in larger organizations, a VP.
Why not an engineer? Because decades of computer engineering experience show that it never works.
Management (with all its wondrous hierarchical levels) is responsible for behavior of people within and across teams. Engineers and designers are responsible for the behavior and organization of the product. Not the people. People are a management problem. Especially at scale. Organizations that forget this fail.
Takeaway: do not sign on for a director’s job at an engineer’s salary
I’m not saying no one should take responsibility for the “tests and testability” of an application or service.
What I am saying is that someone should be explicitly responsible for testing across the whole organization, and that person should be at the director or executive level. Never at the engineer or team lead level. Ever.
The problem with asking where testing/QA “fits in” to DevOps is that testing/QA is part of dev.
It’s a historical mistake that test/QA was marginalized to the point that it’s now seen as a separate discipline.
In the tech world we worry a lot about scaling. Whenever someone comes up with a software innovation, one of the first questions they are likely to hear is: nice concept, but will it scale?
But what is it that does not scale?
In growing systems, technology and process may not scale. In the life cycle of a mature, legacy, or successful system, it is communication that does not scale. Communication between individuals and between teams is universally found to be the bottleneck on execution in very large organizations.
Therefore the fact that continuous integration (CI) can scale engineering communication almost indefinitely is critically important. It means that CI is a tool for dealing with diseconomies of scale, at least as it pertains to an engineering organization.
Any "IT crisis" can then be re-understood as hitting the steep rightward end of the diseconomies-of-scale curve (shown below), with costs spiking at the beginning and end of the life of the organization. The spike at the end is due to communication costs, and again: CI mitigates communication costs, at least for an engineering team.
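One standard way to see why communication does not scale (an illustration of the general point, not from the original text): the number of pairwise communication channels among n people grows quadratically while headcount grows only linearly.

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n engineers: n choose 2."""
    return n * (n - 1) // 2

# Headcount grows 20x; channels grow nearly 500x.
for n in (5, 10, 50, 100):
    print(f"{n:>3} engineers -> {channels(n):>5} channels")
```

This is the quadratic cost that practices like CI push back against, by replacing many ad-hoc person-to-person conversations with one shared, automated integration signal.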
More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded — indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.
The following is excerpted from Software Testing Techniques, 2d. Ed. by Boris Beizer.
First Law: The Pesticide Paradox
Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.
That’s not too bad, you say, because at least the software gets better and better. Not quite!
Second Law: The Complexity Barrier
Software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity.
Corollary to the First Law: Test suites wear out.
Yesterday’s elegant, revealing, effective test suite will wear out because programmers and designers, given feedback on their bugs, do modify their programming habits and style in an attempt to reduce the incidence of bugs they know about. Furthermore, the better the feedback, the better the QA, the more responsive the programmers are, the faster those suites wear out. Yes, the software is getting better, but that only allows you to approach closer to, or to leap over, the previous complexity barrier. True, bug statistics tell you nothing about the coming release, only the bugs of the previous release — but that’s better than basing your test technique strategy on general industry statistics or myths. If you don’t gather bug statistics, organized into some rational taxonomy, you don’t know how effective your testing has been, and worse, you don’t know how worn out your test suite is. The consequences of that ignorance are a brutal shock. How many horror stories do you want to hear about the sophisticated outfit that tested long, hard, and diligently — sent release 3.4 to the field, confident that it was the best tested product they had ever shipped — only to have it bomb more miserably than any prior release?
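Beizer's point about measuring how worn out a suite is can be sketched with a toy metric. The bug categories and counts below are invented purely for illustration; the idea is simply that a suite tuned against last release's taxonomy catches less and less of what the next release actually ships.

```python
from collections import Counter

# Hypothetical bug counts by category for two releases (invented numbers).
release_33_bugs = Counter({"off-by-one": 12, "null-deref": 8})
release_34_bugs = Counter({"race": 9, "config": 7, "off-by-one": 1})

def suite_wear(previous: Counter, current: Counter) -> float:
    """Fraction of the current release's bugs that fall into categories
    seen last release -- i.e. the categories the test suite was tuned
    against. A low fraction means most new bugs are of kinds the suite
    never targeted: the Pesticide Paradox in action."""
    covered = sum(n for cat, n in current.items() if cat in previous)
    return covered / sum(current.values())

wear = suite_wear(release_33_bugs, release_34_bugs)
print(f"{wear:.0%} of release 3.4 bugs fall in categories the suite targets")
```

Without some taxonomy like this, however crude, there is no way to notice that the suite has quietly stopped finding the bugs that matter.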
Gresham’s Law states that bad money drives out good: counterfeit currency will tend to be exchanged even by otherwise honest actors. What does that have to do with software engineering? In the programming world, code is the currency of exchange. So Gresham’s Law in the programming world is: bad code will tend to get written by otherwise intelligent engineers.
How does Gresham’s law apply to test coverage?
Consider the case where engineers are asked by management to contribute unit tests such that code coverage remains at or above a numerical target such as 80%. There is by definition no direct business benefit to providing these tests, since tests are never seen by the customers. Therefore, if it is possible to fake test contributions by gaming test-coverage metrics, engineers will tend to regard this subversion as the only ethically viable choice: time not spent on test coverage is time spent increasing business ROI.
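A minimal sketch of what gaming coverage looks like, using a hypothetical function. Coverage tools count executed lines, not verified behavior, so a test with no assertions raises the coverage number while catching nothing.

```python
def apply_discount(price: float, pct: float) -> float:
    """Hypothetical production code under a coverage mandate."""
    if pct < 0 or pct > 100:
        raise ValueError("pct out of range")
    return price * (1 - pct / 100)

def test_apply_discount_fake():
    # Every line of apply_discount executes, so line coverage hits 100% --
    # but with no assertions, even a badly broken implementation "passes."
    apply_discount(100.0, 10.0)
    try:
        apply_discount(100.0, 150.0)
    except ValueError:
        pass

def test_apply_discount_real():
    # A real test pins down behavior, not just execution.
    assert apply_discount(100.0, 10.0) == 90.0

test_apply_discount_fake()
test_apply_discount_real()
```

Both tests produce identical coverage reports; only the second one can ever fail. A coverage target rewards writing the first kind.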