Software Engineer in Test
Update: I'm actually a Software Engineer now. But everything else holds up.
I'm a software engineer, working mostly to support testing other software. I don't work at Google, but if I did, my job title would most likely be Software Engineer in Test (SET). At Microsoft or Amazon, it would be SDET (the D is for Development). The whole idea of an SET is that we are programmers who work for QA. What we develop are tools, test systems, build systems, and so on, to be used by other QA engineers who will create end-to-end functional tests.
The Problem
The problem with all of this stems from the second-to-last sentence above.
The whole idea of an SET is that we are programmers who work for QA.
We have all the same skills and abilities as the regular software developers on the project. We work with the same systems and the same technologies. But we're outsiders. We have different managers, different metrics, and different access permissions. Whether it's enforced by technology, policy, or just common convention, we're not supposed to be changing the product; we're just testing it. At this point we're getting to the heart of the problem: We're just testing it.
It's become commonly accepted that software development needs to go faster. It's also common to look at the process of software development and conclude that testing is both slow and repetitive. And so people look at testing and think "let's automate it". On its own, there's nothing wrong with that. In fact, it's mostly a good thing. But it stops being a good thing when the QA department has to create all of this automation by itself. That's not to say QA can't do it, but rather that they shouldn't (and they shouldn't have to).
Testing is part of developing software. Ensuring the software is testable is part of functioning as a mature development group. If nothing else, it's required for unit tests. But really it should be happening all the way up the conceptual hierarchy: component tests, integration tests, acceptance tests, and functional tests. At some point there's often a dedicated QA group that takes over designing and performing these tests. That's fine; you want to have someone else checking your work. But making the software testable should still be done by the developers. There are two reasons for this that are worth talking about. One is that developers are the ones who can. Two is that if the developers can't test what they make, how can they have any idea whether they made what they intended? When QA goes to extreme lengths to work around that gap, it's just enabling the lazy behavior of the regular developers on the project.
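To make that concrete, here's a minimal sketch of what developer-built testability can look like. The names (OrderService, the payment gateway) are purely illustrative, not from any particular project; the point is that the dependency is injected rather than constructed internally, so both the developer's unit test and any later QA tooling can substitute a test double.

```python
# Illustrative sketch: a service whose dependency is injected,
# making it testable without touching a real payment gateway.

class OrderService:
    def __init__(self, gateway):
        # The gateway is passed in rather than created here.
        # That seam is what makes this class testable.
        self.gateway = gateway

    def place_order(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)


class FakeGateway:
    """Test double standing in for the real payment gateway."""
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return "ok"


def test_place_order_charges_gateway():
    gateway = FakeGateway()
    service = OrderService(gateway)
    assert service.place_order(25) == "ok"
    assert gateway.charges == [25]
```

The seam the developer adds to verify their own work is the same seam an outside tester can hook into later; that's the whole trick.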
The Point
So far, this has all been philosophical: a question of who should be responsible for a given task. There are also practical problems. QA, by design, is an entity outside of development. And so any automation QA builds will necessarily have to operate from outside the system under test. The practical problem is that this outside-in automation strategy won't work. Or at least it won't scale. The tests will have to rely on too many components and make too many assumptions. When tests fail, too much effort is required to diagnose the reason. When a reason is eventually identified, there will then be too much effort required to isolate the cause. The only way to avoid that is to build testability in from the inside. That way components can be isolated, dependencies can be mocked, and tests can fail closer to the source of errors. Accomplishing this requires the developers of a project to build these testing capabilities along with the system itself. And the best part is that mostly it should be easy. They should already be building these things, and the only change is to expose them for use by outside testers. But of course, that gets back into what people should be doing, and that's a different topic.
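As a rough illustration of testing from the inside rather than outside-in, here's a sketch using Python's standard-library unittest.mock. ReportBuilder and the inventory client are hypothetical stand-ins, not anything from a real product; the point is that with the downstream dependency mocked out, a failure implicates one component instead of the whole stack.

```python
# Sketch: isolating one component by mocking its dependency,
# so a failing test points at this component, not the network,
# the database, or the UI.
from unittest.mock import Mock


class ReportBuilder:
    def __init__(self, inventory):
        self.inventory = inventory

    def low_stock_report(self, threshold):
        items = self.inventory.list_items()
        return [i["sku"] for i in items if i["count"] < threshold]


def test_low_stock_report_filters_by_threshold():
    # Mock the inventory service instead of standing up a real one.
    inventory = Mock()
    inventory.list_items.return_value = [
        {"sku": "A", "count": 2},
        {"sku": "B", "count": 50},
    ]
    builder = ReportBuilder(inventory)
    assert builder.low_stock_report(threshold=10) == ["A"]
    inventory.list_items.assert_called_once()
```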