‘The mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated.’
This is pretty much the approach we should have at FO. And luckily, we do. Unit and functionality tests are automated when appropriate; the rest is tested manually, using exploratory testing.
The exploratory testing we do has at least one flaw: we're missing a general way to record and replay the tests (Widget has its own record/play tool, which is OK). There are tools for that, but we currently lack a practical implementation of them. A replayed session could look something like the sketch below.
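To make this concrete, here is a minimal sketch of what a replayed exploratory session could look like. I'm assuming Playwright purely for illustration (we haven't picked a tool); the URL, selectors and credentials are all made up. This is roughly the kind of script a record/replay tool generates from a captured session:

```python
# A hypothetical replay script, the kind a record/replay tool would generate.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def replay_session():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Recorded steps: navigate, log in, open a view, check the title.
        # The app URL and selectors below are placeholders, not real FO ones.
        page.goto("http://localhost:8080/login")
        page.fill("#username", "tester")
        page.fill("#password", "secret")
        page.click("button[type=submit]")
        page.click("text=Reports")
        assert page.title() == "Reports"  # recorded expectation
        browser.close()

if __name__ == "__main__":
    replay_session()
```

The value is that an interesting path found during exploration becomes a repeatable artifact instead of a one-off session.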
This also reminds me that test cases need to be reviewed by the developer and by the product owner (the person responsible for requirements) before they are automated. Here, a TDD approach would help a lot, since the test exists as a reviewable artifact before the implementation does (see the sketch below). It would not hurt to review them afterwards, either.
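As a minimal TDD-style illustration in pytest: the tests below would be written, and reviewed by the developer and product owner, before the implementation. The discount_price function is a made-up example, not an actual FO feature:

```python
# Tests written first, reviewed, then the implementation is filled in.
import pytest

def discount_price(price: float, percent: float) -> float:
    """Implementation written only after the tests below were agreed on."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_price():
    # Agreed behavior: 25% off 100.0 is 75.0.
    assert discount_price(100.0, 25) == 75.0

def test_discount_rejects_bad_percent():
    # Agreed behavior: out-of-range percentages are an error.
    with pytest.raises(ValueError):
        discount_price(100.0, 150)
```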
The problem I see in our current testing culture is that people are only interested in getting the delivery done, and by delivery, it is mainly the development effort that counts. The tests are run late, if at all; usually too late, sometimes after the release (as with what I am testing right now). We are also missing a review or walkthrough of the test results. There seems to be little interest in per-release test results: test reports are not written at the moment, and there is mostly no one to read them either. Failing test cases are spotted so late that the pressure to push the release to production is big enough that the results are, if not ignored, at least overlooked and set aside for a later look. That later look rarely happens.
Now, I could go on complaining, nagging and mumbling about this. But I do think there is a better solution.
First, we need to know that test results exist. Well, actually, the results and reports need to be written down in the first place. Second, we need to walk through them. And third, we need to do this at the right time.
Now, I am both fortunate and unfortunate to (currently) be the only test engineer at the company. That said, there is plenty of testing going on, done by the developers, both automated and manual. Being the single testing resource means I am quite busy all the time, but it also means I am not tied to the projects; I cannot be, as I need to move flexibly between them. This gives me a slight freedom when it comes to releases: I do have time to focus on the results and reports. What I could do is start actively showing the results and storing the reports, all the time, for the active projects we have. All I need is a platform for that, and I think the Monday stand-up would be a great place to start. A sketch of how a test report could be boiled down to a stand-up summary follows below.
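As a minimal sketch of that idea, assuming our runners can emit a JUnit-style XML report (most can, e.g. pytest --junitxml=report.xml), here is how a report could be reduced to a one-line summary for the stand-up. The report path is an assumption:

```python
# Turn a JUnit-style XML test report into a short stand-up summary line.
import xml.etree.ElementTree as ET

def summarize(report_path: str) -> str:
    root = ET.parse(report_path).getroot()
    # Reports either wrap suites in <testsuites> or use one <testsuite> root.
    suites = root.iter("testsuite") if root.tag == "testsuites" else [root]
    total = failed = errors = skipped = 0
    for suite in suites:
        total += int(suite.get("tests", 0))
        failed += int(suite.get("failures", 0))
        errors += int(suite.get("errors", 0))
        skipped += int(suite.get("skipped", 0))
    passed = total - failed - errors - skipped
    return (f"Tests: {total} | passed: {passed} | failed: {failed} | "
            f"errors: {errors} | skipped: {skipped}")

if __name__ == "__main__":
    print(summarize("report.xml"))  # path is an assumption
```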
In this case the exact reports or projects won't actually matter. What matters is showing what we've found. And after a while, I'm pretty sure, people will gain more knowledge, gather interest and get a feeling that quality exists, and we will eventually get more focus on testing.
‘We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors, and files a bug automatically’
Now here, I could argue about the e-mail part. Mail has become outdated when it comes to bug reporting and the like; sending mail only creates clutter. The way I see it, we should automatically create bug reports/issues/tickets and assign them to the person responsible for the failing tests. That should be all. The test responsible should then prioritize the issue, check whether it fails due to a defect in the test system or the test case itself, fix the issue if appropriate, or re-assign it to the actual developer. All of this could be done with an issue-tracking tool (like JIRA, Redmine etc.) without enormous spamming. A sketch of what the automatic filing could look like is below.
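For illustration, here is a minimal sketch of auto-filing an issue when a test fails, using JIRA's REST API (POST /rest/api/2/issue) via the requests library. The server URL, project key, credentials and assignee are all placeholder assumptions, not our real setup:

```python
# Auto-file a JIRA issue for a failing test instead of sending e-mail.
import requests

JIRA_URL = "https://jira.example.com"  # placeholder server
AUTH = ("ci-bot", "api-token")         # placeholder credentials

def file_test_failure(test_name: str, log_excerpt: str, assignee: str) -> str:
    """Create a JIRA issue for a failing test and return its key."""
    payload = {
        "fields": {
            "project": {"key": "FO"},        # placeholder project key
            "issuetype": {"name": "Bug"},
            "summary": f"Automated test failed: {test_name}",
            "description": f"Failure log:\n{log_excerpt}",
            "assignee": {"name": assignee},  # the test responsible
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]

# A CI hook could call this instead of mailing anyone, e.g.:
# file_test_failure("test_login", "AssertionError: ...", "test.responsible")
```

The key design point is that the failure lands in the tracker, assigned and prioritizable, rather than in an inbox.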