D3 – 30 Days Of Testing – Listen to a testing podcast

Yesterday’s challenge was to listen to a testing podcast. So I went to the iTunes Store, which is the de facto standard when it comes to podcasts, at least in my book. In fact, it’s the sole reason I use an iPhone, to be honest. Otherwise I don’t care (not any more) which manufacturer or OS my phone has. After testing Symbian (Nokia & SonyEricsson used that OS in their phones) and using Android & iOS, I’m getting more and more aware that most of them have their good and bad downsides (yes, that’s what I meant).

I do have a, sometimes irritating, tendency to get lost while babbling. And that is precisely what happened up there. Now I’ll gather myself and get back on track.

So, I found the podcast. It is Joe Colantonio’s Test Talks, a series of podcasts on testing and test automation. Since the subject is somewhat intriguing, I ended up subscribing to it, too. The episodes seem to be long enough to listen to on the way to work, whether I ride my bike or take the subway.

I ended up listening to the episode where Rosie Sherry was the guest. And I was positively surprised. Not that I was expecting something lousy, but the thing is that the episode made me think. Which is always a good thing. Regardless of what I said yesterday.

She was talking about the importance of testing instead of testing tools, and I found myself agreeing. Even though this blog has been about the tools, I realised that my main focus has been saying that regardless of the tools, the skill of testing is what matters. Or at least that is what I should have been saying, for that is what I actually mean about the blunt instruments. They are just tools we use, tools that help us provide knowledge about the behaviour (sometimes even a vision of the quality) of the software we’re testing. At least I think that is where I should be pushing my writing.

What I mean here is that subconsciously I have thought the same way (now I hope I got it right, too), without being completely aware of it. It’s not the tools, it’s the way you use them.

Funny thing is that a few days ago I got a question on Twitter from a former colleague about which SW test automation books he should be reading. And all I could say was ‘How Google Tests Software’, ‘ATDD by Example’ & ‘Lessons Learned in Software Testing’. To be honest, I don’t know much more. I’ve been using the tools, not reading about them. And that has always been my approach in life in general: experiment, fix on the run, read when you need to. I don’t read the manuals, not until I’m stuck. Sometimes I read them too late, maybe too shallowly, and skip some important stuff. I just don’t seem to get myself working the other way. I am an experimental tester, it seems.

Besides the tools, Joe also brought up the communities Rosie has founded: The Ministry of Testing, Testing Dojo & Software Testing Club. And immediately (after getting to work) I found myself browsing for more information about the Dojo, the TestBash events and so on. I was hooked; there’s a community of testers! It seems I’ve been living in my own man cave for a few years now. Let’s see what the communities and the future bring. At least now I feel excited 😀

So, thanks to Rosie & Joe for showing me one more door to the world of testing!


Testing in popular culture

This morning, while browsing for a podcast for this 30 Days of Testing challenge, I found myself thinking. Yes I know, it’s a harsh condition and I try to avoid it regularly, but one can’t help oneself. Not when you’re born this way, you know, with brains and all.

Anyhow, I started thinking: who was the test engineer at Starfleet Academy who tested the Kobayashi Maru? Clearly, it being a computer scenario, it should have been tested. How otherwise could James T. Kirk have beaten it, even by cheating? And furthermore, on the ship there are mainly just operators available. Someone must have done a hell of a lot of coding to get the USS Enterprise into orbit in the first place. And, as we all know, where there’s a developer there should be a tester.

Which brings me to this: is there testing in popular culture? In the ‘Saving Matt Damon’ movie, The Martian (which by the way is a great book, and not as good a movie; Ridley Scott blew it), NASA skips the tests, apparently based on a risk assessment, in order to send the supplies to Mars a bit earlier; even the length of the tests is briefly discussed. That’s most likely because the author of the book is, if I recall correctly, a SW engineer.

But are there more QA/test references in popular culture? If not, why not? We’re working in the field of SW (well, HW needs to be tested too, but that’s another story) and the field gets bigger all the time. Nowadays hackers and developers can actually tell people what their profession is, and people in general have some sort of clue what they do for a living.

I, on the other hand, if I tell my profession to my relatives, am faced with a puzzled smile and a slightly confused glare. Which I can buy; nobody seems to know what this testing is and what it is all about. Getting more testing stories into popular culture would actually help a bit.

It would be interesting to know if I’m wrong here. All I know is the narrow slice of popular culture I’ve been following. I might be completely wrong, which is always all so human.

By the way, did you know that the Chernobyl disaster was caused by running tests? Some experiments, as it seems. Sounds slightly like exploratory testing to me. It’s always a refreshing thought when someone bashes around nuclear plant systems.

Just one more: who was the guy who tested the security of the Death Star’s particle exhaust vents? And who approved the solution?

Note: And the answer comes from the depths of Twitter:

Thanks, @Marcel_Gehlen

PS. I did find the podcast, too.

Notes & reflections on ‘How Google Tests Software’

‘The mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated.’
 
This is pretty much the approach we should have at FO. And luckily, we do. Unit and functionality tests are automated when appropriate; the rest is tested manually, using exploratory testing.
The exploratory testing we do has at least one flaw: we’re missing a general way to record and replay the tests (Widget has its own record/play tool, which is OK). There are tools for that, but currently we lack a practical implementation of them.
This also reminds me that the test cases need to be reviewed by the developer and by the product owner (the one responsible for the requirement) before they are automated. Here, a TDD approach would help a lot. It would not hurt to review them afterwards either.
In our current testing culture, the problem I see is that people are only interested in the delivery being done. And by delivery, it is mainly the development effort that is counted. The tests are done, if they are done at all, late. Usually too late. Sometimes after the release (like what I am testing now). We’re also missing a review or walkthrough of the test results. There seems to be a slight lack of interest in the test results per release: test reports are currently not written, and there’s mainly no one to read them either. Failing test cases are spotted so late that the pressure to push the release to production means the results are, if not ignored, at least overlooked and set aside for a later look. That later look rarely happens.
Now, I could go on and complain, nag and mumble about this. But I do think there could be a better solution for this.
What we need is the knowledge that the test results exist. That’s the first thing. Well, actually, the results and reports need to be written down in the first place. Secondly, we need to walk through them. And the third thing is to do this at the right time.
Now, I am both fortunate and unfortunate to be (currently) the only test engineer at the company. That being said, there’s plenty of testing going on, done by the developers, automated and manual. Being the single resource for testing means that I am actually quite busy all the time, but it also means that I am not tied to the projects; I cannot be, I need to move around flexibly all the time. This gives me slight freedom when it comes to releases: I do have time to focus on the results and reports. What I could do is start actively showing the results and storing the reports, all the time, for the active projects we have. All I need is a platform for that, and I think the Monday stand-up would be a great place to start.
In this case the reports or the active projects won’t actually matter. What matters is to show what we’ve found. And after a while, I’m pretty sure, people will gain more knowledge, gather interest, get the feeling that quality exists, and we will eventually get more focus on testing.
‘We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors, and files bug automatically’
Now here, I could argue about the e-mail thing. Mail has become outdated when it comes to bug reporting and the like; sending mail only creates clutter. I see it so that we should automatically create bug reports/issues/tickets and assign them to the person responsible for the failing tests. That should be all. The test responsible should then prioritize the issue, check whether it fails due to a bug/defect in the test system or the test case, fix the issue if appropriate, or re-assign it to the actual developer. And all this could be done with an issue-tracking tool (like JIRA, Redmine etc.) without enormous amounts of spam.
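As a rough sketch of that routing idea (the function names, fields and the owner mapping here are all hypothetical, not any real tracker’s API; a real setup would call e.g. JIRA’s or Redmine’s REST interface instead), the logic could look something like this:

```python
# Sketch: file one ticket per failing test and assign it to the
# person responsible for that test, falling back to a triage queue
# when no owner is known. The "tracker" is stubbed out on purpose.

def route_failing_test(test_name, owners, create_issue):
    """Create and assign an issue for a single failing test."""
    assignee = owners.get(test_name, "triage")
    return create_issue(
        title=f"Failing test: {test_name}",
        assignee=assignee,
        labels=["automated", "test-failure"],
    )

# Stub tracker that just records what would have been filed.
filed = []

def create_issue(**fields):
    filed.append(fields)
    return fields

owners = {"test_login": "alice"}          # test -> responsible person
route_failing_test("test_login", owners, create_issue)
route_failing_test("test_checkout", owners, create_issue)
```

The point is simply that the assignment rule lives in one place, so no mail needs to be sent; the tracker itself notifies whoever the issue lands on.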

D1 – 30 Days Of Testing

After 45 pages I still feel like this could somehow be applied to my current workplace. There’s currently roughly a 1:15 ratio of testers to developers. Nice, neat, challenging and an awesome place to work 😀



Joining in (30 Day Testing Challenge)

I know. I’m late, but I have an excuse (as always).

The Ministry of Testing released the 30 Day Testing Challenge in June 2016. Basically, the challenge is a list of 30 + 1 different test-related actions you should have done during July. Unfortunately (or luckily, depending on how you look at things) I was on vacation half of the time, so I figured I’d have a go at it in August instead. And since it’s now August 1st, I assume it’s a good day to start.

In the original post it was somehow implied that the challenge should be done within one calendar month. Since I already broke the initial plan to do the challenge during July, I might as well ruin it in other ways, too.

My plan is simple. I’m going to do the challenge in 30 + 1 workdays. This means I’ll have six weeks to do the challenge, plus one extra day for the last item. I suppose that can be accepted even by the ministry clerks 😀
