Excuses get in the way

I know, every excuse is really just an excuse for failing to prioritise, but sometimes prioritising actually nails you down to something you just have to concentrate on and work through. This week has been one of those weeks.

So to say, releases have been flowing in through doors and windows, and I find myself testing (or wanting to test) them all.

Which of course has meant that I haven’t been able to fulfil the 30 Days of Testing assignments. Currently I am lagging behind by 1½–2 days. My plan is to get back on track during this week anyhow, meaning that I’ll do something over the weekend.

This is just to let you know that I am aware of the situation.

Besides that, I ended up going through this tutorial yesterday and realised that this MochaJS thing seems to be a neat way to learn JavaScript and some test development 😀 I might even give it a more thorough run later on. I also talked with the author (Viktor Johansson) about collaborating on a neat tutorial combining BDD & Robot Framework. Oh, and I managed to install Skype on Fedora, which is always an accomplishment 😉
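
For the record, here is roughly what a Mocha test in the BDD style looks like. This is a minimal sketch of my own, not taken from the tutorial; the `add` function is just a made-up example:

```js
// Run with: npx mocha add.test.js
const assert = require('assert');

// A trivial function under test, made up for this example.
function add(a, b) {
  return a + b;
}

describe('add()', function () {
  it('adds two positive numbers', function () {
    assert.strictEqual(add(2, 3), 5);
  });

  it('handles negative numbers', function () {
    assert.strictEqual(add(-2, -3), -5);
  });
});
```

The describe/it structure is what makes it BDD-flavoured: the test reads as a specification of the behaviour, which is also why it feels like a natural stepping stone towards Robot Framework-style specs.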

We’ll see what tomorrow brings.

Crazy Testing


Last week I joined the 30 Days of Testing challenge organised by the Ministry of Testing. And I have to admit it has been fun. Besides finding myself doing things I should have been doing all along, I’ve also done some new things.

First of all, the ‘Listen to a testing podcast‘ challenge was both revealing and fruitful. I’ve now been listening to TestTalks while riding my bike to and from work, and I’ve found new ideas, new ways to think about testing. It is really refreshing to notice that, even though I’ve been working as a lone tester quite a lot for the past years (and I’ve done that on purpose, too), there’s a community out there: people thinking about, if not exactly like me, at least the same kinds of issues and situations I face in my profession. So to say, it has been a blast to notice that 😀

Yesterday’s challenge was to do a ‘Crazy Test’. So, what I did was the following:

  1. Connect iPhone to the car with Apple CarPlay
  2. Use Siri to start music (‘Play music from artist Metallica‘)
  3. Wait until Siri gives you a ping that it has received the command (but before it confirms what it is about to do)
  4. Put the car in reverse gear

Now, my expected result would be that I could reverse the car while watching the rear camera and listening to music (in this example, Metallica). Of course, it did nothing of the sort. I received the ping from Siri indicating it had received my command, but when the rear camera switched on, the command was strangely gone. I could reproduce the behaviour with a phone call request as well.

Now, one might think it is a safety feature. But I can’t see it that way; to me it feels like a bug. The software is not working the way I expect it to. If I pick up the phone, dial manually and start reversing, the call is connected. The same goes for music: I can perfectly well listen to music (even Metallica) and reverse at the same time.

Now, is this a race condition, or something else? Where is the problem? In Siri/Apple CarPlay? In the API between the car and CarPlay? Has this scenario been tested? It does not seem likely. Or if it has been tested, perhaps the results were ignored in order to get the software out in time. Or perhaps the responsibility for the behaviour has not been clear: should the car take care of this, or the phone?

You might also say (and I wouldn’t argue against it) that this wasn’t a particularly crazy test, more of an exploratory boundary/edge case. I was not driving my car sideways on ice, or at 110 km/h on the highway. I don’t actually care; it was crazy enough for me. Plus, I noticed something new and learned something new, which I suppose should be the main point in life and work altogether. I mean, you can never be crazy enough, not unless you learn more and get crazier.

And I’d like to automate that test. For real 😀
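
If I ever do, the skeleton might look something like the following. To be clear, this is purely hypothetical: there is no such thing as the rig driver below, it just names the capabilities such a test rig would need:

```js
// Purely hypothetical sketch: the './carplay-rig' module does not exist.
// Its functions only name the capabilities an automated rig would need.
const assert = require('assert');
const rig = require('./carplay-rig');

describe('Siri command vs. rear camera', function () {
  it('keeps the Siri command alive when reverse gear cuts in', async function () {
    await rig.connectPhone();                           // step 1: iPhone + CarPlay
    await rig.siri('Play music from artist Metallica'); // step 2: voice command
    await rig.waitForSiriAck();                         // step 3: the ping, before confirmation
    await rig.shiftToReverse();                         // step 4: rear camera takes over

    // Expected: the music starts even though the rear camera view is active.
    assert.strictEqual(await rig.nowPlayingArtist(), 'Metallica');
  });
});
```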


D3 – 30 Days Of Testing – Listen to a testing podcast

Yesterday’s challenge was to listen to a testing podcast. So I went to the iTunes Store, which is the de facto standard when it comes to podcasts, at least in my book. In fact, it’s the sole reason I use an iPhone, to be honest. Otherwise I don’t care (not any more) which manufacturer or OS my phone has. After testing Symbian (Nokia & Sony Ericsson used that OS in their phones) and using both Android & iOS, I’m getting more and more aware that most of them have their good and bad downsides (yes, that’s what I meant).

I do have a, sometimes irritating, tendency to get lost while babbling. And that is precisely what happened up there. Now let me gather myself and get back on track.

So, I found a podcast: Joe Colantonio’s TestTalks, a series of podcasts on testing and test automation. Since the subject is intriguing, I ended up subscribing to it, too. The episodes seem just long enough to listen to on the way to work, whether I ride my bike or take the subway.

I ended up listening to the episode where Rosie Sherry was the guest. And I was positively surprised. Not that I was expecting something lousy, but the episode made me think. Which is always a good thing, regardless of what I said yesterday.

She was talking about the importance of testing rather than testing tools, and I found myself agreeing. Even though this blog has been about the tools, I realised that my main message has been that, regardless of the tools, the skill of testing is what matters. Or at least that is what I should have been saying, for that is what I actually mean by the blunt instruments: they are just tools we use, tools that help us provide knowledge about the behaviour (sometimes even a vision of the quality) of the software we’re testing. At least, that is the direction I should be pushing my writing towards.

What I mean is that I have subconsciously thought the same way (and now I hope I got it right, too) without being completely aware of it. It’s not the tools, it’s the way you use them.

The funny thing is that a few days ago I got a question on Twitter from a former colleague about which SW test automation books he should be reading. All I could come up with was ‘How Google Tests Software’, ‘ATDD by Example’ & ‘Lessons Learned in Software Testing’. To be honest, I don’t know many more. I’ve been using the tools, not reading about them. And that has always been my approach in life in general: experiment, fix on the run, read when you need to. I don’t read the manuals until I no longer know what to do. Sometimes I read them too late, maybe too shallowly, and skip some important stuff. I just can’t seem to get myself to work the other way. I am an experimental tester, it seems.

Besides the tools, Joe also brought up the communities Rosie has founded: the Ministry of Testing, Testing Dojo & Software Testing Club. And immediately (after getting to work) I found myself browsing for more information about the Dojo, the TestBash events and so on. I was hooked: there’s a community of testers! It seems I’ve been living in my own man cave for a few years now. Let’s see what the communities and the future bring. At least now I feel excited 😀

So, thanks to Rosie & Joe for showing me one more door to the world of testing!


Testing in popular culture

This morning, while browsing for a podcast for the 30 Days of Testing challenge, I found myself thinking. Yes, I know, it’s a harsh condition and I try to avoid it regularly, but one can’t help oneself. Not when you’re born this way, you know, with brains and all.

Anyhow, I started thinking: who was the test engineer at Starfleet Academy who tested the Kobayashi Maru? Clearly, it being a computer scenario, it should have been tested. How else could James T. Kirk have beaten it, even by cheating? Furthermore, on the ship there are mainly only operators available. Someone must have done a hell of a lot of coding to get the USS Enterprise into orbit in the first place. And, as we all know, where there’s a developer, there should be a tester.

Which brings me to this: is there testing in popular culture? In the ‘Saving Matt Damon’ movie, The Martian (which, by the way, is a great book and a not-as-good movie; Ridley Scott blew it), NASA skips the tests, apparently based on a risk assessment, in order to send the supplies to Mars a bit earlier; even the length of the tests is briefly discussed. That’s most likely because the author of the book is, if I recall correctly, a SW engineer.

But are there more QA/testing references in popular culture? If not, why not? We’re working in the field of SW (well, HW needs to be tested too, but that’s another story) and the field gets bigger and bigger all the time. Nowadays hackers and developers can actually tell people what their profession is, and people in general have some sort of clue what they do for a living.

Me, on the other hand: if I tell my relatives my profession, I am faced with a puzzled smile and a slightly confused glare. Which I can understand; nobody seems to know what this testing is and what it is all about. Getting more testing stories into popular culture would actually help a bit.

It would be interesting to know if I’m wrong here. All I know is the narrow slice of popular culture I’ve been following. I might be completely wrong, which is always so very human.

By the way, did you know that the Chernobyl disaster was caused by running tests? Some experiments, it seems. Sounds slightly like exploratory testing to me. It’s always a refreshing thought, someone bashing around a nuclear plant’s systems.

Just one more: who was the guy who tested the security of the Death Star’s particle exhaust vents? And who approved the solution?

Note: an answer came from the depths of Twitter. Thanks, @Marcel_Gehlen!

PS. I did find the podcast, too.

Notes & reflections on ‘How Google Tests Software’

‘The mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated.’

This is pretty much the approach we should have at FO. And luckily, we do: unit and functional tests are automated where appropriate, and the rest is tested manually, using exploratory testing.

The exploratory testing we do has at least one flaw: we’re missing a general way to record and replay the tests (Widget has its own record/play tool, which is OK). There are tools for that, but we currently lack a practical implementation of them.

This also reminds me that test cases need to be reviewed by the developer and by the product owner (the person responsible for the requirement) before they are automated. Here, the TDD approach would help a lot. It would not hurt to review them afterwards, either.

The problem I see in our current testing culture is that people are only interested in the delivery being done. And by delivery, it is mainly the development effort that counts. The tests are done, if at all, late; usually too late. Sometimes after the release (as with what I am testing now). We’re also missing a review or walkthrough of the test results. There seems to be a slight lack of interest in the results of tests per release: test reports are not being written at the moment, and there’s mostly no one to read them either. Failing test cases are spotted so late that the pressure to push the release to production is so big that the results are, if not ignored, at least overlooked and set aside for a later look. That later look rarely happens.

Now, I could go on and complain, nag and mumble about this. But I do think there is a better solution.

What we need first is the knowledge that the test results exist. Well, actually, the results and reports need to be written down in the first place. Secondly, we need to walk them through. And thirdly, we need to do this at the right time.

Now, I am both fortunate and unfortunate to be (currently) the only test engineer at the company. That said, there’s plenty of testing going on, done by the developers, both automated and manual. Being the single resource for testing means that I am actually quite busy all the time, but it also means that I am not tied to the projects; I cannot be, I need to move around flexibly all the time. This gives me a slight freedom when it comes to releases: I do have time to focus on the results and reports. What I could do is start showing the results and storing the reports actively, all the time, for the active projects we have. All I need is a platform for that, and I think the Monday stand-up would be a great place to start.

In that case the reports or the active projects won’t actually matter. What matters is to show what we’ve found. And after a while, I’m pretty sure, people will gain more knowledge, gather interest, get a feeling that quality exists, and we will eventually get more focus on testing.

‘We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors, and files a bug automatically.’

Now here I could argue about the e-mail thing. Mail is outdated when it comes to bug reporting and the like; sending mail only creates clutter. The way I see it, we should automatically create bug reports/issues/tickets and assign them to the person responsible for the failing tests. That should be all. The test responsible should then prioritise the issue, check whether it fails due to a defect in the test system or the test case, fix the issue if appropriate, or re-assign it to the actual developer. All of this could be done with an issue-tracking tool (like JIRA, Redmine etc.) without enormous amounts of spam.
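
As a rough sketch of what I mean, something like the following could run whenever a test fails. It uses JIRA’s REST API (POST /rest/api/2/issue); the URL, project key, credentials and assignee below are placeholders, not a real setup:

```js
// Sketch: file a JIRA issue automatically for a failing test, instead of
// sending e-mail. All names, URLs and credentials are placeholders.
async function fileBugForFailingTest(testName, failureDetails, testResponsible) {
  const response = await fetch('https://jira.example.com/rest/api/2/issue', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Basic auth for illustration only; a real setup would use a token.
      'Authorization': 'Basic ' + Buffer.from('bot-user:secret').toString('base64'),
    },
    body: JSON.stringify({
      fields: {
        project: { key: 'PROJ' },            // placeholder project key
        issuetype: { name: 'Bug' },
        summary: `Automated test failed: ${testName}`,
        description: failureDetails,
        // Assigned to the test responsible first, who triages and
        // re-assigns to the developer if it turns out to be a real defect.
        assignee: { name: testResponsible },
      },
    }),
  });
  if (!response.ok) {
    throw new Error(`JIRA returned HTTP ${response.status}`);
  }
  return response.json();
}
```

The point is that the failing test lands straight in the tracker, assigned to the right person, and nobody’s inbox gets spammed.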