D3 – 30 Days Of Testing – Listen to a testing podcast

Yesterday’s challenge was to listen to a testing podcast. So I went to the iTunes Store, which is the de facto standard when it comes to podcasts, at least in my book. In fact, that is the sole reason for me using an iPhone, to be honest. Otherwise I don’t care (not any more) which manufacturer or OS my phone has. After testing Symbian (Nokia & Sony Ericsson used that OS in their phones) and using Android & iOS, I’m getting more and more aware that most of them have their good and bad downsides (yes, that’s what I meant).

I do have a sometimes irritating tendency to get lost while babbling. And that is precisely what happened up there. Now I’ll gather myself and get back on track.

So, I found the podcast. It is Joe Colantonio’s TestTalks, a series of podcasts on testing and test automation. Since the subject is somewhat intriguing, I ended up subscribing to it, too. The episodes seem long enough to listen to on the way to work, whether I ride my bike or take the subway.

I ended up listening to the episode where Rosie Sherry was the guest. And I was positively surprised. Not that I was expecting something lousy, but the thing is that the episode made me think. Which is always a good thing. Regardless of what I said yesterday.

She was talking about the importance of testing itself instead of testing tools, and I found myself agreeing. Even though this blog has been about the tools, I realised that my main focus has been saying that regardless of the tools, the skill of testing is what matters. Or at least that is what I should’ve been saying, for that is what I actually mean about the blunt instruments. They are just tools we use, tools that help us provide knowledge about the behaviour (sometimes even a vision of the quality) of the software we’re testing. At least I think that is where I should be pushing my writing.

What I mean here is that subconsciously I have thought the same way (now I hope I got it right, too), without being completely aware of it. It’s not the tools, it’s the way you use them.

The funny thing is that a few days ago I got a question on Twitter from a former colleague about which SW test automation books he should be reading. And all I could say was ‘How Google Tests Software’, ‘ATDD by Example’ & ‘Lessons Learned in Software Testing’. To be honest, I don’t know much more. I’ve been using the tools, not reading about them. And that has always been my approach in life in general: experiment, fix on the run, read when you need to. I don’t read the manuals, not before I’m stuck. Sometimes I read them too late, maybe too shallowly, and skip some important stuff. I just don’t seem to get myself working the other way. I am an experimental tester, it seems.

Besides the tools, Joe also brought up the communities Rosie has founded: The Ministry of Testing, Testing Dojo & Software Testing Club. And immediately (after getting to work) I found myself browsing for more information about the Dojo, the TestBash happenings and so on. I was hooked: there’s a community of testers! It seems I’ve been living in my own man-cave for a few years now. Let’s see what the communities and the future bring. At least now I feel excited 😀

So, thanks to Rosie & Joe for showing me one more door to the world of testing!



Testing in popular culture

This morning, while browsing for a podcast for this 30 Days of Testing challenge, I found myself thinking. Yes, I know, it’s a harsh condition and I try to avoid it regularly, but one can’t help oneself. Not when you’re born this way, you know, with brains and all.

Anyhow, I started thinking: who was the test engineer at Starfleet Academy who tested the Kobayashi Maru? Clearly, it being a computer scenario, it should have been tested. How else could James T. Kirk have beaten it, even by cheating? And furthermore, on the ship there are mainly just operators available. Someone must have done a hell of a job coding in order to get the USS Enterprise around in orbit in the first place. And, as we all know, where there’s a developer, there should be a tester available.

Which brings me to this: is there testing involved in popular culture? In the ‘Saving Matt Damon’ movie, The Martian (which, by the way, is a great book and not as good a movie; Ridley Scott blew it), NASA skips the tests, apparently based on a risk assessment, in order to send the supplies to Mars a bit earlier; even the length of the tests is briefly discussed. That’s most likely because the author of the book is, if I recall correctly, a SW engineer.

But are there more QA/test references in popular culture? If not, why not? We’re working in the field of SW (well, HW needs to be tested too, but that’s another story) and the field gets bigger and bigger all the time. Nowadays hackers and developers can actually tell people what their profession is, and people in general have some sort of clue what they do for a living.

Me, on the other hand – if I tell my profession to my relatives, I am faced with a puzzled smile and a slightly confused glare. Which I can buy; nobody seems to know what this testing is and what it is all about. Getting more testing stories into popular culture would actually help a bit.

It would be interesting to know if I’m wrong here. All I know is the narrow slice of popular culture I’ve been following. I might be completely wrong, which is always all so human.

By the way, did you know that the Chernobyl disaster was caused by running tests? Some experiments, as it seems. Sounds slightly like exploratory testing to me. It’s always a refreshing thought when someone bashes around the nuclear plant systems.

Just one more: who was the guy who tested the Death Star’s particle exhaust vent security? And who approved the solution?

Note: And the answer comes from the depths of Twitter:

Thanks, @Marcel_Gehlen

PS. I did find the podcast, too.

Notes & reflections on ‘How Google Tests Software’

‘The mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated.’
This is pretty much the approach we should have at FO. And luckily, we do. Both unit and functionality tests are automated when appropriate; the rest is tested manually, using exploratory testing.
The exploratory testing we do has at least one flaw: we’re missing a general way to record and replay the tests (Widget has its own record/play tool, which is OK). There are tools for that, but currently we lack a practical implementation of them.
This also reminds me that the test cases need to be reviewed by the developer and by the product owner (the person responsible for the requirements) before they are automated. Here, the TDD approach would help a lot. It would not hurt to review them afterwards, either.
In our current testing culture, the problem I see is that people are only interested in the delivery being done. And with delivery, it is mainly the development effort that is counted. The tests are done, if they are done at all, late. Usually too late. Sometimes after the release (as with what I am testing now). We’re also missing a review or walkthrough of the test results. There seems to be a slight lack of interest in the results of tests per release: test reports are not being written at the moment, and there’s mostly no one to read them either. Failing test cases are spotted so late that the pressure to push the release to production is so big that the results are, if not ignored, at least overlooked and set aside for a later look. That later look rarely happens.
Now, I could go on and complain, nag and mumble about this. But I do think there could be a better solution.
What we need to have is the knowledge that the test results exist. That’s the first thing. Well, actually, the results and reports need to be written down in the first place. Secondly, we need to walk them through. And the third thing is to do all this at the right time.
Now, I am both fortunate and unfortunate to be (currently) the only test engineer at the company. That being said, there’s plenty of testing going on, automated and manual, done by the developers. Being the single resource for testing means that I am actually quite busy all the time, but it also means that I am not tied to the projects. I cannot be; I need to move around flexibly all the time. This gives me slight freedom when it comes to releases: I do have the time to focus on the results and reports. What I could do is start showing the results and storing the reports actively, all the time, for the active projects we have. All I need is a platform to do that. And I think the Monday stand-up would be a great place to start.
In this case the reports or the active projects won’t actually matter. What matters is to show what we’ve found. And after a while, I’m pretty sure, people will gain more knowledge, gather interest, get a feeling that the quality exists, and we will eventually get more focus on testing.
‘We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors, and files a bug automatically.’
Now, here I could argue about the e-mail thing. Mail has become outdated when it comes to bug reporting and the like; sending mail only creates clutter. I see it so that we should automatically create bug reports/issues/tickets and assign them to the person responsible for the failing tests. That should be all. The test responsible should then prioritize the issue, check whether it fails due to a bug/defect in the test system or in the test case, fix the issue if appropriate, or re-assign the issue to the actual developer. And all this could be done with an issue-tracking tool (like JIRA, Redmine, etc.) without enormous spamming.
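
As a rough sketch of that flow – not anything we have in place, and the server URL, project key and credentials are all made up – automatic ticket creation with the Python jira client could look something like this:

```python
from jira import JIRA  # pip install jira

def file_failing_test_issue(test_name, log_excerpt, responsible):
    """File a bug for a failing test and assign it to the test responsible."""
    jira = JIRA(server="https://jira.example.com",  # hypothetical server
                basic_auth=("ci-bot", "secret"))    # placeholder credentials
    issue = jira.create_issue(
        project="QA",  # hypothetical project key
        summary="Automated test failed: {}".format(test_name),
        description=log_excerpt,
        issuetype={"name": "Bug"},
    )
    # Assign straight to the person responsible for the failing test,
    # instead of spamming everyone's inbox.
    jira.assign_issue(issue, responsible)
    return issue.key
```

The CI system would call something like this from its failure hook, and not a single e-mail needs to be sent.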

D1 – 30 Days Of Testing

After 45 pages I still feel like this could somehow be applied to my current workplace. There’s currently roughly a 1:15 ratio of testers to developers. Nice, neat, challenging and an awesome place to work 😀




Joining in (30 Day Testing Challenge)

I know. I’m late, but I have an excuse (as always).

The Ministry of Testing released a 30 Day Testing Challenge in June 2016. Basically, the challenge is a list of 30+1 different testing-related actions you should have done during July. Unfortunately (or luckily, depending on how you look at things) I was on vacation half of the time, so I figured I’d have a go at it in August instead. And since it’s now August 1st, I assume it’s a good day to start.

In the original post it was somewhat implied that the challenge should be done within one calendar month. Since I already broke the initial plan of doing the challenge during July, I might as well ruin it otherwise, too.

My plan is simple. I’m going to do the challenge in 30+1 workdays. This means I’m going to have six weeks of doing the challenge, plus one extra day for the last task. I suppose that can be accepted even by the ministry clerks 😀


Features of a (good) test environment

Easy to deliver

The test environment should be deliverable on request with as little input as humanly possible. A click of a button, an SMS, or a commit to the VCS should be able to act as a trigger.
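
Just to make the idea concrete, here is a minimal sketch of a commit-triggered delivery – the endpoint name, the payload fields and the provision script are all hypothetical:

```python
from flask import Flask, request  # pip install flask
import subprocess

app = Flask(__name__)

@app.route("/vcs-hook", methods=["POST"])
def deliver_environment():
    # The VCS (e.g. a post-receive hook) POSTs here on every commit.
    payload = request.get_json(silent=True) or {}
    branch = payload.get("ref", "master")
    # provision_env.sh stands in for whatever actually builds the environment.
    subprocess.Popen(["./provision_env.sh", branch])
    return "Environment delivery triggered", 202

if __name__ == "__main__":
    app.run(port=5000)
```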


Reliable

The test environment should be as steady and as reliable as possible, so that we can trust that failing tests are not caused by faults in the test environment itself.

Open Source

I do see that the benefits of open source tools and projects far outweigh what you get from plain commercial tools. Even though you might end up paying for the support and building up the knowledge yourself, it still pays off in the end.

Easy to Set-up

The test environment should be able to be set up by just pressing a button. If external information is needed, it should be possible to enter the set of variables automatically.

Easy to Reset

You should be able to reset the test environment to a desired level of functionality at any point in time in order to re-execute the failed tests. Furthermore, it would be great to have the possibility to run deeper analytical tests automatically in case of failure.
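
A minimal sketch of what a one-command reset could look like, assuming the environment runs under docker-compose (the compose file name is made up):

```python
import subprocess

def reset_test_environment(compose_file="docker-compose.test.yml"):
    """Tear the environment down, volumes included, and bring it back up."""
    subprocess.run(["docker-compose", "-f", compose_file, "down", "-v"], check=True)
    subprocess.run(["docker-compose", "-f", compose_file, "up", "-d"], check=True)

def rerun_failed_tests(output_xml="output.xml"):
    # With Robot Framework, only the previously failed tests can be
    # re-executed using the --rerunfailed option.
    subprocess.run(["robot", "--rerunfailed", output_xml, "tests/"], check=True)
```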

Easy to Monitor

The test environment should provide an interface where you can easily see the status of the tests, historical metrics and also the status of the test environment itself.

Provides test data

The test environment should contain a test data generator which could, on request, fill the databases with relevant test data. It should contain a simple interface for receiving requests and responding with the requested test data. The test data generator should not affect the performance of the item actually under test.
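
For the generator itself, something as simple as the Faker library would already go a long way. A sketch, with the fields chosen just as an example:

```python
from faker import Faker  # pip install Faker

fake = Faker()

def generate_customers(count=100):
    """Produce `count` rows of realistic-looking customer test data."""
    return [
        {"name": fake.name(), "email": fake.email(), "address": fake.address()}
        for _ in range(count)
    ]

# The rows would then be pushed into the database through whatever
# interface the test environment exposes.
print(generate_customers(3))
```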

Provides test results and metrics

The test environment should be able to provide its own status. And since it consists of the whole system plus the underlying parts, it should be able to deliver that status to the users and stakeholders as effortlessly as possible.

You could, for example, have a status interface: an interface that provides the status of the test environment in a single view.

There should be an interface – a web interface would nowadays be enough – that gives you all the relevant information about the tested items. And all this at a glance.

For the event monitoring, you could use ELK (Elasticsearch, Logstash & Kibana).

For the test monitoring, you could use the wall display on Jenkins. That at least shows the status of the executed tests. What I am missing from there is the status of the individual test cases. One way to get them would be to build the tests in Jenkins per test case and add separate views per test suite.

But even that would tell the status only per test suite. What if you had several test suites in the project, 4/5 of them were executed, 1/5 were skipped and 10% of the executed test cases had failed? How would you display that?
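
One way to at least collapse that into a single indicator – a sketch, with the thresholds and the suite structure invented for the example:

```python
def overall_status(suites):
    """Collapse per-suite results into one traffic-light value.

    `suites` is a list of dicts like:
    {"name": "login", "executed": True, "passed": 18, "failed": 2}
    """
    executed = [s for s in suites if s["executed"]]
    if not executed:
        return "RED"  # nothing ran at all
    total = sum(s["passed"] + s["failed"] for s in executed)
    failed = sum(s["failed"] for s in executed)
    skipped = len(suites) - len(executed)
    if total and failed / total > 0.10:
        return "RED"     # more than 10% of the executed cases failed
    if failed or skipped:
        return "YELLOW"  # e.g. 1/5 of the suites skipped, some failures
    return "GREEN"

suites = [
    {"name": "login",   "executed": True,  "passed": 18, "failed": 2},
    {"name": "billing", "executed": False, "passed": 0,  "failed": 0},
]
print(overall_status(suites))  # -> YELLOW
```

Whether that single value is then drawn as a traffic light or a curve is a separate question.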

On top of those, you would also need to know the other metrics: load test results, the combined result of the different test suites, and the status of all available test suites and test cases combined with today’s test case executions. The list seems endless.

And all of it at a single glance.

Or would it be enough to just have one page that indicates the status of the test environment together with the status of the test results? With two different indicators.

But what should the indicators be, then? Traffic lights? Metrics? Curves? Flowcharts? Or a combination of them?

What I’d like to know here is whether such a system already exists, or whether I should start creating it myself. Not that I haven’t reinvented the wheel before (trust me, I have created my share of 3-sided ones), but it would help if I knew I didn’t have to.

Cross platform rant

I just can’t get it: how come it seems to be too difficult for developers to provide cross-platform functionality that actually works? I mean, come on. I’ve been learning to use Robot Framework as the starting point for my own private ATDD project (called Marvin, after The Hitchhiker’s Guide to the Galaxy, of course). This morning I started to create more test cases in order to start (later on) the development of the new login features. And I thought it would be a good use for my work laptop. Really, it has 16 GB of RAM, a processor and a hard drive. The only unfortunate thing about it is that it runs Windows 7, for reasons not quite clear to me, to be honest. It seems to have something to do with the IT department’s capabilities for monitoring and updating the end users’ laptops, and most likely the ability to remotely wipe the hard drive in case the laptop is stolen. Or lost. Apparently the solution used over there, in the wide America, is not flexible enough to be used in the real world. Well, now I’m just being nasty, but still: we develop stuff that runs on Linux and the development is done on Windows. This of course applies to testing, too. Sigh.

Anyhow, everything worked just fine in the beginning. I installed the Robot Framework Eclipse plugin and all, and ended up installing Robot Framework and its Selenium2Library, too. Just like I did last week on the Linux & OS X laptops. Like I said, everything was fine. Until I had to actually execute the tests I had created.

I just can’t get the ancient profile thinking Firefox keeps having on Windows. Really, there’s no point to it. At the moment it really just slows down the development. And of course it feeds my frustration and gets me writing this blog entry (which is not that bad a thing, though). I tried to follow the instructions, too. Really, guys who develop Robot Framework: you could write a decent entry on how to configure Firefox and the webdriver on Windows in order to get the test cases executed. All I found were Python webdriver instructions, which I have used previously myself. Those do work. But there was no single entry anywhere that pointed to a working version of the setup.
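
For reference, the plain Python webdriver route that did work looks roughly like this (selenium 2.x era API; the preference and the URL are just examples):

```python
from selenium import webdriver  # pip install selenium

# Point Firefox at a dedicated, throwaway profile instead of letting it
# dig one up from the Windows user directory.
profile = webdriver.FirefoxProfile()
profile.set_preference("browser.download.dir", r"C:\temp\downloads")  # example pref
profile.update_preferences()

driver = webdriver.Firefox(firefox_profile=profile)
driver.get("http://localhost:8000/login")  # hypothetical app under test
driver.quit()
```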

So here I am: browsing the web with Firefox on my ancient laptop running Ubuntu 14.04 LTS (with an encrypted hard drive, by the way), getting ready to install the Robot Framework Eclipse plugin into LiClipse, to pull the latest changes from GitHub and to execute the actual tests. On Windows that was impossible (well, OK, just NOT worth the effort); on Linux it should not take more than 30 minutes for me to get a decent FAIL on the first test cases.


Apparently it never is enough. Nothing is. A man born a hoarder is a hoarder, even during the nights and weekends. The last 2-3 weeks have been a kind of challenging mayhem at work. I have to admit that my capability of adapting to varying situations has been tested. Several times. Long days, tough days. No time to reset, besides in the evenings at home or at the gym. Mostly both. Luckily, training for the unknown by following CrossFit helps me mentally to get used to attacking the unknown. That’s a blessing. But even though it has been rough, it surely seems I haven’t been able to do enough on the creative side.

I’ve been bouncing around this idea of a web service that could help me as a tester. And since nobody is going to do it for me – partly due to my lack of capability to describe it properly and enthusiastically enough, partly due to the fact that everyone else at work is equally loaded – I’ll need to do it myself.

What I have planned is to do the whole thing with open source SW, using Python, Django and Git. Cucumber has been fetched and installed, as well as Robot Framework. Those are the tools for me to use.

Developing is a creative process and I’m pretty sure I’ll learn a lot on the way. I’m going to tackle this from the ATDD (Acceptance Test Driven Development) point of view. Hopefully I’ll learn new things about testing, as well as the crucial skills for automating my tests and for using development skills as tools in my work as a tester.

Now, since this is going to be part of my work – it is going to be a system test support tool – I cannot release anything (at least as it is) on my GitHub. I do have plans, though, to put something there, but as a stripped-down version.

So far the project has been about setting things up: yesterday, a fresh install of Ubuntu 14.04 on my Dell Latitude; today, getting the tools installed and reading the first introductory chapter on Git usage. Tomorrow I should be good to go.

And for the record, I don’t expect things to be ready any time soon. This is going to be done for my education and amusement only. Status updates might follow every now and then anyhow 😀