Testing in popular culture

This morning, while browsing for a podcast for this 30 Days of Testing challenge, I found myself thinking. Yes, I know, it’s a harsh condition and I try to avoid it regularly, but one can’t help oneself. Not when you’re born this way, you know, with brains and all.

Anyhow, I started thinking: who was the test engineer at Starfleet Academy who tested the Kobayashi Maru? Clearly, it being a computer scenario, it should have been tested. How else could James T. Kirk have beaten it, even by cheating? Furthermore, on the ship there are mainly just operators available. Someone must have done a hell of a coding job to get the USS Enterprise into orbit in the first place. And, as we all know, where there’s a developer, there should be a tester available.

Which brings me to this: is there testing involved in popular culture? In the ‘Saving Matt Damon’ movie, The Martian (which, by the way, is a great book and not as good a movie; Ridley Scott blew it), NASA skips the tests, apparently based on a risk assessment, in order to send the supplies to Mars a bit earlier; even the length of the tests is briefly discussed. That’s most likely because the author of the book is, if I recall correctly, a software engineer.

But are there more QA/test references in popular culture? If not, why not? We work in the field of software (well, hardware needs to be tested too, but that’s another story), and the field gets bigger all the time. Nowadays hackers and developers can actually tell people what their profession is, and people in general have some sort of clue what they do for a living.

Me, on the other hand: if I tell my profession to my relatives, I am faced with a puzzled smile and a slightly confused glare. Which I can buy; nobody seems to know what this testing is and what it is all about. Getting more testing stories into popular culture would actually help a bit.

It would be interesting to know if I’m wrong here. All I know is the narrow slice of popular culture I’ve been following. I might be completely wrong, which is always ever so human.

By the way, did you know that the Chernobyl disaster was caused by running tests? Some experiments, as it seems. Sounds slightly like exploratory testing to me. It’s always a refreshing thought when someone bashes around a nuclear plant’s systems.

Just one more: who was the guy who tested the security of the Death Star’s particle exhaust vents? And who approved the solution?

Note: And the answer comes from the depths of Twitter:

Thanks, @Marcel_Gehlen

PS. I did find the podcast, too.


Notes & reflections on ‘How Google Tests Software’

‘The mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated.’
 
This is pretty much the approach we should have at FO. And luckily, we do. Unit and functional tests are automated when appropriate; the rest is tested manually, using exploratory testing.
The exploratory testing we do has at least one flaw: we’re missing a general way to record and replay the tests (Widget has its own record/play tool, which is OK). There are tools for that, but we currently lack a practical implementation of them.
This also reminds me that test cases need to be reviewed by the developer and by the product owner (the person responsible for the requirement) before they are automated. Here, a TDD approach would help a lot. It would not hurt to review them afterwards, either.
In our current testing culture, the problem I see is that people are only interested in getting the delivery done, and by delivery it is mainly the development effort that is counted. The tests are done, if at all, late; usually too late. Sometimes after the release (as with what I am testing now). We are also missing a review or walkthrough of the test results. There seems to be little interest in the test results per release: test reports are currently not written, and there’s mostly no one to read them either. Failing test cases are spotted so late that the pressure to push the release to production is too big, and the results are, if not ignored, at least overlooked and set aside for a later look. That later look rarely happens.
Now, I could go on and complain, nag and mumble about this. But I do think there could be a better solution for this.
What we need first is awareness that the test results exist. Well, actually, the results and reports need to be written down in the first place. Secondly, we need to walk them through. And thirdly, we need to do this at the right time.
Now, I am both fortunate and unfortunate to be (currently) the only test engineer at the company. That being said, there’s plenty of testing going on, done by the developers, both automated and manual. Being the single resource for testing means that I am actually quite busy all the time, but it also means that I am not tied to the projects; I cannot be, as I need to move around flexibly all the time. This gives me a slight freedom when it comes to releases: I do have time to focus on the results and reports. What I could do is start showing the results and storing the reports actively, all the time, for the active projects we have. All I need is a platform to do that, and I think the Monday stand-up would be a great place to start.
In that case the reports or the active projects won’t actually matter. What matters is showing what we’ve found. And after a while, I’m pretty sure, people will gain knowledge, gather interest, get a feeling that quality exists, and we will eventually get more focus on testing.
‘We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors, and files bug automatically’
Now here I could argue about the e-mail thing. Mail is outdated when it comes to bug reporting and the like; sending mail only creates clutter. The way I see it, we should automatically create bug reports/issues/tickets and assign them to the person responsible for the failing tests. That should be all. The test responsible should then prioritize the issue, check whether it fails due to a bug/defect in the test system or the test case, fix the issue if appropriate, or re-assign it to the actual developer. And all of this could be done with an issue-tracking tool (like JIRA, Redmine, etc.) without enormous spamming.
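To make the idea concrete, here’s a minimal sketch of filing a ticket straight from a failing automated test run via JIRA’s REST API. This is an illustration, not our actual setup: the server URL, credentials and project key are all made up.

    import requests

    JIRA_URL = "https://jira.example.com"   # hypothetical JIRA server
    AUTH = ("ci-bot", "secret")             # hypothetical service account

    def file_bug(test_name, log_excerpt, assignee):
        """Create a bug for a failed test and assign it to the test responsible."""
        fields = {
            "project": {"key": "FO"},       # hypothetical project key
            "summary": "Automated test failed: %s" % test_name,
            "description": "Failure log:\n{noformat}\n%s\n{noformat}" % log_excerpt,
            "issuetype": {"name": "Bug"},
            "assignee": {"name": assignee},
        }
        resp = requests.post(JIRA_URL + "/rest/api/2/issue",
                             json={"fields": fields}, auth=AUTH)
        resp.raise_for_status()
        return resp.json()["key"]           # e.g. "FO-123"

The test responsible gets the ticket immediately, triages it, and either fixes the test or re-assigns the issue to the developer. No mailbox involved.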

D1 – 30 Days Of Testing

After 45 pages I still feel like this could somehow be applied to my current workplace. There’s currently roughly a 1:15 ratio of testers to developers. Nice, neat, challenging and an awesome place to work 😀


Joining in (30 Day Testing Challenge)

I know. I’m late, but I have an excuse (as always).

The Ministry Of Testing released a 30 Day Testing Challenge in June 2016. Basically, the challenge is a list of 30+1 different test-related actions you should have done during July. Unfortunately (or luckily, depending on how you look at things) I was on vacation half of the time, so I figured I’d have a go at it in August instead. And since it’s now August 1st, I assume it’s a good day to start.

In the original post it was somewhat implied that the challenge should be done within one calendar month. Since I already broke the initial plan of doing the challenge during July, I might as well ruin it in other ways, too.

My plan is simple. I’m going to do the challenge in 30+1 workdays. This means I’ll have six weeks for the challenge, plus one extra day for the last item. I suppose that can be accepted even by the ministry clerks 😀


Features of a (good) test environment

Easy to deliver

The test environment should be deliverable on request with as little input as humanly possible. A click of a button, an SMS, or a commit to the VCS should be able to act as a trigger.
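As one example of the trigger idea, here’s a minimal sketch (assuming Flask, and a hypothetical provision.sh script) of a VCS webhook endpoint that kicks off environment delivery on every push:

    import subprocess
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/hooks/vcs-push", methods=["POST"])
    def on_push():
        payload = request.get_json(silent=True) or {}
        branch = payload.get("ref", "unknown")        # e.g. "refs/heads/master"
        # Fire and forget: provisioning runs in the background, the hook returns fast.
        subprocess.Popen(["./provision.sh", branch])  # hypothetical provisioning script
        return "provisioning triggered for " + branch, 202

    if __name__ == "__main__":
        app.run(port=8080)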

Steady

The test environment should be as steady and reliable as possible, so that we can trust that failing tests are not caused by faults in the test environment itself.

Open Source

I do see the benefits of open source tools and projects as far greater than what you get from plain commercial tools. Even though you might end up paying for support and building up the knowledge in-house, it still pays off in the end.

Easy to Set-up

The test environment should be able to be set up by just pressing a button. If external information is needed, it should be possible to enter the set of variables automatically.

Easy to Reset

You should be able to reset the test environment to a desired level of functionality at any point in time in order to re-execute the failed tests. Furthermore, it would be great to be able to automatically run deeper analytical tests in case of failure.
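To make that concrete, a rough sketch of such a reset-and-re-run loop, assuming a Docker Compose based environment and pytest; the ‘deep’ test marker is hypothetical:

    import subprocess

    def reset_environment():
        # Tear the environment down to a known state, then bring it back up fresh.
        subprocess.run(["docker-compose", "down", "-v"], check=True)
        subprocess.run(["docker-compose", "up", "-d"], check=True)

    def rerun_failed():
        # pytest's --last-failed re-executes only the tests that failed last time.
        result = subprocess.run(["pytest", "--last-failed"])
        return result.returncode == 0

    if __name__ == "__main__":
        reset_environment()
        if not rerun_failed():
            # Still failing after a clean reset: run the deeper analytical tests.
            subprocess.run(["pytest", "-m", "deep"])    # hypothetical marker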

Easy to Monitor

The test environment should provide an interface where you can easily see the status of the tests, historical metrics, and also the status of the test environment itself.

Provides test data

The test environment should contain a test data generator that can, on request, fill the databases with relevant test data. It should have a simple interface for receiving requests and responding with the requested test data. The test data generator should not affect the performance of the item actually under test.
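As a toy sketch of the generator idea: a function that fills a database with generated rows on request and returns them to the caller. The table and field names are invented for illustration; a real generator would model the actual schema and sit behind the request/response interface mentioned above.

    import random
    import sqlite3
    import string

    def random_name(length=8):
        return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

    def generate_users(conn, count):
        """Fill the (hypothetical) users table and return the rows to the requester."""
        rows = [(random_name(), random_name() + "@example.com", random.randint(18, 80))
                for _ in range(count)]
        conn.executemany("INSERT INTO users (name, email, age) VALUES (?, ?, ?)", rows)
        conn.commit()
        return rows

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")   # a real setup would use the test database
        conn.execute("CREATE TABLE users (name TEXT, email TEXT, age INTEGER)")
        print(generate_users(conn, 3))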

Provides test results and metrics

The test environment should be able to provide its own status. And since it consists of the whole system plus the underlying parts, it should deliver that status to the users and stakeholders as effortlessly as possible.

You could, for example, have a status interface: an interface that provides the status of the test environment in a single view.

There should be an interface (nowadays a web interface would be enough) that gives you all the relevant information about the tested items. And all this at a glance.

For the event monitoring, you could use ELK (Elasticsearch, Logstash & Kibana).

For the test monitoring, you could use the Jenkins wall display. That at least shows the status of the executed tests. What I am missing there is the status of individual test cases. One way to get that would be to build the tests in Jenkins per test case and add separate views per test suite.

But even that would tell the status only per test suite. What if you had several test suites in a project, 4/5 of them were executed, 1/5 was skipped, and 10% of the executed test cases failed? How would you display that?
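Here’s a sketch of one way to collapse that into a single displayable value: count executed, skipped and failed tests per suite and roll them up into a traffic light. The thresholds are arbitrary placeholders, not a recommendation.

    from dataclasses import dataclass

    @dataclass
    class SuiteResult:
        name: str
        executed: int
        failed: int
        skipped: bool = False

    def overall_status(suites):
        if any(s.skipped for s in suites):
            return "YELLOW"                       # something did not even run
        executed = sum(s.executed for s in suites)
        failed = sum(s.failed for s in suites)
        if executed == 0:
            return "YELLOW"                       # nothing ran at all
        if failed == 0:
            return "GREEN"
        return "RED" if failed / executed > 0.05 else "YELLOW"   # arbitrary 5% limit

    # The example from the text: one suite of five skipped, 10% of executed failed.
    suites = [SuiteResult("smoke", 100, 10), SuiteResult("api", 50, 5),
              SuiteResult("ui", 40, 4), SuiteResult("perf", 10, 1),
              SuiteResult("regression", 0, 0, skipped=True)]
    print(overall_status(suites))                 # -> YELLOW (a suite was skipped)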

On top of those, you would need to know the other metrics: load test results, combined results of the different test suites, the status of all available test suites and test cases combined with today’s test case executions. The list seems endless here.

And all in a single look.

Or would it be enough to just have one page that shows the status of the test environment together with the status of the test results, with two different indicators?

But what should the indicators be? Traffic lights? Metrics? Curves? Flowcharts? Or a combination of them?

What I’d like to know here is whether such a system already exists, or whether I should start creating it myself. Not that I haven’t reinvented the wheel before (trust me, I have created my share of 3-sided ones), but it would help to know I don’t have to.

Delivering a Hippo

When delivering software through the delivery chain, I like to refer to the software as a hippopotamus. I have previously heard Michael Bolton (not the singer, or the guy from Office Space) talk about software, the delivery process, and especially requirement handling by telling the story of the blind men and the elephant. I have my reasons for switching the elephant to a hippo, which I’ll explain later on.
If you like elephants more than hippos, too bad.

The Story

There is a group of blind men. They are given the task of describing what a hippopotamus looks like. They need to give the description to a man who will build a crate to transport the hippo to the wild.
The group first gathers around the head. They describe the beast as a big-headed creature with a huge mouth and long, sharp teeth, relatively small eyes, and small but alert ears. The creature also seems to have a slightly bad temper. They measure the size of the head and give the description to the transport team.
After the description, the transport team builds a crate and the delivery starts.
They get the whole head into the crate; it fits perfectly. But then there comes something else.
It seems they’ve forgotten something.
At this point it’s about time for me to reveal a bit of how I see this. At first I saw the group of blind men as a bunch of software developers delivering the latest features in a maintenance project. But to be fair, the group should also contain some test engineers, the requirement responsible (product owner, or whatever the title is), a project manager and so forth. As far as I know, regardless of our role in the project, in this kind of scenario we people tend to go blind. And it is not limited to this scenario: when we humans concentrate on the problem at hand, we are great at concentrating on just that and forgetting everything else. Which is somewhat crucial; it is crucial for the delivery to be delivered as it should be. But it should be just as crucial to focus on other things than the delivery itself.
What the group of blind men has forgotten is that the hippo has a body. The body is bigger and wider than the head, and therefore it has no place in the crate. However, if they want to deliver the head, they should also deliver the body. So they turn back to the hippo and feel the sheer size of its body.
They describe the quite huge creature to the transport team. The transport team, for they are clever, will now build a crate to contain both the head and the body, not just the head.
So everything should now be fine, or what do you think?
The team has now delivered the complete hippo to the transport team and managed to get it inside the crate. Sure, and everybody is happy. The main functionality is delivered with the maintenance package without a huge amount of flaws in it. As it usually goes.
But did they forget something? No, everything is neatly packaged and ready to be delivered.
Well, the hippo has a rear end. And as you know, hippos tend to mark their territory with their rear end. They are clever in that they spread their manure (a.k.a. hippo shit) around by wagging their tails vigorously. It’s an effective way to spread the message. And there we have the slip-through. To be honest, it wouldn’t matter whether the group of men were blind or not; slip-throughs will eventually happen, every now and then, to every team, visually impaired or not. Really, we’ve all been there.
There are a few things we can learn from this. First of all, the roles in the team should be spread a bit. There should be at least one person taking care of maintenance and regression; whether that is a test engineer or a developer does not actually make any great difference. It is usually enough to verify that regression does not happen and that the old stuff is delivered as well. The other thing to consider would be to have one of the transport team check the delivery.
And for the love of God (or whatever deity your world revolves around: money, consumption, the stock exchange, sex, rock n’ roll, etc.), don’t get me wrong here. I do not mean that we need more people on the team. I do think that more blind men see just as much as fewer of them. What I mean by roles is that we humans are capable of switching between different roles, even during the workday. Regardless of what I think about multitasking (I am strongly against it; really, it does not work for me), I do think that the same person who handles the delivery of the main functionality can also handle the delivery and verification of the maintenance delivery. And the verification of the regression tests. It does not even have to be a developer who does that. Or QA, depending on which side you look at it from. There are plenty of things we QA engineers can learn from developers, and developers can also learn a thing or two from testers. Some test engineers are even capable of writing code themselves, and most developers are capable of testing their (or someone else’s) code during the day. And after this is done, they can get back to concentrating on delivering the main functionality. It really is as simple as that. Nothing fancy here.
So, now we’ve covered the main functionality and the regression. The head and the body are happily delivered. What to do with the tail?
First of all: we will always have slip-throughs. Really, there is no way to get rid of them. There is no way we can test and code in a way that leads to a delivery with nothing weird hidden in it. But of course we should do everything we can to prevent them from happening.
What could help here is having a role in the team for experimental testing. Do something unconventional. Someone on the team should get to bash the hell out of the application, tease the boundaries, break it, create chaos. Do things users won’t do (that list is actually shorter than you think ;)). That will eventually reveal some of the bugs that more conventional testing won’t. If possible, automate it and do it all the time. Just keep the chaos running.
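As a toy example of automated chaos, here’s a loop that throws random, hostile inputs at an HTTP endpoint and logs anything that blows up. The URL and payload shapes are hypothetical; a real setup would of course target your own test system, never production.

    import random
    import string
    import requests

    TARGET = "http://localhost:8080/api/items"    # hypothetical endpoint

    def weird_string(max_len=200):
        # Include awkward characters on purpose: control chars, umlauts, RTL marks.
        alphabet = string.printable + "åäö\u0000\u202e"
        return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

    def chaos_round():
        payload = {weird_string(10): weird_string() for _ in range(random.randint(0, 5))}
        try:
            resp = requests.post(TARGET, json=payload, timeout=5)
            if resp.status_code >= 500:
                print("Server error %s for payload: %r" % (resp.status_code, payload))
        except requests.RequestException as exc:
            print("Request blew up: %s" % exc)

    if __name__ == "__main__":
        for _ in range(1000):    # or while True, to keep the chaos running
            chaos_round()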

Installing Ubuntu 15.04 on Dell Latitude D810

I have this old laptop, a Dell Latitude D810. It is otherwise quite OK: the display resolution is excellent and the CPU power is enough for browsing and the like. The only thing that bothers me is the lack of memory, or rather the limit on how much memory you can use; 2 GB of memory is nowadays below the minimum.

However, it keeps my writing going, and web browsing is actually easier and more efficient than with the Asus Eee PC 1101HA, which seems to lack pretty much everything else you need to get around.

Ok, enough babbling. Back to the actual topic:

The installation procedure was pretty simple (this time).

  • Download Ubuntu 15.04 (32-bit version)
  • Write the downloaded image to USB disk:
    1. Plug in the USB
    2. umount /dev/sdb (if needed; I didn’t have to, as my Arch Linux did not automount the disk)
    3. ]$ dd bs=4M if=Downloads/ubuntu-15.04-desktop-i386.iso of=/dev/sdb
    4. Eject the USB stick (eject /dev/sdb)
  • Plug in the USB stick to Dell Latitude D810
  • During the Start Up, press F12 and select USB device as the boot device
  • During the installation process, do not select the 3rd-party software to be installed; it halts the computer.
  • If you have a network cable, plug it in during the installation procedure and download the updates during the installation
  • After installation, restart the computer
  • When computer has been restarted, do the following:
    1. Open terminal (Ctrl+Alt+T)
    2. Run the following commands:
  ]$ sudo apt-get update
  ]$ sudo apt-get install firmware-b43-installer
  ]$ sudo modprobe -r b43 bcma
  ]$ sudo modprobe -r brcmsmac bcma
  ]$ sudo modprobe b43
  • Restart the computer

The information gathered above can also be found on the Ubuntu WifiDocs site. All I did was gather it into a simplified list. Most likely the same approach works on Debian, too.

Re-evaluating and bouncing

I actually managed to re-evaluate my plans and compare them against my skill set. There is a slight gap there. I can now see that my programming skills are not near what the project needs. That means I’ll have to build up the resources.

This means, of course, that the algorithm training will continue, but that alone is not enough. In fact, I decided to get back to basics and go through two topics.

  1. Programming in Python 3
  2. Twisted tutorial (and everything related)

Getting those two things done will eventually get me closer to my ultimate goal: to become an efficient technical tester (a test toolsmith, some might say).

Earlier, in projects like this, I’ve let myself get sidetracked, pretty easily too. So this time I’ll have to teach myself some discipline and perseverance. What I mean is that no matter what ideas I get for the projects I have in mind (be it the message answering machine or my Crossfit whiteboard), I should only write the ideas down and get back to them after I’ve finished the ‘course’.

The learning should not take too long. Honestly put, I have had programming education in both Java and Python. The Java education took place a long time ago, just before I was hired for my first test engineer job. The Python training, on the other hand, was only a few years ago. So re-learning the stuff (and adding more; the course was just a basic one) should not take too much time. And this time I just have to make it all the way. The benefit of doing this at home, as my private project, is that I can apply my skills right away after I’ve done the learning. Lately, the use of my learned skills has been more or less nothing. I know it is a matter of determination at work too, but previous tasks at work have kept me so busy that the use of any coding capabilities has been more or less close to zero. Now that also seems to be changing, hopefully for good.

Besides Python, I have a sneaking feeling that I should learn a bit more Groovy. Which actually means I should also refresh my Java skills; not the worst idea either. Maybe after the Python is done I can concentrate more on Groovy. Besides, we do use Groovy scripting with SoapUI at work, so that one comes naturally through work, too. The other day I even managed to fetch a file from FTP to my desktop. Woohoo 😀
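Since the Python refresher is on the list anyway, here’s roughly what the same FTP fetch looks like with Python’s standard library ftplib. The host, credentials and file path are made up for illustration:

    from ftplib import FTP

    def fetch_file(host, user, password, remote_path, local_path):
        """Download a single file from an FTP server to the local disk."""
        with FTP(host) as ftp:
            ftp.login(user, password)
            with open(local_path, "wb") as fh:
                ftp.retrbinary("RETR " + remote_path, fh.write)

    # Made-up host, credentials and paths:
    fetch_file("ftp.example.com", "demo", "secret", "reports/latest.txt", "latest.txt")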

Last but not least, I still have my project plans: to create something useful using Python, Robot Framework and ATDD methods to test and build everything. I just need to build my wheels first. Slow and steady should eventually win the race.