Day 4 (of 30 Days of Automation in Testing)

Day 4: What kind of testing can automation support you with?

Unit testing comes to my mind first. It helps me as a tester in that I can let the developers do the deeds and learn to trust that they do things right. Even reviewing the unit test code is a neat learning experience, once you get to understand the unit test logic.

Functional testing during or right after the build can also be done with the same unit-testing approaches, which is a neat way to verify the functionality. The benefit there is that the code is readable by the developer who wrote it and (in case it's not written in Perl) even by other members of the team or company. Having the tests usable and readable in the same IDE the developers use also leads to easier debugging. What is usually lacking is decent reporting, but then again, all we need to know is that a test fails when it should and passes when everything really is OK.
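
To give a rough idea of what I mean, here is a minimal sketch in pytest; the calculate_discount function is made up for the example and stands in for real production code:

```python
# A minimal build-time functional check, pytest-style.
# calculate_discount() is a toy stand-in for the real production code.
import pytest


def calculate_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


def test_discount_is_applied():
    # Passes when everything really is OK...
    assert calculate_discount(100.0, 25.0) == 75.0


def test_invalid_percent_is_rejected():
    # ...and fails when it should.
    with pytest.raises(ValueError):
        calculate_discount(100.0, 120.0)
```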

Mocking the test environment and harnessing the SUT walk hand in hand. There are plenty of nice mocking tools, WireMock for example. One might also re-invent the wheel here; the mock can be just a file you fetch from another system or write yourself. That does not sound like much automation, though 😀
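
To show how little is needed, here is a sketch of registering a stub over WireMock's admin API; it assumes a standalone WireMock listening on localhost:8080, and the endpoint and payload are invented:

```python
# Register a stub in a locally running WireMock via its admin API.
import requests

stub = {
    "request": {"method": "GET", "url": "/api/user/1"},
    "response": {
        "status": 200,
        "jsonBody": {"id": 1, "name": "Test User"},
        "headers": {"Content-Type": "application/json"},
    },
}

# WireMock accepts new mappings as plain JSON on its admin endpoint.
requests.post("http://localhost:8080/__admin/mappings", json=stub).raise_for_status()

# The SUT can now be pointed at localhost:8080 instead of the real backend.
print(requests.get("http://localhost:8080/api/user/1").json())
```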

Load testing is also one of the standard things to automate. Given the power of modern servers, new container technologies and distributed load-testing services, you can build your own solution or use a ready-made service like BlazeMeter or OctoPerf.
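
For the build-your-own route, a minimal sketch with Locust (one open-source option; the endpoints and credentials are made up) looks like this:

```python
# A minimal Locust load test: each virtual user browses and logs in.
from locust import HttpUser, task, between


class ShopUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time between tasks

    @task(3)
    def browse_items(self):
        self.client.get("/items")

    @task
    def login(self):
        self.client.post("/login", json={"user": "test", "password": "test"})
```

Run it with `locust -f loadtest.py --host https://your-staging-host` and ramp the number of users up from the web UI.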

Repetitive functionality tests, regression tests and release tests at any step of the development process are good to automate, given that you are going to use the automation more than a few times.

Then there is the whole world of Continuous Delivery and Continuous Integration. It will not happen properly if you do not have the right tests in the right places. Well, it might be OK to let the users test the product, but that would mean you should have an automated monitoring system that gathers the data you will need. That, too, is a kind of automation that will help.

And of course there are all the test data generation tools, self-made or otherwise, and the bash or Python scripts for setting up the environments together with containers and their management tools (docker, docker-compose and kubernetes, to start with).
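
As one self-made example, a test data generator can be as small as this sketch built on the Faker library (the schema is invented):

```python
# Generate plausible-looking user records for seeding a test database.
from faker import Faker

fake = Faker()


def generate_users(count: int) -> list:
    return [
        {"name": fake.name(), "email": fake.email(), "address": fake.address()}
        for _ in range(count)
    ]


if __name__ == "__main__":
    for user in generate_users(3):
        print(user)
```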

I am most likely still missing something here. I mean, where do you draw the line when it comes to automation in testing? Is it only the testing tools (like JMeter, Robot Framework etc.), or also the tools that let you automate mundane tasks yourself to ease your testing process?

30 Days of Automation in Testing

Once again it is summer, and once again the Ministry of Testing comes up with an interesting 30-day challenge.
This year I already spent my vacation in May, so that excuse for not doing this won't do; I'll have to tackle this one.
The challenge is about test automation, sponsored by SmartBear (I do not like their products, but what can you do except not use them 😉).

Day 1: Look up some definitions for ‘Automation’, compare them against definitions for ‘Test Automation’

Whereas the definition of automation in Wikipedia (the source of sources) covers a wide range of different mechanical and programmatic approaches, test automation is considered software-related only and is listed as part of software development. I do think one could test things mechanically and have the checkers, tools and results controlled by machine, at least when it comes to simple checks. Wherever you need human interpretation, it is actually easier to get a fellow human to make the judgement. Machines are good at making repetitive tasks easy, but they are also quite capable of repeating an erroneous step several times automatically, unless someone controls the situation.
But anyhow, the biggest difference between automation and test automation, according to Wikipedia, is that test automation is narrowed down to something done with software.

Day 2: Begin reading an automation related book and share something you’ve learnt by day 30.

Agile Automation and Unified Functional Testing by Rajeev Gupta

I’m also trying to make it through this course:
REST API Automation testing from scratch-(REST Assured java)

Day 3: Explore the automation thread on The Club and contribute to the conversation

Added my contribution to https://club.ministryoftesting.com/t/when-to-start-automation/11743/10

Features of a (good) test environment

Easy to deliver

The test environment should be deliverable on request with as little input as humanly possible. A click of a button, an SMS sent, or a commit to the VCS should be able to act as the trigger.

Steady

The test environment should be as steady and as reliable as possible, so that we can trust that failing tests are not caused by faults in the test environment.

Open Source

I do see the benefits of open source tools and projects as far greater than what you get from plain commercial tools. Even though you might end up paying for support and building up the knowledge, it still pays off in the end.

Easy to Set-up

The test environment should be able to be set up by just pressing a button. If external information is needed, it should be possible to feed in the required variables automatically.
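
A minimal sketch of that button press, assuming the environment is described in a docker-compose.yml and using variable names I made up:

```python
#!/usr/bin/env python3
# One-command set-up: feed in the needed variables and start the stack.
import os
import subprocess


def set_up_environment() -> None:
    env = os.environ.copy()
    env.setdefault("DB_PASSWORD", "test-only-secret")  # assumed variable
    env.setdefault("APP_PORT", "8080")                 # assumed variable
    subprocess.run(["docker-compose", "up", "-d"], env=env, check=True)


if __name__ == "__main__":
    set_up_environment()
```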

Easy to Reset

You should be able to reset the test environment to a desired level of functionality at any point in time in order to re-execute the failed tests. Furthermore, it would be great to have the possibility to automatically run deeper analytical tests in case of failure.
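
With the same compose stack as above, a reset sketch is hardly any longer:

```python
# Tear the stack down, volumes included, and bring it back to a known baseline.
import subprocess


def reset_environment() -> None:
    subprocess.run(["docker-compose", "down", "-v"], check=True)  # wipe state
    subprocess.run(["docker-compose", "up", "-d"], check=True)    # fresh start


if __name__ == "__main__":
    reset_environment()
```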

Easy to Monitor

The test environment should provide an interface where you can easily see the status of the tests, historical metrics, and also the status of the test environment itself.

Provides test data

The test environment should contain a test data generator that can, on request, fill the databases with relevant test data. It should offer a simple interface for receiving requests and responding with the requested test data. The test data generator should not affect the performance of the item actually under test.
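
As a sketch of that simple interface, a tiny HTTP service (Flask here, purely as one option; the fields are invented) could respond with freshly generated rows:

```python
# A tiny test data service: ask for N rows, get N generated rows back.
import random
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/testdata/users")
def users():
    count = int(request.args.get("count", "10"))
    rows = [
        {"id": str(uuid.uuid4()), "age": random.randint(18, 80)}
        for _ in range(count)
    ]
    return jsonify(rows)


if __name__ == "__main__":
    # Runs as its own process, so generating data does not eat the SUT's resources.
    app.run(port=5000)
```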

Provides test results and metrics

The test environment should be able to report its own status. And since it consists of the whole system plus the underlying parts, it should be able to deliver that status to the users and stakeholders as effortlessly as possible.

You could, for example, have a status interface: one that provides the status of the test environment in a single view.

There should be an interface – a web interface would nowadays be enough – that gives you all the relevant information about the tested items. And all of it at a glance.

For the event monitoring, you could use ELK (Elasticsearch, Logstash & Kibana).

For the test monitoring, you could use the wall display in Jenkins. That at least shows the status of the executed tests. What I am missing there is the status of individual test cases. One way to get that would be to build the tests in Jenkins per test case and add separate views per test suite.

But even that would tell the status only per test suite. What if you had several test suites in the project, 4/5 of them were executed, 1/5 was skipped, and 10% of the executed test cases had failed? How would you display that?
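
Collapsing those numbers out of per-suite results is the easy part; here is a sketch with an invented data structure that reproduces the 4/5, 1/5 and 10% above. Displaying them well is the hard part:

```python
# Aggregate per-suite results into the headline numbers.
suites = [
    {"name": "smoke",      "executed": True,  "cases": 50,  "failed": 5},
    {"name": "regression", "executed": True,  "cases": 200, "failed": 20},
    {"name": "api",        "executed": True,  "cases": 80,  "failed": 8},
    {"name": "ui",         "executed": True,  "cases": 70,  "failed": 7},
    {"name": "load",       "executed": False, "cases": 0,   "failed": 0},
]

executed = [s for s in suites if s["executed"]]
total_cases = sum(s["cases"] for s in executed)
failed_cases = sum(s["failed"] for s in executed)

print(f"{len(executed)}/{len(suites)} suites executed")               # 4/5
print(f"{len(suites) - len(executed)}/{len(suites)} suites skipped")  # 1/5
print(f"{failed_cases / total_cases:.0%} of executed cases failed")   # 10%
```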

On top of those you would need to know the other metrics: load test results, the combined result of the different test suites, and the status of all available test suites and test cases, combined with today's test case executions. The list seems endless here.

And all of that at a single glance.

Or would it be enough to have just one page that indicates the status of the test environment together with the status of the test results, with two different indicators?

But what should the indicators then be? Traffic lights? Metrics? Curves? Flowcharts? Or a combination of them?

What I'd like to know is whether such a system already exists, or should I start creating it myself? Not that I haven't reinvented the wheel before (trust me, I have created my share of 3-sided ones), but it would help to know I don't have to.