Easy to Deliver
The test environment should be deliverable on request with as little input as humanly possible. A click of a button, an SMS, or a commit to the VCS should be able to act as a trigger.
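The idea above — several trigger types, one delivery action — could be sketched as a small dispatcher. The function and trigger names here are hypothetical placeholders; in practice the entry point would call whatever provisioning tool you use.

```python
# Minimal sketch of a trigger dispatcher: every supported event type maps
# to the same single provisioning entry point. All names are illustrative.

def provision_environment(source: str) -> str:
    """Stand-in for the real provisioning logic (e.g. kicking off a CI job)."""
    return f"environment delivery started (triggered by {source})"

# Each trigger kind leads to the same action, so adding a new trigger
# is just one more dictionary entry.
TRIGGERS = {
    "button": provision_environment,
    "sms": provision_environment,
    "vcs_commit": provision_environment,
}

def handle_trigger(kind: str) -> str:
    """Dispatch an incoming trigger, rejecting unknown kinds."""
    if kind not in TRIGGERS:
        raise ValueError(f"unknown trigger: {kind}")
    return TRIGGERS[kind](kind)
```

The point of the table is that the delivery logic stays in one place no matter how the request arrives.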
The test environment should be as steady and as reliable as possible, so that we can trust that failing tests are not caused by faults in the test environment itself.
I find that the benefits of open source tools and projects far outweigh what you get from plain commercial tools. Even though you might end up paying for support and for building up the knowledge in-house, it still pays off in the end.
Easy to Set-up
The test environment should be able to be set up by just pressing a button. If external information is needed, it should be possible to supply a set of variables automatically.
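One way to make the variables automatic is to read them from the environment with sensible defaults, so a plain button press still yields a complete configuration. The variable names below are invented for illustration.

```python
import os

# Hypothetical configuration variables; each has a default so that
# a zero-input setup still produces a complete configuration.
DEFAULTS = {
    "TEST_DB_HOST": "localhost",
    "TEST_DB_PORT": "5432",
    "TEST_ENV_NAME": "nightly",
}

def load_config(env=None) -> dict:
    """Resolve every needed variable, falling back to the defaults."""
    if env is None:
        env = os.environ
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}
```

Anything the user does provide overrides the default; anything missing is filled in silently.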
Easy to Reset
You should be able to reset the test environment to a desired level of functionality at any point in time in order to re-execute the failed tests. Furthermore, it would be great to have the possibility to automatically run deeper analytical tests in case of failure.
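"Reset to a desired level" can be modeled as named snapshots of the environment's state. The toy class below sketches that idea with an in-memory dict standing in for real state (databases, services); everything here is an assumption for illustration.

```python
import copy

class TestEnvironment:
    """Toy model of a resettable environment: named state snapshots."""

    def __init__(self):
        # Placeholder state; a real environment would snapshot databases,
        # VM images, container volumes, etc.
        self.state = {"db_rows": 0, "services_up": True}
        self._snapshots = {}

    def snapshot(self, level: str) -> None:
        """Record the current state under a level name, e.g. 'clean'."""
        self._snapshots[level] = copy.deepcopy(self.state)

    def reset(self, level: str) -> None:
        """Restore the environment to a previously recorded level."""
        self.state = copy.deepcopy(self._snapshots[level])
```

With snapshots per functionality level, re-running a failed test is just `reset("clean")` followed by the test itself.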
Easy to Monitor
The test environment should provide an interface where you can easily see the status of the tests, historical metrics, and the status of the test environment itself.
Provides test data
The test environment should contain a test data generator which can, on request, fill the databases with relevant test data. It should offer a simple interface for receiving requests and responding with the requested test data. The test data generator should not affect the performance of the item actually under test.
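A minimal sketch of such a generator, using SQLite as a stand-in database. The table shape and field names are made up; the seeded random generator makes the data reproducible, which helps when a failed test needs to be re-run against identical data.

```python
import random
import sqlite3

def generate_users(count: int, seed: int = 0):
    """Produce reproducible, random-looking user rows (name, age)."""
    rng = random.Random(seed)  # fixed seed -> same data on every run
    return [(f"user{i}", rng.randint(18, 90)) for i in range(count)]

def fill_database(conn: sqlite3.Connection, count: int) -> int:
    """Fill the database with generated test data; return the row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, age INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", generate_users(count))
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

In a real setup the generator would run as a separate service against the test database, precisely so that it does not steal resources from the item under test.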
Provides test results and metrics
The test environment should be able to report its own status. And since it consists of the whole system plus the underlying parts, it should deliver that status to users and stakeholders as effortlessly as possible.
You could, for example, have a status interface: an interface that provides the status of the test environment in a single view.
There should be an interface – a web interface would nowadays be enough – that gives you all the relevant information about the tested items. And all this at a glance.
For the event monitoring, you could use ELK (Elasticsearch, Logstash & Kibana).
For the test monitoring, you could use the wall display in Jenkins. That at least shows the status of the executed tests. What I am missing there is the status of individual test cases. One way to get that would be to build the tests in Jenkins per test case and add separate views per test suite.
But even that would tell the status only per test suite. What if you had several test suites in the project, 4/5 of them were executed, 1/5 was skipped, and 10% of the executed test cases had failed? How do you display that?
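Before worrying about how to display those numbers, they first have to be computed from the raw per-case results. A sketch of that aggregation, assuming an invented input shape where each suite maps to a list of case results or to `None` if the suite was skipped:

```python
def summarize(suites: dict) -> dict:
    """Aggregate per-suite, per-case results into the headline numbers.

    suites maps suite name -> list of case results ("pass"/"fail"),
    or None if the suite was skipped entirely.
    """
    executed = [cases for cases in suites.values() if cases is not None]
    cases = [result for suite in executed for result in suite]
    failed = sum(1 for result in cases if result == "fail")
    return {
        "suites_executed": len(executed),
        "suites_skipped": len(suites) - len(executed),
        "cases_total": len(cases),
        "fail_rate": failed / len(cases) if cases else 0.0,
    }
```

The scenario above (5 suites, 1 skipped, 10% of executed cases failing) then comes out directly as `suites_executed=4, suites_skipped=1, fail_rate=0.1`.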
On top of that, you would need to know the other metrics: load test results, the combined result of different test suites, the status of all available test suites and test cases, combined with today's test case executions. The list seems endless.
And all at a single glance.
Or would it be enough to just have one page that indicates the status of the test environment together with the status of the test results – with two different indicators?
But what should the indicators be, then? Traffic lights? Metrics? Curves? Flowcharts? Or a combination of them?
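Taking the traffic-light option as an example, the two indicators from above could be derived like this. The thresholds are arbitrary assumptions, not recommendations:

```python
def indicators(env_ok: bool, fail_rate: float) -> tuple:
    """Map raw status to the two one-page indicators as traffic lights.

    env_ok:    whether the test environment itself is healthy
    fail_rate: fraction of executed test cases that failed (0.0..1.0)
    """
    env_light = "green" if env_ok else "red"
    if fail_rate == 0.0:
        test_light = "green"
    elif fail_rate <= 0.05:     # arbitrary tolerance threshold
        test_light = "yellow"
    else:
        test_light = "red"
    return env_light, test_light
```

The separation matters: a red environment light tells you not to trust the test light at all, which is exactly the reliability concern raised earlier.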
What I’d like to know here is whether such a system already exists, or whether I should start creating it myself. Not that I haven’t reinvented the wheel before (trust me, I have created my share of three-sided ones), but it would help if I knew I didn’t have to.