Running WebPageTest from the CLI

Sometimes you need to run a web page load speed test without a UI. And of course, as usual, there’s a node module for that.

I’ve been checking out https://www.webpagetest.org/. They provide a web interface as well as an API: https://sites.google.com/a/webpagetest.org/docs/advanced-features/webpagetest-restful-apis.

Besides those two sources, there’s also a node module for running the tests:

https://github.com/marcelduran/webpagetest-api

Here’s how to get it working (running on macOS at the moment).

  1. git clone git@github.com:marcelduran/webpagetest-api.git
  2. cd webpagetest-api
  3. docker build -t webpagetest-api .
  4. Get the API Key from webpagetest.org/getkey.php
  5. Run the test: docker run -it --rm webpagetest-api -k YOURAPIKEY test https://twitter.com/marcelduran
The command responds with the test details as JSON:
{
  "statusCode": 200,
  "statusText": "Ok",
  "data": {
    "testId": "190206_19_f4ed62b215aa61ed44085cdf0cac1779",
    "ownerKey": "c65d1fd5da5233aada1c4b1064bf39d05b4a81c4",
    "jsonUrl": "https://www.webpagetest.org/jsonResult.php?test=190206_19_f4ed62b215aa61ed44085cdf0cac1779",
    "xmlUrl": "https://www.webpagetest.org/xmlResult/190206_19_f4ed62b215aa61ed44085cdf0cac1779/",
    "userUrl": "https://www.webpagetest.org/result/190206_19_f4ed62b215aa61ed44085cdf0cac1779/",
    "summaryCSV": "https://www.webpagetest.org/result/190206_19_f4ed62b215aa61ed44085cdf0cac1779/page_data.csv",
    "detailCSV": "https://www.webpagetest.org/result/190206_19_f4ed62b215aa61ed44085cdf0cac1779/requests.csv"
  }
}
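
The testId in the response can be used to fetch the results later. If I remember the webpagetest-api CLI correctly, it also has a results command; something like this (command names and flags may vary between versions, so treat it as a sketch):

# Fetch the results of a finished test by its testId (reusing the id from the response above)
docker run -it --rm webpagetest-api -k YOURAPIKEY results 190206_19_f4ed62b215aa61ed44085cdf0cac1779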

Report Portal Installation

Introduction

We also have a need to analyse the stability and usability of our tests. We monitor our systems here and there and check the code coverage of the development code. Funny when you think about it, but test automation is part of the development process and should be treated as such. Even more so if you’re the developer who runs the code.
I have been running all sorts of analytics based on test executions in Jenkins, written parsers in Python and built dashboards on the ELK stack. And boy, it has been fun 😀
Then I heard about a tool called Report Portal (http://reportportal.io) on a podcast by Joe Colantonio and decided to take a look. I am nowhere near ready to draw any conclusions, but after brief usage it does look promising.

Running locally

Report Portal has a demo environment you can try with your tests, but it really comes into its own if you host it yourself. They provide instructions for that at http://reportportal.io/download. I have followed the instructions and am listing them here. To be honest, they do a brilliant job already and my notes are mostly for myself: to recall the process later and, sure, to get some clicks on my blog, too 😀
First of all, you’ll need to have `docker` installed. After that everything should be easy.
NOTE: On macOS (at least) you should allocate at least 5 GB of memory to the Docker process. I had only 2 GB and everything looked fine from the process side, except that I could not log in. I recall running the same setup on Linux without changing anything and it just worked.
  •  mkdir ~/reportportal
  • cd ~/reportportal
  • Download the docker-compose file: wget https://github.com/reportportal/reportportal/raw/master/docker-compose.yml
  • mkdir -p data/elasticsearch 
  • chmod g+rwx data/elasticsearch
  • sudo chgrp 1000 data/elasticsearch (here’s a difference: on macOS I had to run chgrp with sudo)
  • docker-compose -p reportportal up -d --force-recreate
When the compose is ready, you should be able to browse to the Report Portal UI: http://localhost:8080/ui/
Log in using default/1q2w3e.
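
If the UI does not come up, the standard docker-compose commands are handy for checking what the stack is doing (assuming the same project name as above):

# List the Report Portal containers and their state
docker-compose -p reportportal ps

# Follow the logs of the whole stack (Ctrl-C to stop)
docker-compose -p reportportal logs -f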

Robot Framework integration

In order to get the Robot Framework results into Report Portal, you need to use a separate library: GitHub – reportportal/agent-Python-RobotFramework. I will create a new blog post on this and add more information there.
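
Until that post exists, here is a rough sketch of how I understand the agent is wired in: it is installed from PyPI and used as a Robot Framework listener. The package and variable names below are how I recall them from the agent’s README and may differ between versions:

# Install the agent (assumed PyPI package name)
pip install robotframework-reportportal

# Run the tests with the Report Portal listener; the RP_* variables point to
# your instance, project and the API token (UUID) from your user profile
robot --listener robotframework_reportportal.listener \
      --variable RP_UUID:your-api-token \
      --variable RP_ENDPOINT:http://localhost:8080 \
      --variable RP_LAUNCH:smoke \
      --variable RP_PROJECT:your_project \
      tests/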

jmeter/Blazemeter integration

Basically all we have to do is generate a junit.xml report from the test run, zip it and then send it to the Report Portal API. I will create a new blog post on this and add more information there.
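
As a sketch of what I mean, Report Portal has a launch import endpoint that accepts a zip of JUnit XML files. Something along these lines should do it (the endpoint path and auth header are from memory, so double-check them against your instance’s API docs; the project name and token are placeholders):

# Zip the JUnit report produced by the jmeter/Blazemeter run
zip results.zip junit.xml

# Push the zip to the Report Portal import API
curl -X POST "http://localhost:8080/api/v1/your_project/launch/import" \
     -H "Authorization: bearer your-api-token" \
     -F "file=@results.zip"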

jest-puppeteer integration

GitHub – reportportal/agent-js-jest: ReportPortal agent for the Javascript Jest unit test framework. I will create a new blog post on this and add more information there.
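
As a placeholder until that post, the agent can be installed straight from the GitHub repository and then added to the reporters array of the Jest configuration (the npm package name has varied, so I am installing from the repo here; the endpoint, token and project settings go into the reporter options):

# Install the Report Portal Jest agent directly from GitHub
npm install --save-dev github:reportportal/agent-js-jest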

There have been Robots (TM)

 

We had what I suppose was the first Robot Framework-related MeetUp in Sweden yesterday evening. It was held at the premises of Fareoffice Car Rental Solutions AB in Kungsholmen.

There weren’t a lot of participants, but there were enough. The company and the office are small, so it was actually a good thing to have a ‘lagom’ crowd, which in this case was 9 from outside the company, me, Pekka Klärck and 4 from Fareoffice.

First Pekka gave an introduction and background talk about Robot Framework. It was actually good to watch (for me), as I learned a few new things, once again. Even though he had given almost the same talk the day before, during a Robot Framework workshop/training for Fareoffice, there were still a few new topics to cover. Besides that, we talked about general usage of plug-ins in IDEs, RoboCon and running the tests with different setups.

The second talk was my trial by fire. I had a presentation about Robots in Containers. It went surprisingly smoothly. I had a few technical glitches, but I knew they were there, so the ghost of the demo gods did not ruin my presentation.

Of course there were a few things I’d do differently. First of all, it was too quick. Secondly, I could’ve concentrated more on the practical execution of the steps I was describing with pictures; a live demo is always better than a preserved slide, even if the slide deck is done with Prezi.

So it goes. I was tense and nervous, and made all the typical mistakes a Finn can make when presenting; reflecting on my MeetUp arrangements and presentation, I picked out all the mistakes I thought I had made. That is pretty much a built-in feature for those of us who grew up in Finland (in the 80’s at least). Luckily, you can always trust a Swede to be there and comfort you. Empathy in Sweden is a strong and positive thing. Thank you to all the participants for being there and for the support.

The best part of the MeetUp was the people and the discussions we had. It was great to see that there were others using Robot Framework here in Sweden.

I was also asked about running Selenium tests on IE/Edge, and we ended up showing how we do it: running them from Jenkins and in BrowserStack. But that is not running on premise, which meant I could not give a straight answer, and that bugged me a bit, as usual. It started to feel like a challenge and I might want to pick it up on the next creative Friday (a once-a-month tradition at Fareoffice): spin up a Windows server and install Zalenium on it. Working with Windows would be a worthy challenge for me, avid Linux user that I am, and could serve as a good reminder that even operating systems should be seen as tools. And every tool has its purpose.

In the end, we decided to create a MeetUp group and have the next meeting at Eficode’s premises in Stockholm. There were also a few ideas about where to host RoboCon in 2019. All in all I am really happy I decided to push this one through; believe me, I had my doubts beforehand 😀

 

Logstash filter for Robot Framework

We are currently working on a CI/CD setup at work. As part of that, the tests need to be able to run as part of the pipeline.
Generally, the pipeline consists of steps/stages defined with a Jenkins pipeline. The benefit of this is that the whole process and the definition of the stages (deploy, test etc.) are done by the developer team, stored in the team’s own repository and therefore also controlled by the team. That is definitely a great step towards the teams having more freedom and more responsibility when it comes to delivering applications/solutions to production. Needless to say, it will also affect the visibility of quality and the need for tests.

Plus, it will definitely keep the test team on their toes. Keeping ahead becomes a really neat challenge 😀

Now, that also puts more requirements on the testing tools. First of all, the tools we use should be able to run from containers, which means that everything is dockerized. Well, the test code itself is in the repository, but the engines running the tests are in containers.
We use, whenever we can, general docker images from Docker Hub.
Sometimes it won’t work like that. So we end up re-inventing the wheel.

That was the case with Logstash. We need to be able to filter Robot Framework’s output.xml and send it to Elasticsearch. There were two possibilities for doing that: Logstash filtering or XML parsing. The XML parsing remains to be done (I am going to do it), but I did manage to create the Logstash filter. It is not completely flawless, not even the most elegant, but at the moment it seems to be working as it should. To be honest, I was aiming to have one more blunt instrument for our test needs.

The filter:

robot-results.conf

input {
  file {
    path => [ "/output.xml" ]
  }
}

filter {
  xml {
    source => "message"
    store_xml => true
    target => "doc"
    xpath => [
      "msg", "doc.msg",
      "arguments", "doc.args",
      "kw", "doc.keyword",
      "status", "doc.status",
      "status/@status", "doc.test.status",
      "robot", "doc.robot",
      "errors", "doc.errors",
      "statistics", "doc.statistics",
      "suite", "doc.suite",
      "tag", "doc.tag",
      "total", "doc.total",
      "/kw", "leftovers",
      "/arguments", "leftovers"
    ]
  }
}

output {
  elasticsearch {
    hosts => ["elastic"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

Dockerfile:

FROM logstash

ADD robot-results.conf /etc/logstash/conf.d/robot/results.conf
CMD logstash -f /etc/logstash/conf.d/robot/

Running the container:

docker run --add-host=elastic:127.0.0.1 janmat/logstash-robot
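
One thing worth noting: the filter reads /output.xml inside the container, so the Robot Framework output needs to end up there somehow. In a sketch like this you could simply mount it when starting the container (adjust the host path to wherever your output.xml lives):

docker run --add-host=elastic:127.0.0.1 \
           -v "$(pwd)/output.xml:/output.xml" \
           janmat/logstash-robot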

 

Run docker without sudo in Fedora 25

Sometimes things get weird. One could imagine that the documentation on Docker (https://docs.docker.com/engine/installation/linux/fedora/) would be up to date. And when it comes to the installation itself, it actually is.

The problem is/was that I was forced to run docker with sudo (the reasons are explained on both pages linked here, I’m not going to repeat them), and while both sites gave a solution, the docs.docker.com instructions did not actually work. So I googled a bit more:
https://developer.fedoraproject.org/tools/docker/docker-installation.html

According to developer.fedoraproject.org, you’ll have to run the following two commands in order to run docker without sudoing.
Basically you’ll add a docker group and add yourself to it.

$ sudo groupadd docker && sudo gpasswd -a ${USER} docker && sudo systemctl restart docker
$ newgrp docker
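
After logging out and back in (or using the newgrp above), you can verify that the group change took effect by running a container without sudo:

$ docker run hello-world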

Features of a (good) test environment

Easy to deliver

The test environment should be deliverable on request with as little input as humanly possible. A click of a button, an SMS sent or a commit to a VCS should be able to act as a trigger.

Steady

The test environment should be as steady and as reliable as possible, so that we can trust that failing tests are not caused by faults in the test environment itself.

Open Source

I see the benefits of open source tools and projects as far greater than what you get from plain commercial tools. Even though you might end up paying for support and building up the knowledge, it still pays off in the end.

Easy to Set-up

The test environment should be able to be set up by just pressing a button. If external information is needed, a set of variables should be able to be entered automatically.

Easy to Reset

You should be able to reset the test environment to a desired level of functionality at any point in time in order to re-execute the failed tests. Furthermore, it would be great to have the possibility to run deeper analytical tests automatically in case of failure.

Easy to Monitor

The test environment should provide an interface where you can easily see the status of the tests, historical metrics and also the status of the test environment itself.

Provides test data

The test environment should contain a test data generator which can, on request, fill the databases with relevant test data. It should have a simple interface for receiving test data requests and responding with the requested data. The test data generator should not affect the performance of the item actually being tested.

Provides test results and metrics

The test environment should be able to report its own status. And since it consists of the whole system plus the underlying parts, it should be able to deliver that status to users and stakeholders as effortlessly as possible.

You could, for example, have a status interface: an interface that provides the status of the test environment in a single view.

There should be an interface – a web interface would nowadays be enough – that gives you all the relevant information about the tested items. And all this at a glance.

For the event monitoring, you could use ELK (Elasticsearch, Logstash & Kibana).

For the test monitoring, you could use the wall display in Jenkins. That at least shows the status of the executed tests. What I am missing there is the status of individual test cases. One way to get them would be to build the tests in Jenkins per test case and add separate views per test suite.

But even that would tell the status only per test suite. What if you had several test suites in the project, 4/5 of them were executed, 1/5 were skipped and 10% of the executed test cases failed? How do you display that?

On top of those you would need to know the other metrics: load test results, the combined result of different test suites, the status of all available test suites and test cases combined with today’s test case executions. The list seems endless.

And all in a single look.

Or would it be enough to just have one page that indicates the status of the test environment together with the status of the test results, with two different indicators?

But what should the indicators then be? Traffic lights? Metrics? Curves? Flowcharts? Or a combination of them?

What I’d like to know is whether such a system already exists, or whether I should start creating it myself. Not that I haven’t reinvented the wheel before (trust me, I have created my share of 3-sided ones), but it would help if I knew I didn’t have to.