Ultimate JMeter Kubernetes Starter Kit

Romain Billon
FAUN — Developer Community 🐾
8 min read · May 14, 2021


A few months ago, I wrote an article about a dockerized JMeter starter kit: a GitHub template repository that anybody can use to easily start a new performance project. The downside of Docker and Docker Compose is that everything runs on a single host. You can have multiple injectors for your JMeter distributed tests, but only on one server. Today I propose the same principle, a GitHub template repository, but this time with a Kubernetes approach.

Why Kubernetes? Because Kubernetes is an orchestrator: it manages a cluster of one to many nodes in which you can deploy your containerized applications. So where our problem was that we could deploy our JMeter injectors only on a single node, we can now spread them much wider. Kubernetes also offers easy ways to dynamically deploy services at scale to meet load-testing needs.

The benefits of deploying JMeter inside Kubernetes:

  • A virtually unlimited number of injectors (with node auto-scaling)
  • Can be deployed in multiple regions of the globe
  • Easy to deploy and scale
  • Can easily be deployed in ephemeral clusters
  • Can be deployed inside your existing Kubernetes cluster, right beside your applications, avoiding the internet round trip (just avoid using the same nodes for the app and the load injectors)
  • Runs on any cloud provider (no vendor lock-in)

Check the repo here: https://github.com/Rbillon59/jmeter-k8s-starterkit

Why should you use this starterkit?

I’ve added a bunch of highly valuable services alongside JMeter that will help you start load testing quickly and efficiently, with very good observability of your agents.

  • Each Kubernetes node is monitored by a Telegraf pod deployed as a DaemonSet, reporting all metrics to an InfluxDB instance that centralizes the gathered metrics.
  • Each launched JVM, JMeter or WireMock, is monitored by a Telegraf sidecar container that gathers JVM metrics through Jolokia and sends them to InfluxDB.
  • InfluxDB is also used by JMeter to store live performance test results, so you can follow the ongoing load test.
  • The InfluxDB database is backed by a persistent volume, so you should not lose any data.
  • JMeter instances are launched as Jobs so they complete only once, instead of looping through the test. Kubernetes automatically frees the used resources when the Job completes, and I integrated a cleaner that deletes the Job entirely.
  • A mock service is deployed directly inside the cluster so you can focus on your targeted app and not have to manage third-party dependencies. The mock used is WireMock because it’s an awesome tool; I wrote an article on how to use it and deploy it inside Kubernetes.
  • The mock service sits behind a horizontal pod autoscaler, which scales the deployment when needed.
  • A Grafana instance is deployed to graph all gathered metrics. Four dashboards are included: node monitoring, JVM monitoring, Kubernetes monitoring, and JMeter live reporting.

How to use this starter kit?

Step 1: Write a JMeter scenario!

Let’s say I just deployed an application into the serverless environment of Scaleway. Woooo!

Serverless environment deployed on Scaleway

The deployed application is an API that answers on two endpoints: /static and /dynamic/whatever

Let’s open JMeter.

First, we will create a new User Defined Variables element in which we centralize the JMeter properties passed at runtime through the .env file, with some default values.

User defined variables with runtime properties
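As a sketch, each variable can read a JMeter property with a fallback default through the __P function (the names below, like host and threads, are illustrative assumptions, not names mandated by the kit):

```
host      ${__P(host,localhost)}
threads   ${__P(threads,1)}
duration  ${__P(duration,60)}
rampup    ${__P(rampup,10)}
```

At run time, the values can then be overridden by passing -Jhost=… -Jthreads=… style properties to JMeter.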

Then add an HTTP Request Defaults element to avoid setting the variables every time we add an HTTP sampler:

http request default centralizing the endpoint configuration
Http samplers from JMeter

Add two requests attacking the serverless application on /static and /dynamic/…

Also add a request that would normally reach a third-party dependency but will instead use the embedded mock service.

JMeter HTTP sampler definition to the embedded mock service of Kubernetes

Now let’s add a bit of live monitoring to see what is going on during the test. Add a Backend Listener in JMeter and set the InfluxDB URL as follows (to use the one in the cluster): http://influxdb:8086/write?db=telegraf

InfluxDb listener in JMeter configured to use the embedded Kubernetes service

Finally, set the throughput of the thread group to 10 requests per second.
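Keep in mind that every slave executes the full test plan, so the aggregate load multiplies by the number of injectors. A quick back-of-envelope sketch (plain arithmetic, not part of the kit):

```python
def total_throughput(per_injector_rps: float, injectors: int) -> float:
    """Aggregate request rate of a distributed JMeter test:
    each injector runs the whole thread group, so rates add up."""
    return per_injector_rps * injectors

# 10 req/s in the plan, later run with 20 injectors:
print(total_throughput(10, 20))  # -> 200.0
```

So the 10 req/s configured here becomes 200 req/s once we scale to 20 injectors in step 4.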

OK, that’s it. Save your scenario under the name k8s-load-test.jmx.

The overall scenario should look like this:

JMeter complete example scenario

Step 2: Create a repository from the template and put your JMX inside

Head to the starter kit repository and hit the “Use this template” button. Create a repo from it and clone it locally. Now create a folder named k8s-load-test inside the scenario folder.

The folder name must match the JMX name, without the .jmx extension.

The file tree of the repo is the following:

+-- scenario
|   +-- dataset
|   +-- module
|   +-- k8s-load-test
|   |   +-- k8s-load-test.jmx
|   +-- .env

Update the .env accordingly with the parameters of your choice.
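For illustration, a .env could look like the following (the variable names here are hypothetical; use the ones already present in the template’s .env):

```
# Hypothetical values matching the scenario above
host=my-app.functions.fnc.fr-par.scw.cloud
protocol=https
threads=10
duration=300
```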

Step 3: Deploy JMeter to your Kubernetes cluster

I’ll assume you have a working Kubernetes cluster, a working kubectl, and the context set.

At the root of the repository, you have a folder named k8s which contains everything to deploy inside the Kubernetes cluster. If you want to change the JMeter version, open both k8s/jmeter/jmeter-master.yaml and k8s/jmeter/jmeter-slave.yaml with your favorite editor and change the JMeter image tag to the version you want; for instance, rbillon59/jmeter-k8s-base:5.4.1 is JMeter version 5.4.1 inside the Docker image. (To this day, the oldest available version is 5.2.1, but if you need another one, just ask in the comments; I only need to push a git tag to trigger the build of another version.)
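For example, the relevant part of the slave manifest looks roughly like this (an illustrative excerpt; the exact structure in the repo may differ slightly):

```yaml
spec:
  containers:
    - name: jmeter-slave
      # Change the tag to pick the JMeter version, e.g. 5.4.1
      image: rbillon59/jmeter-k8s-base:5.4.1
```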

Now deploy your stack inside your Kubernetes cluster. At the root of the repository, run the following command :

kubectl create -R -f k8s/

This will deploy the entire stack.

The principal process of both master and slave is to wait indefinitely for commands.

Step 4: Launch the test

Again, at the root of the repository, run:

./start_test.sh -n default -j k8s-load-test.jmx -i 20

For reference, the script usage is:

-j <filename.jmx> : the scenario to run
-n <namespace> : the Kubernetes namespace to use
-c : flag to split and copy CSV files if your test uses them
-m : flag to copy the fragmented JMX files present in scenario/project/module if you use Include Controllers and external test fragments
-i <injectorNumber> : scale the slave pods to the desired number of JMeter injectors
-r : flag to enable report generation at the end of the test
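To make the -c flag concrete: splitting a dataset means each injector gets its own slice of the CSV, so virtual users on different pods do not reuse the same rows. A minimal sketch of the idea (this is not the kit’s actual script, just an illustration):

```python
def split_csv_lines(lines, injectors):
    """Distribute CSV data lines round-robin across injector pods;
    chunk sizes differ by at most one line."""
    chunks = [[] for _ in range(injectors)]
    for i, line in enumerate(lines):
        chunks[i % injectors].append(line)
    return chunks

users = [f"user{i},secret{i}" for i in range(10)]
for part in split_csv_lines(users, 3):
    print(len(part))  # 4, then 3, then 3
```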

What the script does:

  • Deletes the JMeter master and slave Jobs and creates them again, to get rid of completed Jobs if the cleaner hasn’t been triggered yet.
  • Sets the slave Job parallelism to the desired number of injectors.

With the node auto-scaling feature available from most cloud providers, setting the parallelism makes the cluster resize automatically to match the requested resources.

Kubernetes pool with 1 node
Kubernetes pool scaling to the 11 requested nodes
  • Waits until the pods are Running.
  • Manages CSV splitting and upload, as well as the JMeter modules, if the corresponding flags are provided.
  • Uploads the scenario to the slave pods.
  • Installs the plugins needed by the given JMX. This is done in parallel on each slave pod to avoid taking too much time at test run.
  • Uploads the master script to the master pod and executes it.
  • Waits for all slaves to have their port 1099/tcp listening.
Waiting for slaves
  • When all pods are ready, launches the JMeter master process.

And that’s it: we just started our JMeter scenario with 20 injectors!

Step 5: Monitor the test

In your favorite browser, you can now open the Grafana instance through the external service URL obtained with:

kubectl get services

Find the grafana line and its EXTERNAL-IP. You can connect to this address to see how your test is going.

If you are testing without a LoadBalancer service, or locally, you can use the following command and then point your browser to http://localhost:3000:

kubectl port-forward <grafana_pod_id> 3000

The default login is admin and the password is XhXUdmQ576H6e7

Response time over time from Grafana

The autoscaling feature of the Scaleway Elements serverless platform is very good. Even though it is still in beta, the response times are stable after scaling.

But the mock’s response time inside the cluster is really bad:

Calls to the mock inside the cluster

Let’s dig into the monitoring: open the Jolokia 2 dashboard and filter on the wiremock pod name. Aaaand that’s it. Just look at the GC time!

GC time of the mock inside the cluster
millicores consuming of wiremock service

It is also consuming 255.9 millicores out of the 256 provided.

Let’s pimp this service a bit by upgrading the memory to 1G and the CPU to 1024 millicores.
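In Kubernetes terms, that tuning boils down to the standard resources block of the WireMock container (illustrative snippet; adjust the actual manifest in the repo):

```yaml
resources:
  requests:
    memory: "1G"
    cpu: "1024m"
  limits:
    memory: "1G"
    cpu: "1024m"
```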

Response time of the mock service

Ah, much better. This example shows that having good monitoring is MANDATORY before starting performance testing. With this stack, both the injection side and the mock are observable.

Conclusion

That’s it! You now know how to use this starter kit to set up your load tests faster. By the way, this starter kit is fully compatible with most load-testing SaaS solutions, like OctoPerf or BlazeMeter, meaning you can import the JMX you made without any modification.

If you like my work and are interested in performance testing, DevOps, Kubernetes, and cloud technologies, feel free to share and subscribe!


