Now that we've covered quite a few topics regarding messaging, it's about time we take a look at a way to monitor our services and the communication between them. And even if you're not into messaging itself, this post might be for you. Let's visit Prometheus and Grafana.
Together, we went through a few use cases and advanced patterns for integrating messaging concepts into an ASP.NET Core application, especially with MassTransit. Maybe you've played around with the sample applications, or you are already using a messaging solution in your code. Whatever the case, there will come a time when you actually want to run this stuff in a production environment. In this article, I want to make sure you think about a proper monitoring solution for your app before it's too late.
As described in the docs, MassTransit supports several ways to achieve observability out of the box. Apart from OpenTelemetry and Application Insights, Prometheus is one of the easiest options to set up, yet it remains highly customizable to your needs. We will take a look at setting this up locally using Docker containers.
First, we'll need to add two NuGet packages to our solution, specifically to our WebAPI project:
dotnet add package prometheus-net.AspNetCore
dotnet add package MassTransit.Prometheus
These commands install the Prometheus client library (including the ASP.NET Core metrics endpoint) and the MassTransit integration on top of it.
Next, in the setup code for MassTransit, we need to configure the transport to capture the bus metrics:
busConfigurator.UsingInMemory((context, configurator) => configurator.UsePrometheusMetrics(serviceName: "my-service"));
Note that I am using an in-memory transport here. This could well be Amazon SQS or any other transport.
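To put that line in context, here is a minimal sketch of how the whole registration might look under the minimal hosting model. HeartBeatConsumer is the consumer you'll see in the metrics excerpt further down; everything else is an assumption about what your setup might look like, not the sample's exact code:

// Program.cs (sketch) - wiring Prometheus metrics into the MassTransit registration.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMassTransit(x =>
{
    // Assumed consumer; replace with your own consumer registrations.
    x.AddConsumer<HeartBeatConsumer>();

    x.UsingInMemory((context, configurator) =>
    {
        // Records the mt_* metrics (consume counts, durations, etc.) labeled with this service name.
        configurator.UsePrometheusMetrics(serviceName: "my-service");
        configurator.ConfigureEndpoints(context);
    });
});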
Lastly, we need to map an endpoint that exposes the Prometheus metrics:
app.UseEndpoints(endpoints => endpoints.MapMetrics());
And that is basically it.
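Assuming the registration sketch from above sits at the top of Program.cs, the rest of the pipeline could look roughly like this (again, a sketch rather than the sample's exact code):

// Program.cs (sketch, continued) - exposing the metrics endpoint.
var app = builder.Build();

app.UseRouting();
// MapMetrics comes from prometheus-net.AspNetCore and serves the metrics at /metrics by default.
app.UseEndpoints(endpoints => endpoints.MapMetrics());

app.Run();

On .NET 6 and later you can also call app.MapMetrics() directly; either way, prometheus-net serves the metrics at /metrics by default.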
As soon as you start your application and navigate to the newly created endpoint (in my case http://localhost:5160/metrics), you'll see a ton of informative metrics at your disposal. Below is just an excerpt:
# TYPE mt_consumer_in_progress gauge
mt_consumer_in_progress{service_name="my-service",message_type="HeartBeat",consumer_type="HeartBeatConsumer"} 10
# HELP mt_delivery_duration_seconds Elapsed time between when the message was sent and when it was consumed, in seconds.
# TYPE mt_delivery_duration_seconds histogram
mt_delivery_duration_seconds_sum{service_name="my-service",message_type="HeartBeat",consumer_type="HeartBeatConsumer"} 0.0316236
mt_delivery_duration_seconds_count{service_name="my-service",message_type="HeartBeat",consumer_type="HeartBeatConsumer"} 1
mt_delivery_duration_seconds_bucket{service_name="my-service",message_type="HeartBeat",consumer_type="HeartBeatConsumer",le="180"} 1
mt_delivery_duration_seconds_bucket{service_name="my-service",message_type="HeartBeat",consumer_type="HeartBeatConsumer",le="240"} 1
mt_delivery_duration_seconds_bucket{service_name="my-service",message_type="HeartBeat",consumer_type="HeartBeatConsumer",le="300"} 1
mt_delivery_duration_seconds_bucket{service_name="my-service",message_type="HeartBeat",consumer_type="HeartBeatConsumer",le="+Inf"} 1
# HELP dotnet_total_memory_bytes Total known allocated memory
# TYPE dotnet_total_memory_bytes gauge
dotnet_total_memory_bytes 6119128
Each refresh will update these numbers. They are now ready to be consumed by any data scraper.
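In case you are wondering where the HeartBeat numbers in the excerpt come from: something in the application has to publish messages for the consumer to pick up. Purely as an illustration, and with the message shape, interval, and hosted service being my own assumptions rather than the sample's code, a periodic publisher could look like this:

using System;
using System.Threading;
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.Hosting;

public record HeartBeat(DateTime Timestamp);

// Hypothetical background service that publishes a HeartBeat every few seconds,
// giving the mt_* consumer metrics something to count.
public class HeartBeatPublisher : BackgroundService
{
    private readonly IBus _bus;

    public HeartBeatPublisher(IBus bus) => _bus = bus;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            await _bus.Publish(new HeartBeat(DateTime.UtcNow), stoppingToken);
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

Registered via builder.Services.AddHostedService<HeartBeatPublisher>(), this keeps the gauges and histograms above moving.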
Now, in your solution folder, create the following files and folders:
./
- grafana/
  - config.monitoring
- prometheus/
  - prometheus.yml
- docker-compose.yml
Let's start with the config for Prometheus, prometheus.yml. It should contain the following lines:
global:
  scrape_interval: 10s
  external_labels:
    monitor: 'masstransit'

scrape_configs:
  - job_name: 'masstransit'
    scheme: http
    enable_http2: false
    static_configs:
      - targets: ['host.docker.internal:5160']

  - job_name: 'prometheus'
    enable_http2: false
    static_configs:
      - targets: ['localhost:9090']
This is the scraping configuration for our Prometheus instance. We are configuring it to scrape two different endpoints. One is Prometheus itself, which will be available for us on localhost:9090 later on. The other is our application's metrics endpoint, which the container can only reach through the host machine's network via host.docker.internal and, in my case, on port 5160. Also, we are telling Prometheus to scrape the endpoints every 10 seconds.
Next, we need to fill the docker-compose.yml file. For now, we will only introduce the parts necessary for Prometheus.
version: '3.7'

volumes:
  prometheus_data: {}

networks:
  back-tier:

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - 9090:9090
    networks:
      - back-tier
    restart: always
In this docker-compose file, we are setting up a single service based on the latest Prometheus image and configuring it to use the config file we created previously.
We are now ready to go: navigate to this directory from within a terminal of your choice and run docker compose up -d.
With that in place, make sure that your web application is running and navigate to localhost:9090 in your browser. You will be greeted with the Prometheus web UI.
So far, that is pretty empty, but we can already query the scraped data for the Prometheus metrics we saw on the metrics endpoint of our application. If, for example, we enter mt_consume_total in the query field and hit Graph, Prometheus plots the metric over time.
So that's already pretty useful. You can query for all of the variables you saw in the metrics endpoint. But let's take this a bit further: What if we want to have automated dashboards to visualize different metrics for us? Well, Grafana's got you covered.
The first thing we'll need to do is edit the config.monitoring file in the grafana folder. It only needs to contain the following lines:
GF_SECURITY_ADMIN_PASSWORD=foobar
GF_USERS_ALLOW_SIGN_UP=false
Then, we need to extend our docker-compose.yml to include Grafana as a second service, like so:
version: '3.7'

volumes:
  prometheus_data: {}
  grafana_data: {}

networks:
  front-tier:
  back-tier:

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - 9090:9090
    networks:
      - back-tier
    restart: always

  grafana:
    image: grafana/grafana
    user: "472"
    depends_on:
      - prometheus
    ports:
      - 3000:3000
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    env_file:
      - ./grafana/config.monitoring
    networks:
      - back-tier
      - front-tier
    restart: always
Again, run docker compose up -d to provision the new service stack. Then, navigate to localhost:3000 in your browser.
On the login page, enter admin as the username and foobar as the password (the latter comes from the config.monitoring file). Here we are, welcome to Grafana.
At this point, of course, nothing has been set up yet, but this can all be done from within the Grafana UI. The most important step is to register the Prometheus instance as a data source for Grafana. Head over to the configuration section and hit Add new data source. Now, choose Prometheus from the list and set the correct host information. Again, since Grafana is running inside an isolated Docker network, we cannot simply use localhost here; with the compose file above, Grafana can reach Prometheus either via the shared network at http://prometheus:9090 or through the host machine at http://host.docker.internal:9090.
When you are done, hit save and test. Once that succeeds, we are ready to create dashboards for our application(s).
Let's create a simple visualization. From the menu, head over to the Dashboards section, create a new dashboard, and add a new panel. The configuration page can look a bit daunting at first, but it gets easier once you learn what to look for. Let's configure a query for the number of consumers in progress, i.e. the mt_consumer_in_progress metric from the excerpt above.
Save that panel and the dashboard, and maybe add a few more. If your applications have been running for some time, you will end up with a dashboard full of live graphs.
And that is only the basics. Grafana is not just a visualizer but a data analytics tool, and it offers a ton of operations that you can apply to any query to visualize exactly the data you need in a way that is easy to comprehend. And quite frankly, that is an art in itself.
In this beginner's tutorial, we explored how to set up Prometheus and Grafana using Docker to monitor an example application that uses MassTransit. Both of these tools are very powerful, so it is well worth taking a look at some of the more advanced tutorials out there. If you are looking for an analysis toolset for your applications, this might be just the thing you need. Feel free to have a look at the full sample code available on GitHub.