This repository contains a containerized scenario for running performance experiments that use Prometheus as a monitoring data source. The results of these experiments help us validate the ingestion stage of our proposed monitoring data aggregation system.
The target data source for our experiments is the Prometheus TSDB. The following figure depicts an NGSI-LD information model instance of the context information associated with a Prometheus data source.
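For illustration purposes only, the sketch below creates a minimal TimeSeries entity by hand against the Scorpio endpoint used later in this README. The entity identifier and the "value" attribute are hypothetical placeholders, not taken from the actual information model in the figure:

# Hypothetical minimal TimeSeries entity; the id and the "value" attribute
# are illustrative placeholders, not the repository's actual model.
curl --location --request POST 'http://localhost:9090/ngsi-ld/v1/entities/' \
--header 'Content-Type: application/json' \
--header 'Link: <http://context-catalog:8080/context.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"' \
--data-raw '{
    "id": "urn:ngsi-ld:TimeSeries:example",
    "type": "TimeSeries",
    "value": {
        "type": "Property",
        "value": 1,
        "observedAt": "2021-01-01T00:00:00Z"
    }
}'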
Our proposed data aggregation system is loosely coupled from the data sources. Thus, we must implement data collection mechanisms that require no intervention from the data source owner. In the case of Prometheus, the only way to collect metrics is to query the Prometheus REST API. To this end, we propose implementing an HTTP polling mechanism that collects metrics from the Prometheus REST API at a particular pace. Once the metrics have been collected from Prometheus, the data must be transformed into a format consumable by the NGSI-LD Context Broker. The Apache NiFi framework has been chosen to fulfil this role. The following figure displays a scenario where NiFi collects data from Prometheus, transforms the metric data into the NGSI-LD model shown above, and sends the encoded data to the Context Broker.
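In practice, polling boils down to periodically issuing requests such as the one below against the Prometheus HTTP API. The prometheus hostname and the PromQL expression are illustrative assumptions; the actual endpoint depends on the compose configuration:

# Instant query against the Prometheus HTTP API; "prometheus:9090" and the
# PromQL expression "up" are illustrative assumptions.
curl -G 'http://prometheus:9090/api/v1/query' --data-urlencode 'query=up'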
On the other hand, an NGSI-LD subscription has been created on behalf of a consumer, also represented by NiFi. Therefore, whenever metrics are created or updated within the Context Broker, an NGSI-LD notification is sent to NiFi. To consume and process such notifications, NiFi has to implement a mechanism able to receive HTTP messages.
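For reference, notifications follow the NGSI-LD Notification structure (id, type, subscriptionId, notifiedAt, and a data array carrying the affected entities). The sketch below simulates one by hand against NiFi's listener; the localhost:18080 port mapping and the abridged entity payload are assumptions:

# Simulate an NGSI-LD notification towards NiFi's HTTP listener.
# Assumes the listener is mapped to localhost:18080; the entity in "data" is abridged.
curl --location --request POST 'http://localhost:18080/notify' \
--header 'Content-Type: application/json' \
--data-raw '{
    "id": "urn:ngsi-ld:Notification:example",
    "type": "Notification",
    "subscriptionId": "urn:ngsi-ld:Subscription:TimeSeries:scorpio-subs",
    "notifiedAt": "2021-01-01T00:00:00Z",
    "data": [{"id": "urn:ngsi-ld:TimeSeries:example", "type": "TimeSeries"}]
}'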
To run the experiments, we have built a Docker-based testbed. We leverage docker-compose to ease setting up the components of the testbed as microservices. The default scenario runs the NGSI-LD Scorpio Broker, although it can be extended to include other NGSI-LD Context Broker implementations.
- Docker (Tested with version 19.03.13)
- Docker-compose (Tested with version 1.27.4)
- Start the Scorpio prototype by running docker-compose:
docker-compose -f scorpio-compose.yml up
If you prefer to run the prototype in the background (the Kafka or Scorpio logs can be noisy), use the following command:
docker-compose -f scorpio-compose.yml up -d
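To check that every service came up correctly, you can list their state and tail the logs of individual services. The scorpio service name below is an assumption; check scorpio-compose.yml for the actual names:

# List the state of every service in the scenario
docker-compose -f scorpio-compose.yml ps
# Follow the logs of a single service; the "scorpio" service name is an assumption
docker-compose -f scorpio-compose.yml logs -f scorpio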
- To subscribe to NGSI-LD TimeSeries entities on behalf of NiFi, run the following cURL command:
curl --location --request POST 'http://localhost:9090/ngsi-ld/v1/subscriptions/' \
--header 'Content-Type: application/json' \
--header 'Link: <http://context-catalog:8080/context.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"' \
--data-raw '{
    "id": "urn:ngsi-ld:Subscription:TimeSeries:scorpio-subs",
    "type": "Subscription",
    "entities": [{
        "type": "TimeSeries"
    }],
    "notification": {
        "endpoint": {
            "uri": "http://nifi:18080/notify",
            "accept": "application/json"
        }
    }
}'
If you need to delete the subscription to NGSI-LD TimeSeries entities from NiFi, run the following cURL command:
curl --location --request DELETE 'http://localhost:9090/ngsi-ld/v1/subscriptions/urn:ngsi-ld:Subscription:TimeSeries:scorpio-subs'
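To double-check that the subscription has been registered (or removed), you can list the subscriptions currently active on the broker:

# List the subscriptions registered in Scorpio
curl --location --request GET 'http://localhost:9090/ngsi-ld/v1/subscriptions/'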
- Upload the Prometheus-Context Broker template to NiFi. Deploy the template into the canvas and follow the instructions that are included. Note that the parameter context must be configured with the Scorpio connection details.
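Once the flow is running, you can verify that TimeSeries entities are reaching the broker by querying them back (the Link header mirrors the one used when creating the subscription):

# Retrieve the TimeSeries entities stored in Scorpio
curl --location --request GET 'http://localhost:9090/ngsi-ld/v1/entities/?type=TimeSeries' \
--header 'Link: <http://context-catalog:8080/context.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"'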
- Once you are done running tests, tear the scenario down by issuing the following command (run it twice in case the execution gets stuck at some service):
docker-compose -f scorpio-compose.yml down
- Start the Orion-LD prototype by running docker-compose:
docker-compose -f orion-compose.yml up
If you prefer to run the prototype in the background, execute the following command:
docker-compose -f orion-compose.yml up -d
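As a quick liveness check, Orion-LD exposes a version endpoint (assuming the default 1026 port mapping used throughout this section):

# Query Orion-LD's version endpoint to confirm the broker is up
curl --location --request GET 'http://localhost:1026/version'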
- To subscribe to NGSI-LD TimeSeries entities on behalf of NiFi, run the following cURL command:
curl --location --request POST 'http://localhost:1026/ngsi-ld/v1/subscriptions/' \
--header 'Content-Type: application/json' \
--header 'Link: <http://context-catalog:8080/context.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"' \
--data-raw '{
    "id": "urn:ngsi-ld:Subscription:TimeSeries:orion-subs",
    "type": "Subscription",
    "entities": [{
        "type": "TimeSeries"
    }],
    "notification": {
        "endpoint": {
            "uri": "http://nifi:18080/notify",
            "accept": "application/json"
        }
    }
}'
If you need to delete the subscription to NGSI-LD TimeSeries entities from NiFi, run the following cURL command:
curl --location --request DELETE 'http://localhost:1026/ngsi-ld/v1/subscriptions/urn:ngsi-ld:Subscription:TimeSeries:orion-subs'
- Upload the Prometheus-Context Broker template to NiFi. Deploy the template into the canvas and follow the instructions that are included. Note that the parameter context must be configured with the Orion-LD connection details.
- Once you are done running tests, tear the scenario down by issuing the following command (run it twice in case the execution gets stuck at some service):
docker-compose -f orion-compose.yml down
- Start the Kafka prototype for the AVRO experiments by running docker-compose:
docker-compose -f kafka-compose.yml up
If you prefer to run the prototype in the background (the Kafka logs can be noisy), use the following command:
docker-compose -f kafka-compose.yml up -d
- Upload the Prometheus-Kafka AVRO template to NiFi. Deploy the template into the canvas and follow the instructions that are included.
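To eyeball the records NiFi publishes, you can attach a console consumer from inside the Kafka container. The kafka service name and the prometheus-metrics topic are assumptions (check kafka-compose.yml and the template instructions for the actual values), and AVRO-encoded records will not be human-readable without the matching deserializer:

# Consume records from the topic NiFi publishes to; service and topic names are assumptions
docker-compose -f kafka-compose.yml exec kafka \
    kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic prometheus-metrics --from-beginning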
- Once you are done running tests, tear the scenario down by issuing the following command (run it twice in case the execution gets stuck at some service):
docker-compose -f kafka-compose.yml down
- Start the Kafka prototype for the NGSI-LD experiments by running docker-compose:
docker-compose -f kafka-compose.yml up
If you prefer to run the prototype in the background (the Kafka logs can be noisy), use the following command:
docker-compose -f kafka-compose.yml up -d
- Upload the Prometheus-Kafka NGSI-LD template to NiFi. Deploy the template into the canvas and follow the instructions that are included.
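To see which topics the flow has created, you can list them from inside the Kafka container. The kafka service name is an assumption; on older Kafka versions, replace --bootstrap-server with --zookeeper zookeeper:2181:

# List the Kafka topics available in the scenario; the service name is an assumption
docker-compose -f kafka-compose.yml exec kafka \
    kafka-topics.sh --bootstrap-server localhost:9092 --list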
- Once you are done running tests, tear the scenario down by issuing the following command (run it twice in case the execution gets stuck at some service):
docker-compose -f kafka-compose.yml down
This repository provides a Jupyter Notebook to process and visualize the results of the experiments. For more information on how to install and run Jupyter Notebook, please visit https://jupyter.org/install.html.
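As a quick start, assuming a working Python environment, the classic notebook interface can be installed and launched with pip:

# Install and launch Jupyter Notebook
pip install notebook
jupyter notebook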