This project depends on the formula1-telemetry library for decoding the raw UDP telemetry packets; it has to be installed in the local Maven repository first. To do so, follow the instructions in the corresponding GitHub repository.
After installing the decoding library, build all the components by running the following command.
mvn package
If you want to run the applications as containers, either locally or on Kubernetes, a Google Jib configuration is available for building the corresponding Docker images.
mvn package jib:dockerBuild
The above command builds the Docker images locally; you then have to push them manually to the registry of your choice.
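As a sketch, assuming the image names produced by the Jib build and a hypothetical `quay.io/myorg` registry organization (both are placeholders, not taken from the actual Jib configuration), pushing could look like this:

```shell
# Placeholder image name and registry organization: replace them with the
# actual image names built by Jib and your own registry.
docker tag f1-telemetry-udp-kafka:latest quay.io/myorg/f1-telemetry-udp-kafka:latest
docker push quay.io/myorg/f1-telemetry-udp-kafka:latest
```

Alternatively, Jib's `jib:build` goal can build and push in one step when the target registry is configured in the plugin.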
The overall Apache Kafka, InfluxDB, and Grafana stack can be deployed either locally or on Kubernetes. For deploying it locally, download the latest releases from the corresponding repositories or websites and follow the official instructions to run them.
- Apache Kafka: the latest release can be downloaded from here; the quickstart walks you through the deployment.
- InfluxDB: the latest release can be downloaded from here; the getting started guide helps you run it.
- Grafana: the latest release can be downloaded from here; the installation guide helps you install it.
For deploying it on Kubernetes, you can apply the Grafana and InfluxDB `Deployment` resources first.
kubectl apply -f deployment/influxdb.yaml
kubectl apply -f deployment/grafana.yaml
To access Grafana from outside the Kubernetes cluster, you can create a corresponding `Ingress` resource (or a `Route` if you are using OpenShift).
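As a minimal sketch, assuming the Grafana `Service` is named `grafana` and listens on port 3000 (names and port are assumptions; check them against `deployment/grafana.yaml`), an `Ingress` could be created imperatively:

```shell
# Assumed Service name (grafana), port (3000), and host name: adjust them
# to match your manifests and DNS setup.
kubectl create ingress grafana --rule="grafana.example.com/*=grafana:3000"

# Alternatively, for a quick local test without an Ingress controller:
kubectl port-forward svc/grafana 3000:3000
```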
The dashboards can be imported from the `deployment/dashboard` folder.
Regarding Apache Kafka, the simplest way is to use the Strimzi project; you can find all the information in the official documentation. The quick start guide shows how to install the operator using the YAML files from the latest release. Another way is to use the OperatorHub.io website, where the latest operator release is available here together with all the instructions to install it.
After installing the operator, you have to create your own `Kafka` custom resource for deploying the Apache Kafka cluster through the operator itself. Some `Kafka` custom resource examples are available in the official Strimzi repository here.
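For a quick sketch, the Strimzi quick start applies a single-node example cluster straight from the published examples; the URL below follows the pattern used in that guide, so double-check it against the current documentation before relying on it:

```shell
# Single-node example cluster from the Strimzi examples (verify the URL
# against the current Strimzi quick start before using it).
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml
```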
The Apache Kafka cluster needs to be accessible from outside the Kubernetes cluster in order to receive the telemetry data coming from the UDP-related application (which does not run on Kubernetes). To do so, it is important to configure an "external" listener (with TLS enabled if you prefer). More information is available in the configuring external listeners chapter. If the TLS protocol is enabled on the external listener, you need to get the cluster CA certificate and the corresponding password to allow the UDP to Kafka application to connect. More information is available in the configuring external clients to trust the cluster CA chapter.
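With Strimzi, the cluster CA certificate lives in a `Secret` named after the Kafka cluster. A minimal sketch of extracting it and building a truststore, assuming the cluster is named `my-cluster` (a placeholder), could look like:

```shell
# Assumed cluster name: my-cluster. The Secret below is created by Strimzi.
kubectl get secret my-cluster-cluster-ca-cert \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

# Import the CA certificate into a truststore usable by the applications
# (the storepass here is a placeholder).
keytool -importcert -alias strimzi-ca -file ca.crt \
  -keystore truststore.jks -storepass changeit -noprompt
```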
The UDP to Apache Kafka application has to run locally, or at least within the same network where the F1 2020 game is running (on your preferred console, e.g. Xbox). In this way, it gets the raw telemetry packets sent by the game over UDP and bridges them to Apache Kafka topics. The main parameters for the application can be set via the following environment variables:
- `KAFKA_BOOTSTRAP_SERVERS`: the bootstrap servers for connecting to the Apache Kafka cluster. Default is `localhost:9092`.
- `KAFKA_TLS_ENABLED`: whether TLS has to be enabled for connecting to the Apache Kafka cluster. Default is `false`. NOTE: if it is enabled but a truststore is not provided, the JVM system truststore is used by default.
- `KAFKA_TRUSTSTORE_LOCATION`: the path to the truststore containing the certificates to connect to the Apache Kafka cluster when TLS is enabled. Not set by default.
- `KAFKA_TRUSTSTORE_PASSWORD`: the password for the truststore containing the certificates to connect to the Apache Kafka cluster when TLS is enabled. Not set by default.
- `KAFKA_SASL_MECHANISM`: the SASL mechanism to be used for authentication. Only the `PLAIN` mechanism is supported. Not set by default.
- `KAFKA_SASL_USERNAME`: the username to be used when the SASL mechanism is `PLAIN`. Not set by default.
- `KAFKA_SASL_PASSWORD`: the password to be used when the SASL mechanism is `PLAIN`. Not set by default.
Other available environment variables are:
- `UDP_PORT`: the UDP port on which to listen for the raw telemetry packets coming from the F1 2020 game. Default is `20777`.
- `F1_DRIVERS_TOPIC`: the Apache Kafka topic to which `Driver` messages are sent. Default is `f1-telemetry-drivers`.
- `F1_EVENTS_TOPIC`: the Apache Kafka topic to which `Event` messages are sent. Default is `f1-telemetry-events`.
- `F1_RAW_PACKETS_TOPIC`: the Apache Kafka topic to which raw `Packet` messages are sent. Default is `f1-telemetry-packets`.
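As an example, a hypothetical configuration for a TLS-enabled external listener might look like the following (every value is a placeholder for your own environment):

```shell
# Placeholder values: point them at your own cluster and truststore.
export KAFKA_BOOTSTRAP_SERVERS=my-cluster-kafka-bootstrap:9094
export KAFKA_TLS_ENABLED=true
export KAFKA_TRUSTSTORE_LOCATION=/tmp/truststore.jks
export KAFKA_TRUSTSTORE_PASSWORD=changeit
export UDP_PORT=20777
```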
After setting the needed environment variables, you can start the application by running the following command:
java -jar udp-kafka/target/f1-telemetry-udp-kafka-1.0-SNAPSHOT-jar-with-dependencies.jar
The application starts listening on UDP, connects to the Apache Kafka cluster, and activates the Apache Camel routes that bridge the packets from UDP to the topics.
The Apache Kafka to InfluxDB application can run locally or can be deployed on Kubernetes; it depends on where the overall stack is running. The main parameters for the application can be set via the following environment variables:
- `KAFKA_BOOTSTRAP_SERVERS`: the bootstrap servers for connecting to the Apache Kafka cluster. Default is `localhost:9092`.
- `KAFKA_TLS_ENABLED`: whether TLS has to be enabled for connecting to the Apache Kafka cluster. Default is `false`. NOTE: if it is enabled but a truststore is not provided, the JVM system truststore is used by default.
- `KAFKA_TRUSTSTORE_LOCATION`: the path to the truststore containing the certificates to connect to the Apache Kafka cluster when TLS is enabled. Not set by default.
- `KAFKA_TRUSTSTORE_PASSWORD`: the password for the truststore containing the certificates to connect to the Apache Kafka cluster when TLS is enabled. Not set by default.
- `KAFKA_SASL_MECHANISM`: the SASL mechanism to be used for authentication. Only the `PLAIN` mechanism is supported. Not set by default.
- `KAFKA_SASL_USERNAME`: the username to be used when the SASL mechanism is `PLAIN`. Not set by default.
- `KAFKA_SASL_PASSWORD`: the password to be used when the SASL mechanism is `PLAIN`. Not set by default.
- `INFLUXDB_URL`: the URL of the InfluxDB HTTP REST API. Default is `http://localhost:8086`.
Other available environment variables are:
- `INFLUXDB_DB`: the InfluxDB database where measurements will be stored. Default is `formula1`.
- `F1_DRIVERS_TOPIC`: the Apache Kafka topic from which `Driver` messages are read. Default is `f1-telemetry-drivers`.
- `F1_EVENTS_TOPIC`: the Apache Kafka topic from which `Event` messages are read. Default is `f1-telemetry-events`.
- `F1_DRIVERS_AVG_SPEED_TOPIC`: the Apache Kafka topic from which messages with the processed average speed are read. Default is `f1-telemetry-drivers-avg-speed`.
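For example, to point the application at a hypothetical remote Kafka cluster and InfluxDB instance while keeping the default topics (host names are placeholders):

```shell
# Placeholder host names: adjust them to your environment.
export KAFKA_BOOTSTRAP_SERVERS=my-cluster-kafka-bootstrap:9092
export INFLUXDB_URL=http://influxdb:8086
export INFLUXDB_DB=formula1
```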
You can set the environment variables locally and then run the application with the following command.
java -jar kafka-influxdb/target/f1-telemetry-kafka-influxdb-1.0-SNAPSHOT-jar-with-dependencies.jar
Alternatively, you can deploy the application to Kubernetes by customizing the environment variables in the `env` section of the Apache Kafka to InfluxDB `Deployment` and then applying the resource.
kubectl apply -f deployment/f1-telemetry-kafka-influxdb.yaml
The Apache Kafka Streams application can run locally or can be deployed on Kubernetes; it depends on where the overall stack is running. The main parameters for the application can be set via the following environment variables:
- `KAFKA_BOOTSTRAP_SERVERS`: the bootstrap servers for connecting to the Apache Kafka cluster. Default is `localhost:9092`.
- `KAFKA_TLS_ENABLED`: whether TLS has to be enabled for connecting to the Apache Kafka cluster. Default is `false`. NOTE: if it is enabled but a truststore is not provided, the JVM system truststore is used by default.
- `KAFKA_TRUSTSTORE_LOCATION`: the path to the truststore containing the certificates to connect to the Apache Kafka cluster when TLS is enabled. Not set by default.
- `KAFKA_TRUSTSTORE_PASSWORD`: the password for the truststore containing the certificates to connect to the Apache Kafka cluster when TLS is enabled. Not set by default.
- `KAFKA_SASL_MECHANISM`: the SASL mechanism to be used for authentication. Only the `PLAIN` mechanism is supported. Not set by default.
- `KAFKA_SASL_USERNAME`: the username to be used when the SASL mechanism is `PLAIN`. Not set by default.
- `KAFKA_SASL_PASSWORD`: the password to be used when the SASL mechanism is `PLAIN`. Not set by default.
- `F1_STREAMS_INTERNAL_REPLICATION_FACTOR`: the replication factor for the internal topics that the Kafka Streams application creates (changelog, repartitioning, ...). Default is `1`.
- `F1_STREAMS_INPUT_TOPIC`: the Apache Kafka topic from which `Driver` messages are read. Default is `f1-telemetry-drivers`.
- `F1_STREAMS_OUTPUT_TOPIC`: the Apache Kafka topic to which messages with the processed average speed are sent. Default is `f1-telemetry-drivers-avg-speed`.
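For instance, when running against a three-broker cluster you would typically raise the replication factor of the internal topics accordingly (the bootstrap address is a placeholder):

```shell
# Placeholder bootstrap address; the replication factor should not exceed
# the number of brokers in the cluster.
export KAFKA_BOOTSTRAP_SERVERS=my-cluster-kafka-bootstrap:9092
export F1_STREAMS_INTERNAL_REPLICATION_FACTOR=3
```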
You can set the environment variables locally and then run the application with the following command.
java -jar streams-avg-speed/target/f1-telemetry-streams-avg-speed-1.0-SNAPSHOT-jar-with-dependencies.jar
Alternatively, you can deploy the application to Kubernetes by customizing the environment variables in the `env` section of the Apache Kafka Streams `Deployment` and then applying the resource.
kubectl apply -f deployment/f1-telemetry-streams-avg-speed.yaml
The Apache Kafka Streams application can run locally or can be deployed on Kubernetes; it depends on where the overall stack is running. The main parameters for the application can be set via the following environment variables:
- `KAFKA_BOOTSTRAP_SERVERS`: the bootstrap servers for connecting to the Apache Kafka cluster. Default is `localhost:9092`.
- `KAFKA_TLS_ENABLED`: whether TLS has to be enabled for connecting to the Apache Kafka cluster. Default is `false`. NOTE: if it is enabled but a truststore is not provided, the JVM system truststore is used by default.
- `KAFKA_TRUSTSTORE_LOCATION`: the path to the truststore containing the certificates to connect to the Apache Kafka cluster when TLS is enabled. Not set by default.
- `KAFKA_TRUSTSTORE_PASSWORD`: the password for the truststore containing the certificates to connect to the Apache Kafka cluster when TLS is enabled. Not set by default.
- `KAFKA_SASL_MECHANISM`: the SASL mechanism to be used for authentication. Only the `PLAIN` mechanism is supported. Not set by default.
- `KAFKA_SASL_USERNAME`: the username to be used when the SASL mechanism is `PLAIN`. Not set by default.
- `KAFKA_SASL_PASSWORD`: the password to be used when the SASL mechanism is `PLAIN`. Not set by default.
- `F1_STREAMS_INTERNAL_REPLICATION_FACTOR`: the replication factor for the internal topics that the Kafka Streams application creates (changelog, repartitioning, ...). Default is `1`.
- `F1_STREAMS_INPUT_TOPIC`: the Apache Kafka topic from which `Driver` messages are read. Default is `f1-telemetry-drivers`.
- `F1_STREAMS_OUTPUT_TOPIC`: the Apache Kafka topic to which messages with the processed best overall time per sector are sent. Default is `f1-telemetry-drivers-laps`.
You can set the environment variables locally and then run the application with the following command.
java -jar streams-laps/target/f1-telemetry-streams-laps-1.0-SNAPSHOT-jar-with-dependencies.jar
Alternatively, you can deploy the application to Kubernetes by customizing the environment variables in the `env` section of the Apache Kafka Streams `Deployment` and then applying the resource.
kubectl apply -f deployment/f1-telemetry-streams-laps.yaml
If you don't have a console and the F1 game, you can still try the entire pipeline by using the telemetry data stored in the SQLite3 database provided in the `databases` repository folder. It provides telemetry data for just the first lap of the Azerbaijan Grand Prix 2020 (Baku).
In order to use it, you need to install the f1-2020-telemetry Python library, which comes with a CLI that allows you to read the telemetry data from the SQLite3 database and send it over UDP as if it were coming from the F1 game on the console.
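The library is published on PyPI, so it can typically be installed with pip:

```shell
# Install the f1-2020-telemetry package, which provides the
# f1-2020-telemetry-player CLI used below.
pip install f1-2020-telemetry
```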
f1-2020-telemetry-player F1_2020_BAKU.sqlite3