
ACM Configuration & Operation

The ASN.1 Codec Module (ACM) processes Kafka data streams that present ODE Metadata wrapped ASN.1 data. It can perform one of two functions depending on how it is started:

  1. Decode: This function is used to process messages from the connected vehicle environment to ODE subscribers. Specifically, the ACM extracts binary data from consumed messages (ODE Metadata Messages) and decodes the binary ASN.1 data into a structure that is subsequently encoded into an alternative format more suitable for ODE subscribers (currently XML using XER).

  2. Encode: This function is used to process messages from the ODE to the connected vehicle environment. Specifically, the ACM extracts human-readable data from ODE Metadata and decodes it into a structure that is subsequently encoded into ASN.1 binary data.

ASN.1 Codec Operations

ACM Command Line Options

The ACM can be started by specifying only the configuration file. Command line options are also available. Command line options override parameters specified in the configuration file. The following command line options are available:

-T | --codec-type      : The type of codec to use: decode or encode; defaults to decode.
-h | --help            : Print out the help message.
-i | --log             : Log file name.
-R | --log-rm          : Remove specified/default log files if they exist.
-D | --log-dir         : Directory for the log files.
-F | --infile          : Accept a file and bypass Kafka.
-t | --produce-topic   : The name of the topic to produce.
-p | --partition       : Consumer topic partition from which to read.
-C | --config-check    : Check the configuration file contents and output the settings.
-o | --offset          : Byte offset to start reading in the consumed topic.
-d | --debug           : Debug level.
-c | --config          : Configuration file name and path.
-g | --group           : Consumer group identifier.
-b | --broker          : Broker address (e.g., localhost:9092).
-x | --exit            : Exit consumer when last message in partition has been received.
-v | --log-level       : The log level [trace,debug,info,warning,error,critical,off].
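
For example (the configuration file path is illustrative), a decoder that reads from partition 0, logs at the info level, and exits when the partition has been fully consumed could be started as follows:

$ ./acm -c config/example.properties -T decode -p 0 -v info -x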

Environment Variables

The following environment variables are used by the ACM:


| Variable | Description |
| --- | --- |
| DOCKER_HOST_IP | The IP address of the machine running the Kafka cluster. |
| ACM_LOG_TO_CONSOLE | Whether or not to log to the console. |
| ACM_LOG_TO_FILE | Whether or not to log to a file. |
| ACM_LOG_LEVEL | The log level to use. Valid values are: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL", "OFF". |
| KAFKA_TYPE | If unset, a local Kafka broker will be targeted. If set to "CONFLUENT", the application will target a Confluent Cloud cluster. |
| CONFLUENT_KEY | Confluent Cloud integration key (used if KAFKA_TYPE is set to "CONFLUENT"). |
| CONFLUENT_SECRET | Confluent Cloud integration secret (used if KAFKA_TYPE is set to "CONFLUENT"). |

The sample.env file contains the default values for some of these environment variables. To use these values, copy the sample.env file to .env and modify the values as needed.
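
For example (the values shown here are placeholders, not the defaults from sample.env):

$ cp sample.env .env
$ cat .env
DOCKER_HOST_IP=192.168.1.10
ACM_LOG_TO_CONSOLE=true
ACM_LOG_TO_FILE=false
ACM_LOG_LEVEL=INFO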

ACM Deployment

Once the ACM is installed and configured, it operates as a background service. The ACM can be started before or after other services. If started before the other services, it may produce some error messages while it waits to connect. The following command will start the ACM as a decoder based on a configuration file.

$ ./acm -c <configuration file> -T decode

Multiple ACM processes can be started, each with its own configuration file. To deploy both a decoder and an encoder, start two separate ACM processes with different -T options; in this case, different topics should be specified in the configuration files, as shown below.
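
For example (the configuration file names are illustrative), a decoder and an encoder could run side by side:

$ ./acm -c config/decoder.properties -T decode &
$ ./acm -c config/encoder.properties -T encode &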

ACM ASN.1 Data Sources

The ACM receives XML data from the ODE; the schema for this XML message is described in Metadata.md on the ODE. Currently, the ACM extracts either a hex encoded string or an XML child document in the //payload/data branch and decodes or encodes that data into the desired output form. The converted data is re-inserted into the XML document and that document is produced to the output topic. Metadata fields determine which type of encoding/decoding is needed and how to search for the correct data within the decoded documents.

When decoding data from the CV environment, the ACM processes either IEEE 1609.2 wrapped J2735 MessageFrames (any type) or MessageFrames by themselves. If 1609.2 wrapped, the ACM finds and extracts the unsecuredData to further decode the J2735 MessageFrame. The decoder uses the elementType and encoderRule tags to determine which types of decoding to perform.

When encoding data, the ACM can encode combinations of Advisory Situation Data, IEEE 1609.2, and J2735 MessageFrames. Advisory Situation Data and IEEE 1609.2 can both contain a wrapped J2735 MessageFrame, and Advisory Situation Data frames can contain wrapped IEEE 1609.2 data. Therefore, there are 7 possible combinations of data types the encoding module can handle:

  • J2735 MessageFrame
  • Advisory Situation Data*
  • IEEE 1609.2*
  • Advisory Situation Data wrapping a J2735 MessageFrame
  • IEEE 1609.2 wrapping a J2735 MessageFrame
  • Advisory Situation Data wrapping IEEE 1609.2* data
  • Advisory Situation Data wrapping IEEE 1609.2 wrapping a J2735 MessageFrame

* Denotes the message should already contain hex data, according to the ASN.1 specification for that message. For instance, IEEE 1609.2 must contain hex data in its unsecuredData tag. If the hex data is missing or invalid, the ACM will likely generate an error when doing constraint checking.

The payload dataType text in the ODE Metadata is updated to reflect the converted payload:

  • After ENCODING, this text is changed to: us.dot.its.jpo.ode.model.OdeHexByteArray
  • After DECODING, this text is changed to: us.dot.its.jpo.ode.model.OdeXml
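
A minimal sketch of the payload branch after encoding (element names other than payload, dataType, and data are assumptions; see Metadata.md for the authoritative schema):

<payload>
  <dataType>us.dot.its.jpo.ode.model.OdeHexByteArray</dataType>
  <data>
    <bytes>001425...</bytes>   <!-- hex-encoded ASN.1 binary; truncated for illustration -->
  </data>
</payload>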

Both the ENCODER and DECODER will check the ASN.1 constraints for the C structures that are built as data passes through the module.

ACM Kafka Limitations

With regard to the Apache Kafka architecture, the ACM does not currently take advantage of Kafka's scalability: each ACM process consumes data from a single Kafka topic and a single partition within that topic. One way to consume topics with multiple partitions is to launch one ACM process for each partition; the configuration file allows you to designate the partition. In the future, the ACM may be updated to automatically handle multiple partitions within a single topic.
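
For example (topic layout assumed), a two-partition topic could be covered by two processes sharing one configuration file, using the -p option to override the partition:

$ ./acm -c config/example.properties -T decode -p 0 &
$ ./acm -c config/example.properties -T decode -p 1 &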

ACM Logging

ACM operations are optionally logged to the console and/or to a file. The file is a rotating log file, i.e., a set number of log files will be used to record the ACM's information. By default, the file is written to a logs directory under the directory from which the ACM is launched, and the file is named log. The maximum size of a log file is 5MB and 5 files are rotated. Logging configuration is controlled through the command line, not through the configuration file. The following options are available:

  • -R : When the ACM starts remove any log files having either the default or user specified names; otherwise, new log entries will be appended to existing files.

  • -D : The directory where the log files should be written. This can be relative or absolute. If the directory does not exist, it will be created.

  • -i : The log file's name.

  • -v : The minimum level of message to write to the log. From lowest to highest, the message levels are trace, debug, info, warning, error, and critical; off disables logging. As an example, if you specify info then all messages that are info, warning, error, or critical will be written to the log.
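
For example (the paths are illustrative), the following starts the ACM with fresh log files written to a custom directory at the info level:

$ ./acm -c config/example.properties -R -D /var/log/acm -i acm.log -v info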

When the ACM starts, it writes the configuration it will use to the log as info messages. All log messages are preceded with a date and time stamp and the level of the log message.

[171011 18:25:55.221276] [trace] starting configure()
[171011 18:25:55.221341] [info] using configuration file: config/example.properties
[171011 18:25:55.221407] [info] kafka configuration: group.id = 0
[171011 18:25:55.221413] [info] ASN1_Codec configuration: asn1.j2735.topic.consumer = j2735asn1per 
[171011 18:25:55.221417] [info] ASN1_Codec configuration: asn1.j2735.topic.producer = j2735asn1xer
[171011 18:25:55.221420] [info] ASN1_Codec configuration: asn1.j2735.consumer.timeout.ms = 5000
[171011 18:25:55.221423] [info] ASN1_Codec configuration: asn1.j2735.kafka.partition = 0
[171011 18:25:55.221425] [info] kafka configuration: metadata.broker.list = 172.17.0.1:9092
[171011 18:25:55.221428] [info] ASN1_Codec configuration: compression.type = none
[171011 18:25:55.221434] [info] kafka partition: 0
[171011 18:25:55.221440] [info] consumed topic: j2735asn1per
[171011 18:25:55.221441] [info] published topic: j2735asn1xer
[171011 18:25:55.221442] [trace] ending configure()

ACM Configuration

The ACM configuration file is a text file with a prescribed format. It can be used to configure Kafka as well as the ACM. Comments can be added to the configuration file by starting a line with the '#' character. Configuration lines consist of two strings separated by a '=' character; lines are terminated by newlines. The names of configuration files can be anything; extensions do not matter.

The following is an example of a portion of a configuration file:

# Kafka group.
group.id=0

# Kafka topics for ASN.1 Parsing
asn1.topic.consumer=j2735asn1per
asn1.topic.producer=j2735asn1xer

# Amount of time to wait when no message is available (milliseconds)
asn1.consumer.timeout.ms=5000

# For testing purposes, use one partition.
asn1.kafka.partition=0

# The host ip address for the Broker.
metadata.broker.list=localhost:9092

# specify the compression codec for all data generated: none, gzip, snappy, lz4
compression.type=none

Example configuration files can be found in the asn1_codec/config directory, e.g., example.properties is an example of a complete configuration file.
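
The -C option can be used to validate a configuration file and echo the resulting settings before deployment; for example:

$ ./acm -c config/example.properties -C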

The details of the settings and how they affect the function of the ACM follow:

ODE Kafka Interface

  • asn1.topic.producer : The Kafka topic name where the ACM will write its output. The name is case sensitive.

  • asn1.topic.consumer : The Kafka topic name used by the Operational Data Environment (or other producer) that will be consumed by the ACM. The name is case sensitive.

  • asn1.consumer.timeout.ms : The amount of time the consumer blocks (or waits) for a new message. If a message is received before this time has elapsed it will be processed immediately.

  • group.id : The group identifier for the ACM consumer. Consumers label themselves with a consumer group name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.

  • asn1.kafka.partition : The partition(s) consumed by this ACM. A Kafka topic can be divided, or partitioned, into several "parallel" streams. A topic may have many partitions so it can handle an arbitrary amount of data.

  • metadata.broker.list : The host and port of the Kafka broker(s) to bootstrap from, e.g., localhost:9092.

  • compression.type : The type of compression to use for writing to Kafka topics. Currently, this should be set to none.

ACM Testing with Kafka

The necessary services for testing the ACM with Kafka are provided in the docker-compose.yml file. The following steps will guide you through the process of testing the ACM with Kafka.

  1. Start the Kafka & ACM services via the provided docker-compose.yml file.
$ docker compose up --build -d
  2. Exec into the Kafka container to gain access to the Kafka command line tools.
$ docker exec -it asn1_codec_kafka_1 /bin/bash
  3. Use the kafka-console-producer.sh script (provided with the Apache Kafka installation) to send XML messages to the ACM. Each time you execute the command below, a single message is sent to the ACM.
$ cat <message> | ./bin/kafka-console-producer.sh --broker-list ${SERVER_IP} --topic ${ACM_INPUT_TOPIC}
  4. Use the kafka-console-consumer.sh script (provided with the Apache Kafka installation) to receive XML messages from the ACM. This process will wait for messages to be published by the ACM.
$ ./bin/kafka-console-consumer.sh --bootstrap-server ${SERVER_IP} --topic ${ACM_OUTPUT_TOPIC}
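
The SERVER_IP, ACM_INPUT_TOPIC, and ACM_OUTPUT_TOPIC variables above are not set by the scripts; assuming the topic names from the example configuration, they might be set as follows:

$ export SERVER_IP=localhost:9092
$ export ACM_INPUT_TOPIC=j2735asn1per
$ export ACM_OUTPUT_TOPIC=j2735asn1xer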

The log files provide more detail about the processing taking place and document any errors. To view them, exec into the ACM container and tail the log file.

$ docker exec -it asn1_codec_acm_1 /bin/bash
$ tail -f logs/log