Merge pull request #48 from CDOT-CV/develop
Revised Documentation for Accuracy & Transitioned to USDOT fork of asn1c repository
dan-du-car authored Jun 11, 2024
2 parents 5dee58f + 0e1da0a commit 681cf4a
Showing 18 changed files with 245 additions and 227 deletions.
6 changes: 3 additions & 3 deletions .gitmodules
@@ -1,6 +1,6 @@
[submodule "pugixml"]
path = pugixml
url = https://github.com/zeux/pugixml.git
[submodule "asn1c"]
path = asn1c
url = https://github.com/mouse07410/asn1c
[submodule "usdot-asn1c"]
path = usdot-asn1c
url = https://github.com/usdot-fhwa-stol/usdot-asn1c
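For readers updating an existing clone, a submodule rename like this one means the old `asn1c` checkout must be re-synced. A minimal sketch (not part of this commit):

```bash
# Re-read .gitmodules so the new submodule URL and path take effect
git submodule sync --recursive
# Fetch the replacement submodule and check out the recorded commit
git submodule update --init --recursive
```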
2 changes: 1 addition & 1 deletion Dockerfile
@@ -23,7 +23,7 @@ ADD ./pugixml /asn1_codec/pugixml
RUN cd /asn1_codec/pugixml && mkdir -p build && cd build && cmake .. && make && make install

# Build and install asn1c submodule
-ADD ./asn1c /asn1_codec/asn1c
+ADD ./usdot-asn1c /asn1_codec/asn1c
RUN cd asn1c && test -f configure || autoreconf -iv && ./configure && make && make install

# Make generated files available to the build & compile example
2 changes: 1 addition & 1 deletion Dockerfile.dev
@@ -23,7 +23,7 @@ ADD ./pugixml /asn1_codec/pugixml
RUN cd /asn1_codec/pugixml && mkdir -p build && cd build && cmake .. && make && make install

# Build and install asn1c submodule
-ADD ./asn1c /asn1_codec/asn1c
+ADD ./usdot-asn1c /asn1_codec/asn1c
RUN cd asn1c && test -f configure || autoreconf -iv && ./configure && make && make install

# Make generated files available to the build & compile example
2 changes: 1 addition & 1 deletion Dockerfile.standalone
@@ -22,7 +22,7 @@ ADD ./pugixml /asn1_codec/pugixml
RUN cd /asn1_codec/pugixml && mkdir -p build && cd build && cmake .. && make && make install

# Build and install asn1c submodule
-ADD ./asn1c /asn1_codec/asn1c
+ADD ./usdot-asn1c /asn1_codec/asn1c
RUN cd asn1c && test -f configure || autoreconf -iv && ./configure && make && make install

# Make generated files available to the build & compile example
51 changes: 24 additions & 27 deletions README.md
@@ -6,7 +6,7 @@ one of two functions depending on how it is started:

1. **Decode**: This function is used to process messages *from* the connected
vehicle environment *to* ODE subscribers. Specifically, the ACM extracts binary
-data from consumed messages (ODE Metatdata Messages) and decodes the binary
+data from consumed messages (ODE Metadata Messages) and decodes the binary
ASN.1 data into a structure that is subsequently encoded into an alternative
format more suitable for ODE subscribers (currently XML using XER).

@@ -17,22 +17,25 @@ is subsequently *encoded into ASN.1 binary data*.

![ASN.1 Codec Operations](docs/graphics/asn1codec-operations.png)

-# Table of Contents
+## Table of Contents

**README.md**
1. [Release Notes](#release-notes)
1. [Getting Involved](#getting-involved)
1. [Documentation](#documentation)
+1. [Generating C Files from ASN.1 Definitions](#generating-c-files-from-asn1-definitions)
+1. [Confluent Cloud Integration](#confluent-cloud-integration)

**Other Documents**
1. [Installation](docs/installation.md)
1. [Configuration and Operation](docs/configuration.md)
1. [Interface](docs/interface.md)
1. [Testing](docs/testing.md)
-1. [Project Management](#project-management)
-1. [Confluent Cloud Integration](#confluent-cloud-integration)

## Release Notes
The current version and release history of the asn1_codec: [asn1_codec Release Notes](<docs/Release_notes.md>)

-# Getting Involved
+## Getting Involved

This project is sponsored by the U.S. Department of Transportation and supports Operational Data Environment data type
conversions. Here are some ways you can start getting involved in this project:
@@ -42,7 +45,7 @@ conversions. Here are some ways you can start getting involved in this project:
- If you would like to improve this code base or the documentation, [fork the project](https://github.com/usdot-jpo-ode/asn1_codec#fork-destination-box) and submit a pull request.
- If you find a problem with the code or the documentation, please submit an [issue](https://github.com/usdot-jpo-ode/asn1_codec/issues/new).

-## Introduction
+### Introduction

This project uses the [Pull Request Model](https://help.github.com/articles/using-pull-requests). This involves the following project components:

@@ -56,7 +59,7 @@ request to the organization asn1_codec project. One the project's main developer
or, if there are issues, discuss them with the submitter. This will ensure that the developers have a better
understanding of the code base *and* we catch problems before they enter `master`. The following process should be followed:

-## Initial Setup
+### Initial Setup

1. If you do not have one yet, create a personal (or organization) account on GitHub (assume your account name is `<your-github-account-name>`).
1. Log into your personal (or organization) account.
@@ -66,19 +69,19 @@ understanding of the code base *and* we catch problems before they enter `master
$ git clone https://github.com/<your-github-account-name>/asn1_codec.git
```
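A common follow-up to the fork-and-clone steps above (a sketch assuming the standard fork workflow; not part of the original instructions) is to track the upstream repository so your fork can be kept current:

```bash
cd asn1_codec
# Register the organization repository as a remote named "upstream"
git remote add upstream https://github.com/usdot-jpo-ode/asn1_codec.git
git fetch upstream
```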

-## Additional Resources for Initial Setup
+### Additional Resources for Initial Setup

- [About Git Version Control](http://git-scm.com/book/en/v2/Getting-Started-About-Version-Control)
- [First-Time Git Setup](http://git-scm.com/book/en/Getting-Started-First-Time-Git-Setup)
- [Article on Forking](https://help.github.com/articles/fork-a-repo)

-# Documentation
+## Documentation

This documentation is in the `README.md` file. Additional information can be found using the links in the [Table of
Contents](#table-of-contents). All stakeholders are invited to provide input to these documents. Stakeholders should
direct all input on this document to the JPO Product Owner at DOT, FHWA, or JPO.

-## Code Documentation
+### Code Documentation

Code documentation can be generated using [Doxygen](https://www.doxygen.org) by following the commands below:

@@ -89,36 +92,30 @@ $ doxygen
```

The documentation is in HTML and is written to the `<install root>/asn1_codec/docs/html` directory. Open `index.html` in a
-browser.
-
-## Project Management
-
-This project is managed using the Jira tool.
-
-- [Jira Project Portal](https://usdotjpoode.atlassian.net/secure/Dashboard.jsp)
+browser.
+
+## Generating C Files from ASN.1 Definitions
+Check here for instructions on how to generate C files from ASN.1 definitions: [ASN.1 C File Generation](asn1c_combined/README.md)
+
+This should only be necessary if the ASN.1 definitions change. The generated files are already included in the repository.

-# Confluent Cloud Integration
+## Confluent Cloud Integration
Rather than using a local kafka instance, the ACM can utilize an instance of kafka hosted by Confluent Cloud via SASL.

-## Environment variables
-### Purpose & Usage
+### Environment variables
+#### Purpose & Usage
- The DOCKER_HOST_IP environment variable is used to communicate with the bootstrap server that the instance of Kafka is running on.
- The KAFKA_TYPE environment variable specifies what type of kafka connection will be attempted and is used to check if Confluent should be utilized.
- The CONFLUENT_KEY and CONFLUENT_SECRET environment variables are used to authenticate with the bootstrap server.

-### Values
+#### Values
- DOCKER_HOST_IP must be set to the bootstrap server address (excluding the port)
- KAFKA_TYPE must be set to "CONFLUENT"
- CONFLUENT_KEY must be set to the API key being utilized for CC
- CONFLUENT_SECRET must be set to the API secret being utilized for CC
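For illustration, these could be supplied as shell exports before starting the container (all values below are placeholders, not working credentials):

```bash
export DOCKER_HOST_IP=192.0.2.10   # bootstrap server address, port excluded
export KAFKA_TYPE=CONFLUENT
export CONFLUENT_KEY=<your-api-key>
export CONFLUENT_SECRET=<your-api-secret>
```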

-## CC Docker Compose File
+### CC Docker Compose File
There is a provided docker-compose file (docker-compose-confluent-cloud.yml) that passes the above environment variables into the container that gets created. Further, this file doesn't spin up a local kafka instance since it is not required.

-## Note
-This has only been tested with Confluent Cloud but technically all SASL authenticated Kafka brokers can be reached using this method.
-
-# Generating C Files from ASN.1 Definitions
-Check here for instructions on how to generate C files from ASN.1 definitions: [ASN.1 C File Generation](asn1c_combined/README.md)
-
-This should only be necessary if the ASN.1 definitions change. The generated files are already included in the repository.
+### Note
+This has only been tested with Confluent Cloud but technically all SASL authenticated Kafka brokers can be reached using this method.
12 changes: 12 additions & 0 deletions asn1c_combined/README.md
@@ -6,6 +6,18 @@ This script expects the necessary files to have already been generated by the `g
## Generating the C Code
The necessary files can be generated using the `generate-files.sh` script. This script will reference the necessary ASN.1 files from the `j2735-asn-files` directory from the specified year and generate the C code in the `generated-files` directory.
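A hypothetical invocation (the year argument is an assumption based on the description above; consult the script itself for its exact interface):

```bash
./generate-files.sh 2020
```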

### Installing asn1c
The `generate-files.sh` script requires the `asn1c` compiler to be installed. The `asn1c` compiler can be installed in WSL using the following commands:

```bash
cd ./usdot-asn1c
aclocal
test -f configure || autoreconf -iv
./configure
make
sudo make install
```
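As a quick sanity check (assuming `make install` placed the compiler on the `PATH`), running `asn1c` with no arguments should print its usage banner:

```bash
asn1c
```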

### J2735 ASN Files
The 'j2735-asn-files' subdirectory should contain the ASN.1 files for the J2735 standard. These are organized by year.

Binary file modified asn1c_combined/generated-files/2016.tar.gz
Binary file modified asn1c_combined/generated-files/2020.tar.gz
4 changes: 2 additions & 2 deletions build_local.sh
@@ -19,9 +19,9 @@ initializeSubmodules(){
buildAsn1c(){
# build asn1c
echo "${GREEN}Building asn1c${NC}"
-cd ./asn1c
+cd ./usdot-asn1c
git reset --hard
-git pull origin master
+git pull origin vlm_master
aclocal
test -f ./configure || autoreconf -iv
./configure
14 changes: 14 additions & 0 deletions docs/Release_notes.md
@@ -1,6 +1,20 @@
asn1_codec Release Notes
----------------------------

Version 2.1.0, released June 2024
----------------------------------------
### **Summary**
The changes for the asn1_codec 2.1.0 release include revised documentation for accuracy and improved clarity and readability. Additionally, there has been a transition to using `usdot-fhwa-stol/usdot-asn1c` instead of `mouse07410/asn1c` for the ASN.1 Compiler.

Enhancements in this release:
- CDOT PR 25: Revised documentation for accuracy & improved clarity/readability
- CDOT PR 26: Transitioned to using `usdot-fhwa-stol/usdot-asn1c` instead of `mouse07410/asn1c` for ASN.1 Compiler

Known Issues:
- The do_kafka_test.sh script in the project's root directory is currently not running successfully. The issue is being investigated and will be addressed in a future update.
- According to Valgrind, a minor memory leak has been detected. The development team is aware of this and is actively working on resolving it.


Version 2.0.0, released February 2024
----------------------------------------

61 changes: 40 additions & 21 deletions docs/configuration.md
@@ -1,12 +1,12 @@
-# ACM Operation
+# ACM Configuration & Operation

The ASN.1 Codec Module (ACM) processes Kafka data streams that present [ODE
Metadata](http://github.com/usdot-jpo-ode/jpo-ode/blob/develop/docs/Metadata_v3.md) wrapped ASN.1 data. It can perform
one of two functions depending on how it is started:

1. **Decode**: This function is used to process messages *from* the connected
vehicle environment *to* ODE subscribers. Specifically, the ACM extracts binary
-data from consumed messages (ODE Metatdata Messages) and decodes the binary
+data from consumed messages (ODE Metadata Messages) and decodes the binary
ASN.1 data into a structure that is subsequently encoded into an alternative
format more suitable for ODE subscribers (currently XML using XER).

@@ -15,7 +15,7 @@ the connected vehicle environment. Specifically, the ACM extracts
human-readable data from ODE Metadata and decodes it into a structure that
is subsequently *encoded into ASN.1 binary data*.

-![ASN.1 Codec Operations](https://github.com/usdot-jpo-ode/asn1_codec/blob/master/docs/graphics/asn1codec-operations.png)
+![ASN.1 Codec Operations](graphics/asn1codec-operations.png)

## ACM Command Line Options

@@ -41,7 +41,7 @@ line options override parameters specified in the configuration file.** The foll
-v | --log-level : The info log level [trace,debug,info,warning,error,critical,off]
```
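For example, a decoder instance might be launched as follows (a sketch; the configuration file name is illustrative):

```bash
$ ./acm -c config/example.properties -T decode -v info
```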
-# ACM Deployment
## Environment Variables
The following environment variables are used by the ACM:
-------------------
| Variable | Description |
| --- | --- |
| `DOCKER_HOST_IP` | The IP address of the machine running the kafka cluster. |
| `ACM_LOG_TO_CONSOLE` | Whether or not to log to the console. |
| `ACM_LOG_TO_FILE` | Whether or not to log to a file. |
| `ACM_LOG_LEVEL` | The log level to use. Valid values are: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL", "OFF" |
| `KAFKA_TYPE` | If unset, a local kafka broker will be targeted. If set to "CONFLUENT", the application will target a Confluent Cloud cluster. |
| `CONFLUENT_KEY` | Confluent Cloud Integration (if KAFKA_TYPE is set to "CONFLUENT") |
| `CONFLUENT_SECRET` | Confluent Cloud Integration (if KAFKA_TYPE is set to "CONFLUENT") |
The `sample.env` file contains the default values for some of these environment variables. To use these values, copy the `sample.env` file to `.env` and modify the values as needed.
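A minimal setup sketch:

```bash
# Copy the shipped defaults, then edit .env as needed
cp sample.env .env
```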
+## ACM Deployment
Once the ACM is [installed and configured](installation.md) it operates as a background service. The ACM can be started
before or after other services. If started before the other services, it may produce some error messages while it waits
@@ -55,7 +71,7 @@ Multiple ACM processes can be started. Each ACM will use its own configuration f
deploy a decoder and an encoder, start two separate ACM services with different `-T` options. In this case different
topics should be specified in the configuration files.
-# ACM ASN.1 Data Sources
+## ACM ASN.1 Data Sources
The ACM receives XML data from the ODE; the schema for this XML message is described in
[Metadata.md](https://github.com/usdot-jpo-ode/jpo-ode/blob/develop/docs/Metadata_v3.md) on the ODE. Currently, the ACM
@@ -84,23 +100,23 @@ can handle:
\* Denotes the message should already contain hex data, according to the ASN.1 specification for that message.
For instance, IEEE 1609.2 must contain hex data in its `unsecuredData` tag. If the hex data is missing or invalid,
-the ACM with likely generate an error when doing constraint checking.
+the ACM will likely generate an error when doing constraint checking.
- After ENCODING this text is changed to: `us.dot.its.jpo.ode.model.OdeHexByteArray`
- After DECODING this text is changed to: `us.dot.its.jpo.ode.model.OdeXml`
Both the ENCODER and DECODER will check the ASN.1 constraints for the C structures that are built as data passes through
the module.
-# ACM Kafka Limitations
+## ACM Kafka Limitations
With regard to the Apache Kafka architecture, each ACM process does **not** provide a way to take advantage of Kafka's scalable
architecture. In other words, each ACM process will consume data from a single Kafka topic and a single partition within
that topic. One way to consume topics with multiple partitions is to launch one ACM process for each partition; the
configuration file will allow you to designate the partition. In the future, the ACM may be updated to automatically
handle multiple partitions within a single topic.
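A sketch of that one-process-per-partition workaround (the file names are hypothetical; each configuration file would pin a different partition):

```bash
$ ./acm -c config/decode-partition-0.properties -T decode &
$ ./acm -c config/decode-partition-1.properties -T decode &
```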
-# ACM Logging
+## ACM Logging
ACM operations are optionally logged to the console and/or to a file. The file is a rotating log file, i.e., a set number of log files will
be used to record the ACM's information. By default, the file is in a `logs` directory from where the ACM is launched and the file is
@@ -136,7 +152,7 @@ preceded with a date and time stamp and the level of the log message.
[171011 18:25:55.221442] [trace] ending configure()
```
-# ACM Configuration
+## ACM Configuration
The ACM configuration file is a text file with a prescribed format. It can be used to configure Kafka as well as the ACM.
Comments can be added to the configuration file by starting a line with the '#' character. Configuration lines consist
@@ -171,7 +187,7 @@ Example configuration files can be found in the [asn1_codec/config](../config) d
The details of the settings and how they affect the function of the ACM follow:
-## ODE Kafka Interface
+### ODE Kafka Interface
- `asn1.topic.producer` : The Kafka topic name where the ACM will write its output. **The name is case sensitive.**
@@ -194,20 +210,18 @@ The details of the settings and how they affect the function of the ACM follow:
- `compression.type` : The type of compression to use for writing to Kafka topics. Currently, this should be set to none.
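Putting these settings together, a configuration-file fragment might look like the following (a sketch; the topic name is a placeholder and `#` starts a comment):

```
# Kafka topic the ACM writes its output to (case sensitive)
asn1.topic.producer = <output-topic-name>
# Currently this should be set to none
compression.type = none
```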
-# ACM Testing with Kafka
+## ACM Testing with Kafka
-There are four steps that need to be started / run as separate processes.
+The necessary services for testing the ACM with Kafka are provided in the `docker-compose.yml` file. The following steps will guide you through the process of testing the ACM with Kafka.

-1. Start the `kafka-docker` container to provide the basic kafka data streaming services.
-```bash
-$ docker-compose up --no-recreate -d
-```
+1. Start the Kafka & ACM services via the provided `docker-compose.yml` file.
+```
+$ docker compose up --build -d
+```

-1. Start the ACM (here we are starting a decoder).
-```bash
-$ ./acm -c config/example.properties -T decode
-```
+1. Exec into the Kafka container to gain access to the Kafka command line tools.
+```
+$ docker exec -it asn1_codec_kafka_1 /bin/bash
+```
1. Use the provided `kafka-console-producer.sh` script (provided with the Apache Kafka installation) to send XML
@@ -225,4 +239,9 @@ $ ./bin/kafka-console-consumer.sh --bootstrap-server ${SERVER_IP} --topic ${ACM_
```
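A matching producer invocation might look like the following (a sketch; the topic variable is hypothetical and should name the topic the ACM consumes from):

```bash
$ ./bin/kafka-console-producer.sh --bootstrap-server ${SERVER_IP} --topic ${ACM_CONSUMER_TOPIC} < message.xml
```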
The log files will provide more information about the details of the processing that is taking place and document any
-errors.
+errors. To view log files, exec into the ACM container and use the `tail` command to view the log file.
+```bash
+$ docker exec -it asn1_codec_acm_1 /bin/bash
+$ tail -f logs/log
+```
