Merge pull request #88 from CDOT-CV/pr/addressing-usdot-pr-comments-6-13-2024

Addressing USDOT PR comments 6/13/2024
payneBrandon authored Jun 13, 2024
2 parents fe6b6d1 + 0663b6d commit 435db60
Showing 11 changed files with 51 additions and 30 deletions.
2 changes: 1 addition & 1 deletion docs/WYDOT.md
@@ -11,7 +11,7 @@ The current project goals for the ODE have been developed specifically for the u
- **Collect CV Data:** Connected vehicle data from the field may be collected from vehicle OBUs directly or through RSUs. Data collected include Basic Safety Messages Part I and Part II, Event Logs, and other probe data (weather sensors, etc.). These messages are ingested into the operational data environment (ODE), where the data is then further channeled to other subsystems.
- **Support Data Brokerage:** The WYDOT Data Broker is a sub-system that is responsible for interfacing with various WYDOT Transportation Management Center (TMC) systems, gathering information on current traffic conditions, incidents, construction, operator actions, and road conditions. The data broker then distributes information from PikAlert, the ODE, and the WYDOT
interfaces based on business rules. The data broker develops a traveler information message (TIM) for segments on I-80 and provides event or condition information back to the WYDOT interfaces.
- **Distribute traveler information messages (TIM):** The data broker distributes the TIM message to the operational data environment (ODE) which will then communicate the message back to the OBUs, RSUs and the situational data warehouse (SDW)
- **Distribute traveler information messages (TIM):** The data broker distributes the TIM message to the operational data environment (ODE) which will then communicate the message back to the OBUs, RSUs and the situational data exchange (SDX)
- **Store data:** Data generated by the system (both from the field and the back-office sub-systems)
are stored in the WYDOT data warehouse.

17 changes: 14 additions & 3 deletions docs/dockerhub.md
@@ -43,7 +43,7 @@ services:
ports:
- "9092:9092"
volumes:
- "${DOCKER_SHARED_VOLUME}:/bitnami"
- kafka:/bitnami
environment:
KAFKA_ENABLE_KRAFT: "yes"
KAFKA_CFG_PROCESS_ROLES: "broker,controller"
@@ -55,15 +55,22 @@ services:
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "1@kafka:9093"
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_CFG_NODE_ID: "1"
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_CREATE_TOPICS: "topic.OdeBsmPojo:1:1,topic.OdeSpatTxPojo:1:1,topic.OdeSpatPojo:1:1,topic.OdeSpatJson:1:1,topic.FilteredOdeSpatJson:1:1,topic.OdeSpatRxJson:1:1,topic.OdeSpatRxPojo:1:1,topic.OdeBsmJson:1:1,topic.FilteredOdeBsmJson:1:1,topic.OdeTimJson:1:1,topic.OdeTimBroadcastJson:1:1,topic.J2735TimBroadcastJson:1:1,topic.OdeDriverAlertJson:1:1,topic.Asn1DecoderInput:1:1,topic.Asn1DecoderOutput:1:1,topic.Asn1EncoderInput:1:1,topic.Asn1EncoderOutput:1:1,topic.SDWDepositorInput:1:1,topic.OdeTIMCertExpirationTimeJson:1:1,topic.OdeRawEncodedBSMJson:1:1,topic.OdeRawEncodedSPATJson:1:1,topic.OdeRawEncodedTIMJson:1:1,topic.OdeRawEncodedMAPJson:1:1,topic.OdeMapTxPojo:1:1,topic.OdeMapJson:1:1,topic.OdeRawEncodedSSMJson:1:1,topic.OdeSsmPojo:1:1,topic.OdeSsmJson:1:1,topic.OdeRawEncodedSRMJson:1:1,topic.OdeSrmTxPojo:1:1,topic.OdeSrmJson:1:1,topic.OdeRawEncodedPSMJson:1:1,topic.OdePsmTxPojo:1:1,topic.OdePsmJson:1:1"
KAFKA_CFG_DELETE_TOPIC_ENABLE: "true"
KAFKA_CFG_LOG_RETENTION_HOURS: 2
logging:
options:
max-size: "10m"
max-file: "5"
kafka_init:
image: bitnami/kafka:latest
depends_on:
kafka:
condition: service_started
volumes:
- ./scripts/kafka/kafka_init.sh:/kafka_init.sh
entrypoint: ["/bin/sh", "kafka_init.sh"]
ode:
image: usdotjpoode/jpo-ode:release_q3
ports:
@@ -76,6 +83,7 @@ services:
- "44910:44910/udp"
- "44920:44920/udp"
- "44930:44930/udp"
- "44940:44940/udp"
- "5555:5555/udp"
- "6666:6666/udp"
environment:
@@ -84,6 +92,9 @@ services:
ODE_SECURITY_SVCS_SIGNATURE_URI: ${ODE_SECURITY_SVCS_SIGNATURE_URI}
ODE_RSU_USERNAME: ${ODE_RSU_USERNAME}
ODE_RSU_PASSWORD: ${ODE_RSU_PASSWORD}
DATA_SIGNING_ENABLED_RSU: ${DATA_SIGNING_ENABLED_RSU}
DATA_SIGNING_ENABLED_SDW: ${DATA_SIGNING_ENABLED_SDW}
DEFAULT_SNMP_PROTOCOL: ${DEFAULT_SNMP_PROTOCOL}
depends_on:
- kafka
volumes:
6 changes: 3 additions & 3 deletions jpo-ode-consumer-example/README.md
@@ -49,7 +49,7 @@ The IP used is the location of the Kafka endpoints.
#### Create, alter, list, and describe topics.

```
kafka-topics --bootstrap-server=192.168.1.151:9092 --list
kafka-topics --bootstrap-server 192.168.1.151:9092 --list
sink1
t1
t2
@@ -58,11 +58,11 @@ t2
#### Read data from a Kafka topic and write it to standard output.

```
kafka-console-consumer --bootstrap-server=192.168.1.151:9092 --topic topic.J2735Bsm
kafka-console-consumer --bootstrap-server 192.168.1.151:9092 --topic topic.J2735Bsm
```

#### Read data from standard input and write it to a Kafka topic.

```
kafka-console-producer --bootstrap-server=192.168.1.151:9092 --topic topic.J2735Bsm
kafka-console-producer --bootstrap-server 192.168.1.151:9092 --topic topic.J2735Bsm
```
@@ -41,7 +41,7 @@ public class DdsAdvisorySituationData extends Asn1Object {
// transfer
private String recordID; // DSRC.TemporaryID -- used by the provider to overwrite
// existing record(s)
private int timeToLive; // TimeToLive -- indicates how long the SDW should persist
private int timeToLive; // TimeToLive -- indicates how long the SDX should persist
// the record(s)
private DdsGeoRegion serviceRegion; // GeoRegion, -- NW and SE corners of the region
// applicable
@@ -88,7 +88,7 @@ public class OdeProperties implements EnvironmentAware {
private Integer fileWatcherPeriod = 5; // time to wait between processing inbox directory for new files

/*
* USDOT Situation Data Clearinghouse (SDC)/ Situation Data Warehouse (SDW),
* USDOT Situation Data Clearinghouse (SDC) / Situational Data Exchange (SDX),
* a.k.a Data Distribution System (DDS) Properties
*/
// DDS WebSocket Properties
@@ -189,7 +189,7 @@ public class OdeProperties implements EnvironmentAware {
private String kafkaTopicAsn1EncoderInput = "topic.Asn1EncoderInput";
private String kafkaTopicAsn1EncoderOutput = "topic.Asn1EncoderOutput";

// SDW Depositor Module
// SDX Depositor Module
private String kafkaTopicSdwDepositorInput = "topic.SDWDepositorInput";

//Signed Tim with expiration
2 changes: 1 addition & 1 deletion jpo-ode-svcs/src/main/resources/application.properties
@@ -19,7 +19,7 @@ spring.http.multipart.max-request-size=1MB
#ode.uploadLocationBsm = bsm
#ode.uploadLocationMessageFrame = messageframe

#USDOT Situation Data Clearinghouse (SDC)/ Situation Data Warehouse (SDW) Properties
#USDOT Situation Data Clearinghouse (SDC) / Situational Data Exchange (SDX) Properties
#=========================================================================================================================

#RSU Properties (note - do not include quotes)
11 changes: 1 addition & 10 deletions kafka.md
@@ -41,16 +41,7 @@ If you don't specify a broker id in your docker-compose file, it will automatica

### Automatically create topics

If you want to have kafka-docker automatically create topics in Kafka during
creation, a ```KAFKA_CREATE_TOPICS``` environment variable can be
added in ```docker-compose.yml```.

Here is an example snippet from ```docker-compose.yml```:

environment:
KAFKA_CREATE_TOPICS: "Topic1:1:3,Topic2:1:1"

```Topic 1``` will have 1 partition and 3 replicas, ```Topic 2``` will have 1 partition and 1 replica.
If you want Kafka topics to be created automatically when the stack is brought up, modify the `scripts/kafka/kafka_init.sh` script to include the topics you want created. The script is run by a Kafka init container on startup. The default script creates all necessary topics with a replication factor of 1; if you need a different replication factor, modify the script accordingly.
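For illustration, here is a minimal sketch of what such an init script might contain. It assumes the Bitnami image's bundled `kafka-topics.sh` CLI under `/opt/bitnami/kafka/bin`; the broker address and topic list below are placeholders, not the repository's actual defaults:

```
#!/bin/sh
# Illustrative sketch only -- adjust the broker address and topic list
# to match your deployment; this is not the repository's actual script.
BROKER="kafka:9092"
TOPICS="topic.OdeBsmJson topic.OdeTimJson topic.Asn1DecoderInput topic.Asn1DecoderOutput"

for topic in $TOPICS; do
  # --if-not-exists makes the script safe to re-run when the container restarts
  /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server "$BROKER" \
    --create --if-not-exists \
    --topic "$topic" \
    --partitions 1 \
    --replication-factor 1
done
```

Because `--if-not-exists` is used, re-running the init container is harmless, and raising the `--replication-factor` value here is how you would increase replication once more than one broker is available.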

### Advertised hostname

31 changes: 25 additions & 6 deletions quickstart-compose.yml
@@ -8,7 +8,7 @@ services:
ports:
- "9092:9092"
volumes:
- "${DOCKER_SHARED_VOLUME}:/bitnami"
- kafka:/bitnami
environment:
KAFKA_ENABLE_KRAFT: "yes"
KAFKA_CFG_PROCESS_ROLES: "broker,controller"
@@ -20,15 +20,22 @@ services:
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "1@kafka:9093"
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_CFG_NODE_ID: "1"
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_CREATE_TOPICS: "topic.OdeBsmPojo:1:1,topic.OdeSpatTxPojo:1:1,topic.OdeSpatPojo:1:1,topic.OdeSpatJson:1:1,topic.FilteredOdeSpatJson:1:1,topic.OdeSpatRxJson:1:1,topic.OdeSpatRxPojo:1:1,topic.OdeBsmJson:1:1,topic.FilteredOdeBsmJson:1:1,topic.OdeTimJson:1:1,topic.OdeTimBroadcastJson:1:1,topic.J2735TimBroadcastJson:1:1,topic.OdeDriverAlertJson:1:1,topic.Asn1DecoderInput:1:1,topic.Asn1DecoderOutput:1:1,topic.Asn1EncoderInput:1:1,topic.Asn1EncoderOutput:1:1,topic.SDWDepositorInput:1:1,topic.OdeTIMCertExpirationTimeJson:1:1,topic.OdeRawEncodedBSMJson:1:1,topic.OdeRawEncodedSPATJson:1:1,topic.OdeRawEncodedTIMJson:1:1,topic.OdeRawEncodedMAPJson:1:1,topic.OdeMapTxPojo:1:1,topic.OdeMapJson:1:1,topic.OdeRawEncodedSSMJson:1:1,topic.OdeSsmPojo:1:1,topic.OdeSsmJson:1:1,topic.OdeRawEncodedSRMJson:1:1,topic.OdeSrmTxPojo:1:1,topic.OdeSrmJson:1:1,topic.OdeRawEncodedPSMJson:1:1,topic.OdePsmTxPojo:1:1,topic.OdePsmJson:1:1"
KAFKA_CFG_DELETE_TOPIC_ENABLE: "true"
KAFKA_CFG_LOG_RETENTION_HOURS: 2
logging:
options:
max-size: "10m"
max-file: "5"


kafka_init:
image: bitnami/kafka:latest
depends_on:
kafka:
condition: service_started
volumes:
- ./scripts/kafka/kafka_init.sh:/kafka_init.sh
entrypoint: ["/bin/sh", "kafka_init.sh"]

ode:
build: .
image: jpoode_ode:latest
@@ -37,6 +44,12 @@ services:
- "9090:9090"
- "46753:46753/udp"
- "46800:46800/udp"
- "47900:47900/udp"
- "44900:44900/udp"
- "44910:44910/udp"
- "44920:44920/udp"
- "44930:44930/udp"
- "44940:44940/udp"
- "5555:5555/udp"
- "6666:6666/udp"
environment:
@@ -45,12 +58,18 @@ services:
ODE_SECURITY_SVCS_SIGNATURE_URI: ${ODE_SECURITY_SVCS_SIGNATURE_URI}
ODE_RSU_USERNAME: ${ODE_RSU_USERNAME}
ODE_RSU_PASSWORD: ${ODE_RSU_PASSWORD}
# Commented out for latest schemaVersion. Uncomment to set for older schemaVersion
# ODE_OUTPUT_SCHEMA_VERSION: ${ODE_OUTPUT_SCHEMA_VERSION}
DATA_SIGNING_ENABLED_RSU: ${DATA_SIGNING_ENABLED_RSU}
DATA_SIGNING_ENABLED_SDW: ${DATA_SIGNING_ENABLED_SDW}
DEFAULT_SNMP_PROTOCOL: ${DEFAULT_SNMP_PROTOCOL}
depends_on:
- kafka
volumes:
- ${DOCKER_SHARED_VOLUME}:/jpo-ode
- ${DOCKER_SHARED_VOLUME}/uploads:/home/uploads
logging:
options:
max-size: "10m"
max-file: "5"

adm:
build: ./asn1_codec
2 changes: 1 addition & 1 deletion sample.env
@@ -111,7 +111,7 @@ RDE_TIM_HEADER_X_API_KEY=
RDE_TIM_GROUP=group_rde_tim

#########################
# SDW Depositor Properties
# SDX Depositor Properties

## Required if using SDX depositor module (REST interface)
SDW_API_KEY=
2 changes: 1 addition & 1 deletion scripts/start-kafka-consumer.sh
@@ -6,4 +6,4 @@ if [[ -z "$1" ]]; then
exit 1;
fi

$KAFKA_HOME/bin/kafka-console-consumer.sh --topic=$1 --bootstrap-server=`broker-list.sh`
$KAFKA_HOME/bin/kafka-console-consumer.sh --topic $1 --bootstrap-server `broker-list.sh`
2 changes: 1 addition & 1 deletion scripts/start-kafka-producer.sh
@@ -6,4 +6,4 @@ if [[ -z "$1" ]]; then
exit 1;
fi

$KAFKA_HOME/bin/kafka-console-producer.sh --topic=$1 --broker-list=`broker-list.sh`
$KAFKA_HOME/bin/kafka-console-producer.sh --topic $1 --broker-list `broker-list.sh`
