Sync develop to master (#258)
SaikrishnaBairamoni authored Sep 11, 2024
2 parents 8eacb18 + 30f6b00 commit c028674
Showing 26 changed files with 586 additions and 333 deletions.
11 changes: 5 additions & 6 deletions README.md
@@ -3,30 +3,29 @@
|-----|-----|-----|
[![Docker build](https://github.com/usdot-fhwa-stol/cda-telematics/actions/workflows/docker.yml/badge.svg?branch=develop)](https://github.com/usdot-fhwa-stol/cda-telematics/actions/workflows/docker.yml)| [![Docker build](https://github.com/usdot-fhwa-stol/cda-telematics/actions/workflows/docker.yml/badge.svg?branch=master)](https://github.com/usdot-fhwa-stol/cda-telematics/actions/workflows/docker.yml)| [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=usdot-fhwa-stol_cda-telematics&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=usdot-fhwa-stol_cda-telematics)


# CDA-Telematics
This project will create an open-source Module that can be installed on any vehicle (e.g. a CARMA Platform and/or Messenger vehicle, an L0 or L1 production vehicle, etc.) that will collect data about the vehicle and wirelessly send it out in real time for data analysis. The same Module, with any modifications, if necessary, will also be compatible with CARMA Streets and CARMA Cloud. On the receiving end of this data, a user will have a Data Processing & Visualization Tool available to visualize and/or plot the data that was sent using the Module(s). This Module can be thought of as a Fleet Management tool with extra capabilities to support CDA research and education.

## Architecture Diagram
[Detailed Design](https://usdot-carma.atlassian.net/wiki/spaces/WFD2/pages/2230321179/Detailed+System+Design)

![architecture](https://user-images.githubusercontent.com/34483068/171265484-67177ebb-69f7-4286-9602-016043079958.png)

## Release Notes
The current version of CDA-Telematics tool and release history of the CARMA software platform: [CARMA Release Notes](<docs/Release_notes.md>)

## Documentation
Documentation of the setup, operation, and design of the CDA Telematics can be found on the project [Confluence](https://usdot-carma.atlassian.net/wiki/spaces/WFD2/overview) pages.


## Contribution
Welcome to the CDA Telematics contributing guide. Please read this guide to learn about our development process, how to propose pull requests and improvements, and how to build and test your changes to this project. [CDA Telematics Contributing Guide](Contributing.md)

## Code of Conduct
Please read our [CDA Telematics Code of Conduct](Code_of_Conduct.md) which outlines our expectations for participants within the developer community, as well as steps to reporting unacceptable behavior. We are committed to providing a welcoming and inspiring community for all and expect our code of conduct to be honored. Anyone who violates this code of conduct may be banned from the community.

## Attribution
The development team would like to acknowledge the people who have made direct contributions to the design and code in this repository. [CDA Telematics Attribution](ATTRIBUTION.md)

## License
By contributing to the Federal Highway Administration (FHWA) CDA Telematics repository, you agree that your contributions will be licensed under its Apache License 2.0 license. [CDA Telematics License](<docs/License.md>)
2 changes: 1 addition & 1 deletion telematic_system/docker-compose.cloud.servers.yml
@@ -27,7 +27,7 @@ services:
command: bash -c 'wait-for-it localhost:4222 && java -jar /telematic_cloud_messaging/app.jar'
env_file:
- .env

rosbag2_processing_service:
build:
context: ./telematic_historical_data_processing
2 changes: 1 addition & 1 deletion telematic_system/docker-compose.local.yml
@@ -135,7 +135,7 @@ services:
build:
context: "./telematic_cloud_messaging"
container_name: messaging_server
image: usdotfhwastoldev/telematic_local_messaging:develop
image: usdotfhwastoldev/telematic_cloud_messaging:develop
depends_on:
- nats
- mysqldb
8 changes: 4 additions & 4 deletions telematic_system/docker-compose.webapp.yml
@@ -5,7 +5,7 @@ services:
context: "./telematic_apps/web_app/server"
restart: always
container_name: web_server
image: usdotfhwastoldev/telematic_web_server:develop
image: usdotfhwastoldevdev/telematic_web_server:develop
logging:
options:
max-size: "10m"
@@ -35,11 +35,11 @@
- UPLOAD_HTTP_PORT=9011
- UPLOAD_TIME_OUT=3600000 # Milliseconds
- UPLOAD_MAX_FILE_SIZE=21474836480 #20 GB
- CONCURRENT_QUEUE_SIZE=5
- PART_SIZE=10485760
- NATS_SERVERS=<NATS_IP>:4222
- FILE_PROCESSING_SUBJECT=ui.file.processing
- FILE_EXTENSIONS=.mcap
command: bash -c '/app/service.sh'
volumes:
- /opt/apache2/grafana_htpasswd:/opt/apache2/grafana_htpasswd
147 changes: 147 additions & 0 deletions telematic_system/scripts/log_analysis/README.md
@@ -0,0 +1,147 @@
# Prerequisites
- Preferred operating system: Ubuntu 20.04 or newer
- Python environment setup
1. Install Python
```
sudo apt update
sudo apt install python3
```
2. Check the Python version
```
python3 --version
```
The recommended version is `3.10`.
3. Create a virtual environment. Navigate to the `cda-telematics/telematic_system/scripts/log_analysis` directory and run the command below:
```
python3 -m venv .venv
```
4. Activate the virtual environment.
```
source .venv/bin/activate
```
Note: You need to run this command to activate the virtual environment every time you open a new terminal.
- Install dependencies:
- Install debian packages
```
sudo apt install libcairo2-dev libxt-dev libgirepository1.0-dev
```
- Install python packages
```
pip install -r requirements.txt
```
- Clone repos:
- Clone the cda-telematics GitHub repository
```
git clone https://github.com/usdot-fhwa-stol/cda-telematics.git
cd cda-telematics
```
- Download `log_timesheet.csv`
Most of the python analysis scripts refer to `log_timesheet.csv` for the test runs and their durations. Since `log_timesheet.csv` is generated during the verification/validation testing, make sure to download the `log_timesheet.csv` file into this `log_analysis` folder before executing any python scripts.
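The exact contents of `log_timesheet.csv` come from that testing; as a purely hypothetical illustration (these column names are assumptions, not taken from the scripts), the file might look like:
```
test_case,run,start_time,end_time
T20,R6,2024-05-01 14:00:00,2024-05-01 14:10:00
T20,R7,2024-05-01 14:20:00,2024-05-01 14:28:00
```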


# Process V2xHub bridge log
1. Navigate to `cda-telematics/telematic_system/scripts/log_analysis` directory
2. Download v2xhub logs to the current folder.
3. Run the following command to generate data publishing metrics:
```
python3 parse_v2xhub_telematic_plugin_logs.py --log_file_path <input-file-name>
e.g.:
python3 parse_v2xhub_telematic_plugin_logs.py --log_file_path T20_R6-13_V2XHub.log
```
It will generate the parsed bridge logs as CSV files.

# Process Streets bridge log
1. Navigate to `cda-telematics/telematic_system/scripts/log_analysis` directory
2. Download streets bridge logs to the current folder.
3. Run the following command to generate data publishing metrics:
```
python3 parse_streets_bridge_logs.py <path-to-log-file>
```
It will generate the parsed bridge logs as CSV files.

# Process Cloud bridge log
1. Navigate to `cda-telematics/telematic_system/scripts/log_analysis` directory
2. Download cloud bridge logs to the current folder.
3. Run the following command to generate data publishing metrics:
```
python3 parse_cloud_bridge_logs.py <path-to-log-file>
e.g.:
python3 parse_cloud_bridge_logs.py T20_R6-9_carma_cloud.log
python3 parse_cloud_bridge_logs.py T20_R10-13_carma_cloud.log
```
It will generate the parsed bridge logs as CSV files.

# Process Vehicle bridge log
1. Navigate to `cda-telematics/telematic_system/scripts/log_analysis` directory
2. Download vehicle bridge logs to the current folder.
3. Run the following command to generate data publishing metrics:
```
python3 parse_vehicle_bridge_logs.py <path-to-log-file>
e.g.:
python3 parse_vehicle_bridge_logs.py T20_R6_R13_fusion/T20_R6_fusion.log
```
It will generate the parsed bridge logs as CSV files.

# Process Messaging Server log
1. Navigate to `cda-telematics/telematic_system/scripts/log_analysis` directory
2. Download messaging server logs to the current folder.
3. Run the following command to generate data publishing metrics:
```
python3 parse_messaging_server_logs.py <path-to-log-file>
e.g.:
python3 parse_messaging_server_logs.py T20_R6-13_messaging_server.log
```
It will generate the parsed messaging server delay and message drop logs as CSV files.

# Metric analysis
## Latency
1. Create a folder with the test case name in the current `log_analysis` folder.
For example, test case 20:
```
mkdir T20
```
2. Copy all the generated `T20_*_messaging_server_*_delay_parsed.csv` files to this new folder `T20`.
3. Run the latency plotting script to generate plots for the CSV files with delay metrics in folder `T20`:
```
python3 latencyPlotter.py <folder-name or test case name>
e.g.:
python3 latencyPlotter.py T20
```
The generated plots are saved into the `output` folder.
## Message loss
1. Create a folder named `<test case name>_message_drop` in the current `log_analysis` folder.
For example, test case 20:
```
mkdir T20_message_drop
```
2. Copy all generated `<test case name>_*_messaging_server_*_message_drop_parsed.csv` files to this new folder `<test case name>_message_drop`.
3. Copy all generated bridge CSV files into the same folder.
4. Run the message drop analysis script to analyze all files in the `<test case name>_message_drop` folder:
```
python3 get_message_drop.py <folder-name or test case name>_message_drop
e.g.:
python3 get_message_drop.py T20_message_drop
```
The generated result is similar to the following:
<br>
![Message_loss_result](https://github.com/user-attachments/assets/15fefacb-e929-4340-a0e3-6d7f6441ba8e)
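Under the hood, `get_message_drop.py` left-merges each bridge CSV with the messaging-server CSV on topic and payload timestamp, then counts bridge rows with no server-side match as drops. A minimal sketch of that core step (column names are taken from the script below; the file names here are placeholders):

```python
import pandas as pd

# Placeholder file names -- the real script discovers the CSVs in the target folder.
bridge_df = pd.read_csv("T20_Vehicle_parsed.csv")
server_df = pd.read_csv("T20_messaging_server_parsed.csv")

# Left merge keeps every bridge message and attaches the matching server record, if any.
combined = pd.merge(
    bridge_df, server_df,
    how="left",
    left_on=["Topic", "Payload Timestamp"],
    right_on=["Topic", "Message Time"],
)

# Bridge messages with no server-side match have a null Log_Timestamp(s) after the merge.
missing = combined["Log_Timestamp(s)"].isnull().sum()
total = len(combined)
print(f"Percentage of messages received: {(1 - missing / total) * 100:.2f}%")
```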

## Rosbag Processing time
1. Navigate to `cda-telematics/telematic_system/scripts/log_analysis` directory
2. Download historical data processing service logs to the current folder.
3. Run the following command to generate processing time metrics:
```
python3 parse_processing_service_logs.py <path-to-log-file>
e.g.:
python3 parse_processing_service_logs.py T19_R1_R5_rosbag2.log
```
It will print the time required to process each rosbag `.mcap` file and the average time across all files in the log.
74 changes: 23 additions & 51 deletions telematic_system/scripts/log_analysis/get_message_drop.py
@@ -9,17 +9,17 @@

import matplotlib.dates as mdates
import matplotlib.pyplot as plt

import os
warnings.filterwarnings("ignore")

'''
This script combines bridge logs with the messaging server logs to give the number of dropped messages from each unit.
Input: The script looks within the argument directory for csv files from the Messaging Server, Vehicle Bridge, Streets Bridge and Cloud Bridge logs,
which are parsed log output from the bridges, to calculate the number of dropped messages from each unit.
Required Input File Format:
The csv files to be read currently need to follow a specific format.
The messaging server parsed csv needs to start with the word "Messaging" separated by underscores
Streets bridge parsed csv file name needs to start with the word Streets separated by underscores(_)
Vehicle bridge parsed csv file name needs to start with the word Vehicle or BlueLexus or Fusion separated by underscores(_)
@@ -32,32 +32,31 @@ def combineFiles(log_dir):
path_obj = Path(log_dir)
print(log_dir)
filenames = [ f.name for f in path_obj.glob('*.csv')]

bridge_csv_exist = False
bridge_csv_regex = r'.*(Streets|Vehicle|BlueLexus|Fusion|V2xHub|Cloud).*'
bridge_csv_regex = r'.*(Streets|Vehicle|BlueLexus|Fusion|V2xHub|Cloud|Ros2).*'
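# Bridge CSVs are identified by a unit keyword anywhere in the file name (matched case-insensitively below).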
bridges_csv = []

messaging_server_csv_exist = False
messaging_server_csv = ""
messaging_server_csv = []

for filename in filenames:
if "Messaging" in filename:
if "messaging" in filename.lower():
messaging_server_csv_exist = True
messaging_server_csv = log_dir + "/" + filename
messaging_server_csv.append(log_dir + "/" + filename)

matched = re.match(bridge_csv_regex, filename, re.IGNORECASE)
if matched:
bridges_csv.append(log_dir + "/" + filename)
bridge_csv_exist = True

if not bridge_csv_exist:
sys.exit("Did not find any Vehicle/Streets/Cloud/BlueLexus/Fusion/V2xHub bridge csv logs in directory: " +log_dir+ "")

if not messaging_server_csv_exist:
sys.exit("Did not find any Messaging server csv logs in directory: "+log_dir+ "")


messaging_server_df = pd.read_csv(messaging_server_csv)
messaging_server_df = pd.concat(map(pd.read_csv, messaging_server_csv), ignore_index=True)
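# All messaging server CSVs are concatenated into one dataframe before being split per unit id.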
infrastructure_units = ['streets_id', 'cloud_id']

############# Load messaging server logs and get a list of dataframes for all unit ids
@@ -69,62 +68,35 @@ def combineFiles(log_dir):
if key not in infrastructure_units:
value = value[~value['Message Time'].isnull()]
# value = value.drop('Metadata',axis =1)


#Get dataframes from bridge logs
bridge_dfs = dict()
for bridge_csv in bridges_csv:
bridge_df = pd.read_csv(bridge_csv)
bridge_dfs.update(dict(tuple(bridge_df.groupby('Unit Id'))))

print(bridge_dfs.keys())

bridge_df = pd.concat(map(pd.read_csv, bridges_csv), ignore_index=True)
bridge_dfs = dict(tuple(bridge_df.groupby('Unit Id')))
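# One dataframe per unit id, built from all bridge CSVs combined.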

# Create combined dataframes from the bridge and messaging server logs
for key in bridge_dfs:
if key in messaging_server_dfs:

bridge_df_combined = pd.merge(bridge_dfs[key], messaging_server_dfs[key], how='left', left_on=['Topic','Payload Timestamp'], right_on = ['Topic','Message Time'])
bridge_df_combined.to_csv(log_dir + key + "_combined.csv")
if not os.path.exists("output"):
os.mkdir("output")
bridge_df_combined.to_csv("output/"+log_dir+"_"+ key + "_combined.csv")

bridge_missing_message_count = bridge_df_combined['Log_Timestamp(s)'].isnull().sum()
bridge_total_message_count = len(bridge_df_combined['Payload Timestamp'])
print("Message drop for unit: ", key)
print("\nMessage drop for unit: ", key)
print("Missing count: ", bridge_missing_message_count)
print("Total count: ", bridge_total_message_count)
print("Percentage of messages received",(1 - (bridge_missing_message_count/bridge_total_message_count))*100)


topics_with_empty_count = (bridge_df_combined['Message Time'].isnull().groupby([bridge_df_combined['Topic']]).sum().astype(int).reset_index(name='count'))
topics_with_empty_count = topics_with_empty_count.loc[~(topics_with_empty_count['count']==0)]

print("{} missed messages: ".format(key))
print(topics_with_empty_count)

# Plot vehicle data
bridge_df_combined = bridge_df_combined[bridge_df_combined['Message Time'].isnull()]
bridge_df_combined['Payload Timestamp'] = pd.to_datetime(bridge_df_combined['Payload Timestamp'], infer_datetime_format=True)
bridge_df_combined['Message Time'] = pd.to_datetime(bridge_df_combined['Message Time'], infer_datetime_format=True)


ax1 = plt.plot(bridge_df_combined['Topic'], bridge_df_combined['Payload Timestamp'], '|')

#Plot start and end lines
start_time = pd.to_datetime(messaging_server_dfs[key]['Log_Timestamp(s)'].iloc[0])
end_time = pd.to_datetime(messaging_server_dfs[key]['Log_Timestamp(s)'].iloc[-1])

plt.axhline(y = start_time, color = 'r', linestyle = '-', label = 'Test Start Time')
plt.axhline(y = end_time, color = 'r', linestyle = '-', label = 'Test End Time')

plt.title('{} : Topics against time of dropped message'.format(key))
plt.xlabel('Topics with dropped messages hours:mins:seconds')
plt.ylabel('Time of message drop')
xfmt = mdates.DateFormatter('%H:%M:%S')
plt.gcf().autofmt_xdate()
plt.show()
# plt.savefig('{}_Message_drop.png'.format(key))





