The AWS predictive maintenance solution for automotive fleets applies deep learning techniques to the common areas that drive vehicle failures, unplanned downtime, and repair costs. It serves as an initial building block to get you to a proof of concept in a short period of time. The solution contains data preparation and visualization functionality within Amazon SageMaker and allows you to train and optimize the hyperparameters of deep learning models for your dataset. You can use your own data or try the solution with the synthetic dataset provided as part of this solution. This version processes vehicle sensor data over time. A subsequent version will process maintenance record data.
You will need an AWS account to use this solution. Sign up for an account here.
To run this JumpStart 1P Solution and have the infrastructure deploy to your AWS account you will need to create an active SageMaker Studio instance (see Onboard to Amazon SageMaker Studio). When your Studio instance is Ready, use the instructions in SageMaker JumpStart to 1-Click Launch the solution.
The solution artifacts are included in this GitHub repository for reference. Note: Solutions are available in most regions, including us-west-2 and us-east-1.
- `cloudformation/`
  - `aws-fleet-predictive-maintenance.yaml`: Creates the AWS CloudFormation stack for the solution.
- `docs/`: Contains images for documenting the solution.
- `sagemaker/`
  - `requirements.txt`: Describes the Python package requirements of the Amazon SageMaker notebook instance.
  - `1_introduction.ipynb`: Provides a high-level look at the solution components.
  - `2_data_preparation.ipynb`: Prepares and/or generates a dataset for machine learning.
  - `3_data_visualization`: Visualizes the input data.
  - `4_model_training.ipynb`: Trains the model with Amazon SageMaker training jobs and Hyperparameter Optimization jobs.
  - `5_results_analysis.ipynb`: Analyzes the trained models and sets up an Amazon SageMaker endpoint.
  - `config/`
    - `config.yaml`: Stores and retrieves the project configuration.
  - `data/`: Provides a location to store input and generated data.
    - `generation/`
      - `fleet_statistics.csv`: Contains the sample mean and standard deviation of sensor log data for different vehicles.
  - `source/`
    - `config/`
      - `__init__.py`: Manages the config file.
    - `dataset/`
      - `dataset_generator.py`: Generates a dataset based on the mean and standard deviation of sensor log data.
    - `dl_utils/`
      - `dataset.py`: Contains a PyTorch dataset for deep learning.
      - `inference.py`: The entry point for the Amazon SageMaker endpoint.
      - `network.py`: Defines the neural network architecture.
      - `requirements.txt`: Provides the required packages for the Amazon SageMaker endpoint.
      - `stratified_sampler.py`: Sampler for the PyTorch dataset to create equal positive and negative samples. Obtained from torchsample.
    - `preprocessing/`
      - `dataframewriter.py`: Helper class to preprocess data frames.
      - `preprocessing.py`: Merges sensor and fleet information data.
    - `visualization/`
      - `model_visualisation_utils.py`: Utilities to help visualize trained models.
      - `plot_utils.py`: Utilities to visualize the input data.
    - `train.py`: Entry point for the Amazon SageMaker training job.
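For orientation, the sketch below shows one way a notebook could launch the training entry point above as an Amazon SageMaker training job using the SageMaker Python SDK. The hyperparameter names, S3 paths, and framework version are illustrative assumptions, not the solution's exact configuration (see `4_model_training.ipynb` for that).

```python
# Minimal sketch (not the solution's exact notebook code): launch source/train.py
# as a SageMaker PyTorch training job. Hyperparameter names, S3 URIs, and the
# framework version below are assumptions for illustration only.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # works inside SageMaker Studio/notebook instances

estimator = PyTorch(
    entry_point="train.py",          # entry point listed under source/ above
    source_dir="source",
    role=role,
    framework_version="1.5.0",       # assumed PyTorch version
    py_version="py3",
    instance_count=1,
    instance_type="ml.c5.xlarge",    # instance type referenced in the cost section
    hyperparameters={                # hypothetical names -- see 4_model_training.ipynb
        "epochs": 10,
        "learning-rate": 1e-3,
    },
    sagemaker_session=session,
)

# Channel name and S3 prefix are placeholders for the prepared dataset.
estimator.fit({"train": f"s3://{session.default_bucket()}/fleet-predict/train"})
```

The hyperparameter optimization jobs mentioned for `4_model_training.ipynb` can be layered on top of the same estimator with the SDK's `sagemaker.tuner.HyperparameterTuner`.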
As part of the solution, the following services are used:
- Amazon S3: Used to store datasets.
- Amazon SageMaker Notebook: Used to preprocess and visualize the data, and to train the deep learning model.
- Amazon SageMaker Endpoint: Used to deploy the trained model.
- An extract is created from the Fleet Management System containing vehicle data and sensor logs.
- An Amazon SageMaker model is deployed after the model is trained.
- The connected vehicle sends sensor logs to AWS IoT Core (alternatively via an HTTP interface).
- Sensor logs are persisted via Amazon Kinesis.
- Sensor logs are sent to AWS Lambda for analysis.
- AWS Lambda applies the prediction model to the sensor logs.
- Predictions are persisted in Amazon S3.
- Aggregate results are displayed on an Amazon QuickSight dashboard.
- Real-time notifications are sent to Amazon SNS.
- Amazon SNS sends notifications back to the connected vehicle.
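To make the streaming steps above concrete, here is a minimal sketch of what the Lambda piece of such a pipeline could look like: it decodes sensor records from a Kinesis event, calls a SageMaker endpoint for a prediction, writes the result to S3, and publishes a notification to SNS. The environment variable names, payload format, and alert threshold are all hypothetical; this function is not shipped with the solution.

```python
# Hypothetical Lambda handler illustrating the streaming flow above.
# Environment variable names, payload format, and the alert threshold are
# assumptions; the solution itself does not ship this function.
import base64
import json
import os

import boto3

runtime = boto3.client("sagemaker-runtime")
s3 = boto3.client("s3")
sns = boto3.client("sns")


def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers the sensor log payload base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")

        # Invoke the deployed SageMaker endpoint with the raw sensor log.
        response = runtime.invoke_endpoint(
            EndpointName=os.environ["ENDPOINT_NAME"],
            ContentType="application/json",
            Body=payload,
        )
        prediction = json.loads(response["Body"].read())

        # Persist the prediction in S3 alongside the source record's sequence number.
        s3.put_object(
            Bucket=os.environ["PREDICTIONS_BUCKET"],
            Key=f"predictions/{record['kinesis']['sequenceNumber']}.json",
            Body=json.dumps(prediction),
        )

        # Notify subscribers (e.g. the connected vehicle) of likely failures.
        if prediction.get("failure_probability", 0.0) > 0.5:
            sns.publish(
                TopicArn=os.environ["ALERT_TOPIC_ARN"],
                Message=json.dumps(prediction),
            )
```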
You are responsible for the cost of the AWS services used while running this solution.
As of July 13th 2020 in the US West (Oregon) region, the cost to:
- train the model using Amazon SageMaker training job on ml.c5.xlarge is ~$0.02.
- host the model using Amazon SageMaker Endpoint on ml.c5.xlarge is $0.119 per hour.
- run an Amazon SageMaker notebook instance is $0.0582 per hour.
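For example, at these rates, leaving both the endpoint and the notebook instance running for a full 24 hours would cost roughly 24 × ($0.119 + $0.0582) ≈ $4.25, before any training jobs.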
All prices are subject to change. See the pricing webpage for each AWS service you will be using in this solution.
When you've finished with this solution, make sure that you delete all unwanted AWS resources. AWS CloudFormation can be used to automatically delete all standard resources that have been created by the solution and notebook. Go to the AWS CloudFormation Console, and delete the parent stack. Choosing to delete the parent stack will automatically delete the nested stacks.
Caution: You need to manually delete any extra resources that you may have created in these notebooks. Some examples include extra Amazon S3 buckets (in addition to the solution's default bucket) and extra Amazon SageMaker endpoints (using a custom name).
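If you prefer to script the cleanup, a boto3 sketch along these lines covers both the stack and any custom endpoint; the stack and endpoint names below are placeholders you would replace with your own.

```python
# Hypothetical cleanup helper; replace the stack and endpoint names with the
# ones used in your deployment before running it.
import boto3

cloudformation = boto3.client("cloudformation")
sagemaker_client = boto3.client("sagemaker")

# Deleting the parent stack also deletes its nested stacks.
cloudformation.delete_stack(StackName="sagemaker-fleet-predictive-maintenance")

# Endpoints created manually with a custom name are not tracked by the stack.
sagemaker_client.delete_endpoint(EndpointName="my-custom-fleet-endpoint")
```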
Our solution is easily customizable. You can customize the:
- Input data visualizations.
- Machine learning.
- Dataset processing.
- See `sagemaker/1_introduction.ipynb` for how to define the config file.
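As a quick illustration of the config-driven setup, a notebook cell might read the project configuration like this; the key name shown is made up, and the real schema is described in `sagemaker/1_introduction.ipynb`.

```python
# Illustrative only: load the project configuration used across the notebooks.
# The "fleet_info_fn" key below is a hypothetical example; see 1_introduction.ipynb
# for the actual schema expected by the solution.
import yaml

with open("config/config.yaml") as f:
    config = yaml.safe_load(f)

print(config.get("fleet_info_fn", "<not set>"))
```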
Does the solution have a way to adapt to input data since every vehicle has different telemetry on it?
The solution is based on machine learning and deep learning models and accepts a wide variety of input data, including any time-varying sensor data. You can fine-tune the provided model to the frequency and type of data that you have.
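For instance, if two vehicle types report telemetry at different rates, a preprocessing step along these lines could bring them onto a common frequency before training or inference; the column names and one-minute interval here are arbitrary examples, not something the solution prescribes.

```python
# Example only: resample irregular sensor logs onto a common 1-minute grid so
# that fleets with different telemetry rates share one model input format.
# Column names ("timestamp", "engine_temp", "vibration") are made up.
import pandas as pd

logs = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["2020-07-13 10:00:05", "2020-07-13 10:00:35", "2020-07-13 10:02:10"]
        ),
        "engine_temp": [88.0, 90.5, 93.0],
        "vibration": [0.11, 0.13, 0.19],
    }
)

resampled = (
    logs.set_index("timestamp")
    .resample("1min")      # unify the sampling frequency
    .mean()
    .interpolate()         # fill gaps left by slower-reporting vehicles
)
print(resampled)
```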
The solution will support Amazon SageMaker Neo to deploy inference at the edge in real-time inside the vehicle.
- Amazon SageMaker Developer Guide
- Amazon SageMaker Python SDK Documentation
- AWS CloudFormation User Guide
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.