
Intel® Edge AI for IoT Developers Nanodegree Program

N.B.: Please don't copy the assignment and quiz solutions. Try to solve the problems yourself.


Leverage the Intel® Distribution of OpenVINO™ Toolkit to fast-track development of high-performance computer vision and deep learning inference applications, and run pre-trained deep learning models for computer vision on-premise. You will identify key hardware specifications of various hardware types (CPU, VPU, FPGA, and Integrated GPU), and utilize the Intel® DevCloud for the Edge to test model performance on the various hardware types. Finally, you will use software tools to optimize deep learning models to improve performance of Edge AI systems. - Source

Requirements

Core Curriculum

1. Welcome to the Program

Lesson-1: Nanodegree Program Introduction

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Welcome to Udacity | Welcome note, Technology evolution | ------/------ |
| 2 | Welcome to the Nanodegree Program Experience | Udacity mentor support, Helping tools | ------/------ |
| 3 | How to Succeed | Introduction of the instructor, Goals (short or long term), Accountability, Learning strategies, Technical advice | ------/------ |
| 4 | Welcome to Intel® Edge AI for IoT Developers | Design, test, and deploy an edge AI application | ------/------ |
| 5 | Prerequisites & Other Requirements | Python, Training and deploying deep learning models, CLI, OpenCV | ------/------ |
| 6 | Notebooks and Workspaces | Jupyter Notebooks, Jupyter Graffiti | ------/------ |
| 7 | Graffiti Tutorial | Graffiti Tutorial | ------/------ |

2. Edge AI Fundamentals with OpenVINO

Lesson-1: Introduction to AI at the Edge

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Instructor Intro | Instructor pathway & introduction | ------/------ |
| 2 | What is AI at the Edge? | Edge means local (or near-local) processing, Less impact on a network | ------/------ |
| 3 | Why is AI at the Edge Important? | Network communication, Real-time processing, Sensitive data, Optimization software | ------/------ |
| 4 | Applications of AI at the Edge | Endless possibilities, IoT devices, Self-driving, Animal tracking | ------/------ |
| 5 | Historical Context | Historical background of edge applications | [1] |
| 6 | Course Structure | Pre-trained models, Model Optimizer, Inference Engine, Deploying at the edge (handling input streams, processing model outputs, MQTT) | [2] |
| 7 | Why Are the Topics Distinct? | Train a model -> Model Optimizer -> IR format -> Inference Engine -> Edge application | ------/------ |
| 8 | Relevant Tools and Prerequisites | Basics of computer vision and how AI models work, Python or C++, Hardware & software requirements | [3 - 5] |
| 9 | What You Will Build | Build and deploy a People Counter App at the Edge | ------/------ |
| 10 | Recap | Basics of the edge, Importance of the edge and its history, Edge applications | ------/------ |

Lesson-2: Leveraging Pre-Trained Models

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Introduction | Lesson objective | ------/------ |
| 2 | The OpenVINO™ Toolkit | An open source library useful for edge deployment due to its performance maximizations and pre-trained models | ------/------ |
| 3 | Pre-Trained Models in OpenVINO™ | Model Zoo, in which the Free Model Set contains pre-trained models already converted using the Model Optimizer | [6] |
| 4 | Types of Computer Vision Models | Classification, Detection, Segmentation, etc. | [7] |
| 5 | Case Studies in Computer Vision | SSD, ResNet and MobileNet | ------/------ |
| 6 | Available Pre-Trained Models in OpenVINO™ | Public Model Set, Free Model Set | [6] |
| 7 | Exercise: Loading Pre-Trained Models | Find the right models, Download the models, Verify the downloads | ------/------ |
| 8 | Solution: Loading Pre-Trained Models | Choosing models, Downloading models, Verifying downloads | ------/------ |
| 9 | Optimizations on the Pre-Trained Models | Different precisions of the models | ------/------ |
| 10 | Choosing the Right Model for Your App | Try out different models for the application and a single use case | ------/------ |
| 11 | Pre-processing Inputs | Check the related documentation, Color channel order, Input and output parameters (see the sketch after this table) | ------/------ |
| 12 | Exercise: Pre-processing Inputs | Build the preprocess_input file for processing the input parameters | ------/------ |
| 13 | Solution: Pre-processing Inputs | Solution for pre-processing inputs | ------/------ |
| 14 | Handling Network Outputs | Handling different output formats, e.g. bounding boxes and heatmaps | [8 - 9] |
| 15 | Running Your First Edge App | Load a pre-trained model into the Inference Engine and call functions to preprocess and handle the output in the appropriate locations | ------/------ |
| 16 | Exercise: Deploy An App at the Edge | Implement the handling of the outputs of our three models | ------/------ |
| 17 | Solution: Deploy An App at the Edge | Car Meta Model, Pose Estimation and Text Detection Model output handling | ------/------ |
| 18 | Recap | Basics of the Intel® Distribution of OpenVINO™ Toolkit, Different CV model types, Available pre-trained models, Choosing the right pre-trained model | ------/------ |
| 19 | Lesson Glossary | Short note of the lesson | ------/------ |
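
The pre-processing step covered in this lesson usually amounts to matching the model's documented input layout. Below is a minimal sketch, assuming a model that expects BGR images in NCHW layout; the height and width values are hypothetical and in practice come from the model's documentation or input blob shape.

```python
import cv2
import numpy as np

def preprocess_input(image, height, width):
    """Resize and reorder an image into the NCHW, BGR layout
    expected by many OpenVINO pre-trained models."""
    frame = cv2.resize(image, (width, height))   # resize to the model's input size
    frame = frame.transpose((2, 0, 1))           # HWC -> CHW
    frame = frame.reshape(1, 3, height, width)   # add the batch dimension
    return frame

# Example with a hypothetical 256x456 input shape:
# image = cv2.imread("sample.png")               # OpenCV loads images as BGR
# input_blob = preprocess_input(image, 256, 456)
```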

Lesson-3: The Model Optimizer

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Introduction | Basics of the Model Optimizer, Optimization techniques & impact, Supported frameworks, Custom layers | ------/------ |
| 2 | The Model Optimizer | The Model Optimizer process, Local configuration | [10] |
| 3 | Optimization Techniques | Quantization, Freezing, Fusion | [11 - 12] |
| 4 | Supported Frameworks | Caffe, TensorFlow, MXNet, ONNX, Kaldi | [13 - 17] |
| 5 | Intermediate Representations | OpenVINO™ Toolkit's standard structure and naming for neural network architectures, an XML file and a binary file | [18 - 20] |
| 6 | Using the Model Optimizer with TensorFlow Models | Using the Model Optimizer with TensorFlow models (see the sketch after this table) | [21 - 22] |
| 7 | Exercise: Convert a TF Model | Exercise on converting a TF model | ------/------ |
| 8 | Solution: Convert a TF Model | Solution for converting a TF model | ------/------ |
| 9 | Using the Model Optimizer with Caffe Models | No freezing of the model is needed; feed both the .caffemodel file and a .prototxt file | [23] |
| 10 | Exercise: Convert a Caffe Model | Exercise on converting a Caffe model | ------/------ |
| 11 | Solution: Convert a Caffe Model | Solution for converting a Caffe model | ------/------ |
| 12 | Using the Model Optimizer with ONNX Models | Model Optimizer with ONNX models, PyTorch to ONNX | [24 - 26] |
| 13 | Exercise: Convert an ONNX Model | Exercise on converting an ONNX model | ------/------ |
| 14 | Solution: Convert an ONNX Model | Solution for converting an ONNX model | ------/------ |
| 15 | Cutting Parts of a Model | Mostly applicable to TensorFlow models, Two main command-line arguments for cutting a model: --input and --output | [27] |
| 16 | Supported Layers | Supported and unsupported layers | [28] |
| 17 | Custom Layers | Register the custom layers | [29 - 30] |
| 18 | Exercise: Custom Layers | Example custom layer: the hyperbolic cosine (cosh) function | ------/------ |
| 19 | Recap | Basics of the Model Optimizer, Optimization techniques & impact, Supported frameworks, Custom layers | ------/------ |
| 20 | Lesson Glossary | Short note of the lesson | ------/------ |
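
A rough illustration of the conversion step discussed above: a frozen TensorFlow graph is passed to the Model Optimizer script shipped with the toolkit, which emits the .xml/.bin IR pair. The install path and the model file name below are assumptions; the exact flags depend on the model being converted.

```python
import subprocess

# Hypothetical 2020-era install path; adjust to your OpenVINO installation
MO = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

subprocess.run([
    "python", MO,
    "--input_model", "frozen_inference_graph.pb",  # hypothetical frozen TF graph
    "--data_type", "FP16",                         # precision of the generated IR
    "--output_dir", "ir_model",
], check=True)
# On success this produces the model's .xml (architecture) and .bin (weights) files.
```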

Lesson-4: The Inference Engine

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Introduction | Inference Engine, Supported devices, Feeding an Intermediate Representation to the Inference Engine, Making inference requests, Handling results | ------/------ |
| 2 | The Inference Engine | Runs the actual inference on a model, Only works with Intermediate Representations | [31] |
| 3 | Supported Devices | CPUs (including integrated graphics processors), GPUs, FPGAs, and VPUs | [32 - 33] |
| 4 | Using the Inference Engine with an IR | IECore, IENetwork, Check supported layers, CPU extension (see the sketch after this table) | [34 - 36] |
| 5 | Exercise: Feed an IR to the Inference Engine | Exercise on feeding an IR to the Inference Engine | ------/------ |
| 6 | Solution: Feed an IR to the Inference Engine | Solution for feeding an IR to the Inference Engine | ------/------ |
| 7 | Sending Inference Requests to the IE | ExecutableNetwork, Two types of inference requests: synchronous and asynchronous | [37 - 38] |
| 8 | Asynchronous Requests | Synchronous: only one frame is processed at once, Asynchronous: other tasks may continue while waiting on the IE to respond | [39 - 41] |
| 9 | Exercise: Inference Requests | Exercise on inference requests (asynchronous & synchronous) | ------/------ |
| 10 | Solution: Inference Requests | Synchronous and asynchronous solutions | ------/------ |
| 11 | Handling Results | InferRequest attributes - namely, inputs, outputs and latency | [42] |
| 12 | Integrating into Your App | Adding some further customization to your app | [43 - 45] |
| 13 | Exercise: Integrate into an App | Exercise on integrating into an app | ------/------ |
| 14 | Solution: Integrate into an App | Solution for integrating into an app | ------/------ |
| 15 | Behind the Scenes of Inference Engine | The Inference Engine is built and optimized in C++, The exact optimizations differ by device | [46 - 47] |
| 16 | Recap | Inference Engine, Supported devices, Feeding an Intermediate Representation to the Inference Engine, Making inference requests, Handling results | ------/------ |
| 17 | Lesson Glossary | Short note of the lesson | ------/------ |
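
A minimal sketch of the Inference Engine workflow from this lesson, assuming the 2020.1-era `openvino.inference_engine` Python API and a hypothetical `model.xml`/`model.bin` IR pair; it shows both a synchronous and an asynchronous request.

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # hypothetical IR
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=1)

input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))

# Stand-in for a pre-processed frame with the network's input shape
n, c, h, w = net.inputs[input_blob].shape
frame = np.zeros((n, c, h, w), dtype=np.float32)

# Synchronous request: blocks until the result is ready
result = exec_net.infer({input_blob: frame})[output_blob]

# Asynchronous request: start it, keep doing other work, then wait
exec_net.start_async(request_id=0, inputs={input_blob: frame})
if exec_net.requests[0].wait(-1) == 0:            # 0 means the request succeeded
    result = exec_net.requests[0].outputs[output_blob]
```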

Lesson-5: Deploying an Edge App

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Introduction | OpenCV, Input streams in OpenCV, Processing model outputs for additional useful information, MQTT and its use with IoT devices, Performance basics | ------/------ |
| 2 | OpenCV Basics | Uses of OpenCV, Useful OpenCV functions: VideoCapture, resize, cvtColor, rectangle, imwrite | [48] |
| 3 | Handling Input Streams | Open & read a video, Closing the capture (see the sketch after this table) | ------/------ |
| 4 | Exercise: Handling Input Streams | Handle image, video or webcam, Resize, Add Canny edge detection to the frame, Write out the frame | ------/------ |
| 5 | Solution: Handling Input Streams | Handle image, video or webcam, Resize, Add Canny edge detection to the frame, Write out the frame | ------/------ |
| 6 | Gathering Useful Information from Model Outputs | Information from one model could even be further used in an additional model | ------/------ |
| 7 | Exercise: Process Model Outputs | Exercise on processing model outputs | ------/------ |
| 8 | Solution: Process Model Outputs | Solution for processing model outputs | ------/------ |
| 9 | Intro to MQTT | Stands for MQ Telemetry Transport, Lightweight publish/subscribe architecture, Port 1883 | [49 - 50] |
| 10 | Communicating with MQTT | MQTT Python library: paho-mqtt, Publishing and subscribing parameters | [51 - 52] |
| 11 | Streaming Images to a Server | FFmpeg ("fast forward" MPEG), Setting up FFmpeg, Sending frames to FFmpeg | [53 - 55] |
| 12 | Handling Statistics and Images from a Node Server | A Node server can be used to handle the data coming in from the MQTT and FFmpeg servers | [56] |
| 13 | Exercise: Server Communications | Exercise on server communication using Node.js and MQTT | ------/------ |
| 14 | Solution: Server Communications | Solution for server communication using Node.js and MQTT | ------/------ |
| 15 | Analyzing Performance Basics | Don't skip past the accuracy of your edge AI model, Lighter, quicker models, Lower precision | [57] |
| 16 | Model Use Cases | Figure out additional use cases for a given model or application | [58] |
| 17 | Concerning End User Needs | Consider the project needs | ------/------ |
| 18 | Recap | OpenCV, Input streams in OpenCV, Processing model outputs for additional useful information, MQTT and its use with IoT devices, Performance basics | ------/------ |
| 19 | Lesson Glossary | Short note of the lesson | ------/------ |
| 20 | Course Recap | Basics of AI at the edge, Pre-trained models, the Model Optimizer, Inference Engine, Deploying an app at the edge | [59] |
| 21 | Partner with Intel | Benefits of partnering with Intel | ------/------ |
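
A minimal sketch of the deployment plumbing from this lesson: reading an input stream with OpenCV, publishing statistics over MQTT with `paho-mqtt`, and writing frames to stdout for FFmpeg to re-stream. The broker address, topic name, and frame size are assumptions, not the course's exact configuration.

```python
import json
import sys

import cv2
import paho.mqtt.client as mqtt

# Hypothetical local broker on the default MQTT port 1883
client = mqtt.Client()
client.connect("localhost", 1883, keepalive=60)

cap = cv2.VideoCapture("input_video.mp4")  # a file path, or 0 for a webcam
while cap.isOpened():
    flag, frame = cap.read()
    if not flag:
        break
    frame = cv2.resize(frame, (768, 432))

    # ... run inference on the frame and derive a statistic, e.g. a people count
    people_count = 0  # placeholder value

    # Publish statistics for subscribers such as a Node server
    client.publish("person", json.dumps({"count": people_count}))

    # Pipe the raw frame to FFmpeg, which serves it to the UI
    sys.stdout.buffer.write(frame)
    sys.stdout.flush()

cap.release()
client.disconnect()
```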

3. Choosing the Right Hardware

Lesson-1: Introduction to Hardware at the Edge

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Instructor Introduction | Instructors - Vaidheeswaran Archana, Stewart Christie | ------/------ |
| 2 | Course Overview | The right hardware for a particular need, Four main hardware types | [60] |
| 3 | Changes in OpenVINO 2020.1 | Differences between 2020.1 and the older 2019 R3.1 version | [61] |
| 4 | Lesson Overview | Choosing the right hardware, Product development flow, Basic terminology, Using Intel DevCloud | ------/------ |
| 5 | Why is Choosing the Right Hardware Important? | Project steps for an edge application from design to deployment | ------/------ |
| 6 | Design of Edge AI Systems | Stakeholders in designing edge AI systems, Basic approach to developing a product | ------/------ |
| 7 | Analyze | Get all of the requirements and constraints involved | ------/------ |
| 8 | Design | The different components and how the data will flow from one to the next | ------/------ |
| 9 | Develop | Hardware & software prototype development | ------/------ |
| 10 | Test and Deploy | Test the design and the selected hardware, Simulated edge devices, Debugging | ------/------ |
| 11 | Basic Terminology | CPU, GPU, AI accelerators, VPU, NCS-2, FPGA, HDDL/VAD, TDP, HETERO, SYNC & ASYNC mode | ------/------ |
| 12 | Intel DevCloud | High-level overview of the Intel DevCloud for the Edge | [60] |
| 13 | Updating Your Workspace | Getting New Content & Resetting Data options in the workspace | ------/------ |
| 14 | Walkthrough: Using Intel DevCloud | Requesting an edge node with an Intel i5 CPU and loading a model on the CPU (see the sketch after this table) | ------/------ |
| 15 | Exercise: Using Intel DevCloud | Exercise on Intel DevCloud - load a model, run inference, queue a job | ------/------ |
| 16 | Lesson Review | Choosing the right hardware, Product development flow, Basic terminology, Using Intel DevCloud | ------/------ |
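
DevCloud jobs are queued rather than run interactively: a shell job script is submitted to an edge node and the results are collected once the job finishes. A hedged sketch using the standard PBS/Torque commands is below; the job script name, node property label, and script arguments are all hypothetical (real node labels come from the DevCloud's node list).

```python
import subprocess

# Submit a job script to a DevCloud edge node via the PBS/Torque queue.
job = subprocess.run(
    ["qsub", "load_model_job.sh",         # hypothetical job script
     "-l", "nodes=1:i5-6500te",           # request one node with this property label
     "-F", "models/person-detection CPU"  # arguments forwarded to the job script
    ],
    capture_output=True, text=True, check=True,
)
job_id = job.stdout.strip()
print("Submitted job:", job_id)

# Check the queue; the job's stdout/stderr land in files named after the job ID
subprocess.run(["qstat", job_id])
```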

Lesson-2: CPUs and Integrated GPUs

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Lesson Overview | CPUs and integrated GPUs overview | ------/------ |
| 2 | CPU Basics | Basic description of CPUs, Multiple cores, IGPU | ------/------ |
| 3 | Threads and Processes | Basic description of threads and processes | ------/------ |
| 4 | Multithreading and Multiprocessing | Differences between multithreading and multiprocessing | [62 - 63] |
| 5 | Introduction to Intel Processors | Different types of processors, Cost and performance across different processor types, Thermal Design Power (TDP) | [64] |
| 6 | Intel CPU Architecture | Key characteristics of Intel processors, Compatibility, Multicore, Hyperthreading, Instruction sets | ------/------ |
| 7 | CPU Specifications (Part 1) | Clock speed, Number of cores | ------/------ |
| 8 | CPU Specifications (Part 2) | Important factors to consider when looking at CPU specifications | ------/------ |
| 9 | Exercise: CPU Scenario | Case study on a CPU scenario | [65] |
| 10 | Updating Your Workspace | Getting new content, Resetting data | ------/------ |
| 11 | Walkthrough: CPU and the DevCloud | Walkthrough on the CPU and the DevCloud | ------/------ |
| 12 | Exercise: CPU and the DevCloud | Exercise on the CPU and the DevCloud | ------/------ |
| 13 | Integrated GPU (IGPU) | Execution units, Slice, Unslice, Key characteristics of integrated GPUs | ------/------ |
| 14 | Walkthrough: IGPU and the DevCloud | Walkthrough on the IGPU and the DevCloud | ------/------ |
| 15 | IGPU and Batch Processing | Relation between the IGPU and batch processing (see the sketch after this table) | ------/------ |
| 16 | Exercise: IGPU Scenario | Case study on an IGPU scenario | ------/------ |
| 17 | Exercise: IGPU and the DevCloud | Exercise on the IGPU and the DevCloud | ------/------ |
| 18 | Lesson Review | Intel processors, Intel architecture, Applications of CPUs, DevCloud, IGPU | ------/------ |
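
Batching is the main lever mentioned above for integrated GPUs: several frames are grouped into one request so the execution units stay busy. A hedged sketch with the 2020.1-era API; the IR files and the batch size of 32 are illustrative.

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # hypothetical IR

# Group several frames per request; integrated GPUs often gain throughput this way
net.batch_size = 32

# "GPU" targets the integrated GPU plugin
exec_net = ie.load_network(network=net, device_name="GPU")
print("Loaded on the IGPU with batch size", net.batch_size)
```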

Lesson-3: VPUs

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Lesson Overview | Architecture of VPUs, Characteristics of a particular VPU, Neural Compute Stick 2, Multi-Device Plugin | ------/------ |
| 2 | Introduction to VPUs | Small, low-cost, low-power devices that can dramatically improve the performance of a system | ------/------ |
| 3 | Architecture of VPUs | Interface unit, Imaging accelerators, Neural compute engine, Vector processors, On-chip CPUs | ------/------ |
| 4 | Myriad X Characteristics | Neural compute engine, Imaging/hardware accelerators, On-chip memory, Vector processors, Energy consumption | [66] |
| 5 | Intel Neural Compute Stick 2 | Key features of the Intel NCS2, FPS vs. power trade-off | ------/------ |
| 6 | Exercise: VPU Scenario | Case study on a VPU scenario | ------/------ |
| 7 | Updating Your Workspace | Getting new content, Resetting data | ------/------ |
| 8 | Walkthrough: VPU and the DevCloud | Walkthrough on the VPU and the DevCloud | ------/------ |
| 9 | Exercise: VPU and the DevCloud | Exercise on the VPU and the DevCloud | ------/------ |
| 10 | Multi-Device Plugin | The Multi-Device Plugin and its benefits (see the sketch after this table) | ------/------ |
| 11 | Walkthrough: Multi-Device Plugin and the DevCloud | Walkthrough on the Multi-Device Plugin and the DevCloud | ------/------ |
| 12 | Exercise: Multi Device Plugin on DevCloud | Exercise on the Multi-Device Plugin and the DevCloud | ------/------ |
| 13 | Lesson Review | Architecture of VPUs, Characteristics of a particular VPU, Neural Compute Stick 2, Multi-Device Plugin | ------/------ |
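
The Multi-Device Plugin mentioned above is selected purely through the device name when loading the network. A hedged sketch with the 2020.1-era API; the IR files and the request count are illustrative.

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # hypothetical IR

# MULTI spreads parallel inference requests across the listed devices,
# here a Neural Compute Stick 2 (MYRIAD) and the CPU.
exec_net = ie.load_network(network=net,
                           device_name="MULTI:MYRIAD,CPU",
                           num_requests=4)
```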

Lesson-4: FPGAs

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Lesson Overview | Architecture of FPGAs, Programming FPGAs, Turning your FPGA into an AI accelerator, Specifications of FPGAs | ------/------ |
| 2 | Introduction to FPGAs | Basic description of ASICs and FPGAs | ------/------ |
| 3 | Architecture of FPGAs | The components of a tile or Adaptive Logic Module (ALM) | ------/------ |
| 4 | Programming FPGAs | Different ways to program an FPGA, Register Transfer Level (RTL), Converting an FPGA into an AI accelerator | ------/------ |
| 5 | FPGA Specifications | High performance, Low latency, Flexibility, Large networks, Robustness, Long lifespan | ------/------ |
| 6 | Intel Vision Accelerator Design | Important considerations for running inference on one using the Intel DevCloud | ------/------ |
| 7 | Exercise: FPGA Scenario | Case study on an FPGA scenario | ------/------ |
| 8 | Updating Your Workspace | Getting new content, Resetting data | ------/------ |
| 9 | Walkthrough: FPGA and the DevCloud | Walkthrough on the FPGA and the DevCloud | ------/------ |
| 10 | Exercise: FPGA and the DevCloud | Exercise on the FPGA and the DevCloud | ------/------ |
| 11 | Heterogeneous Plugin | Heterogeneous (HETERO) plugin benefits (see the sketch after this table) | [67 - 68] |
| 12 | Exercise: Heterogeneous Plugin on DevCloud | Exercise on the Heterogeneous Plugin on the DevCloud | ------/------ |
| 13 | Lesson Review | Architecture of FPGAs, Programming FPGAs, Turning your FPGA into an AI accelerator, Specifications of FPGAs | ------/------ |
| 14 | Course Review | CPU, GPU, VPU, FPGA, Intel DevCloud | ------/------ |
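
The HETERO plugin is also selected through the device name, but instead of spreading requests it splits a single network across devices in priority order. A hedged sketch with the 2020.1-era API and a hypothetical IR pair:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # hypothetical IR

# Layers unsupported by the FPGA plugin automatically fall back to the CPU
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")
```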

4. Optimization Techniques and Tools

Lesson-1: Introduction to Software Optimization

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Instructor Introduction | Instructor - Soham Chatterjee | ------/------ |
| 2 | Course Overview | Performance metrics, Reducing model operations, Reducing model size, Other optimization techniques | ------/------ |
| 3 | Installing OpenVINO | OpenVINO installation instructions | ------/------ |
| 4 | Lesson Overview | Lesson objective | ------/------ |
| 5 | What is Software Optimization and Why Does it Matter? | Hardware optimization, Software optimization | ------/------ |
| 6 | Types of Software Optimization | Reduce the size of the model, Reduce the number of operations | ------/------ |
| 7 | Performance Metrics | Software performance, Hardware performance, Recall, Precision, Latency and throughput (see the sketch after this table) | ------/------ |
| 8 | Some Other Performance Metrics | FLOPs, FLOPS, MACs | ------/------ |
| 9 | When do we do Software Optimization? | Scenario-based software optimization | [69] |
| 10 | Lesson Review | Performance metrics, Reducing model operations, Reducing model size, Other optimization techniques | ------/------ |
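
Latency and throughput, the two runtime metrics named above, can be measured with nothing more than a timer around repeated inferences. A small self-contained sketch; the dummy inference function stands in for a real call such as `exec_net.infer(...)`.

```python
import time
import numpy as np

def measure_latency_and_throughput(infer_fn, sample, runs=100):
    """Report average latency (s per inference) and throughput (inferences per s)."""
    infer_fn(sample)                      # warm-up so setup cost is not counted

    start = time.perf_counter()
    for _ in range(runs):
        infer_fn(sample)
    elapsed = time.perf_counter() - start
    return elapsed / runs, runs / elapsed

# Dummy "model" standing in for a real inference call
dummy_infer = lambda x: np.tanh(x).sum()
latency, throughput = measure_latency_and_throughput(
    dummy_infer, np.random.rand(1, 3, 224, 224))
print(f"latency: {latency * 1000:.2f} ms, throughput: {throughput:.1f} inferences/s")
```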

Lesson-2: Reducing Model Operations

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Lesson Overview | How to measure the performance of our model (FLOPs and MACs metrics), How to use efficient layers in our model, How to use OpenVINO to measure layerwise performance, Model pruning | ------/------ |
| 2 | Calculating Model FLOPs: Dense Layers | Calculating the number of MACs and FLOPs in dense layers | ------/------ |
| 3 | Calculating Model FLOPS: Convolutional Layers | Convolutional layers (kernel, output size), Calculating MACs and FLOPs, FLOPS in convolutional layers (see the sketch after this table) | [70] |
| 4 | Calculate the FLOPs in a model | Calculate the FLOPs in a model (CNN, FC layers) | ------/------ |
| 5 | Using Efficient Layers: Pooling Layers | Pooling layers, Types of pooling layers, Calculating the FLOPs in pooling layers | ------/------ |
| 6 | Exercise: Pooling Performance | Exercise on pooling performance | ------/------ |
| 7 | Using Efficient Layers: Separable Convolutions | Separable convolutions - depthwise layer, pointwise layer | ------/------ |
| 8 | Exercise: Separable Convolutions Performance | Exercise on separable convolutions performance | ------/------ |
| 9 | Measuring Layerwise Performance | Measuring layerwise performance | ------/------ |
| 10 | Exercise: Measuring Layerwise Performance | Exercise on measuring layerwise performance | ------/------ |
| 11 | Model Pruning | Three basic steps in model pruning | ------/------ |
| 12 | Lesson Review | How to measure the performance of our model (FLOPs and MACs metrics), How to use efficient layers in our model, How to use OpenVINO to measure layerwise performance, Model pruning | ------/------ |
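
A small worked example of the MAC/FLOP counting above: a dense layer performs `inputs x outputs` MACs, a standard convolution performs `k_h x k_w x C_in x H_out x W_out x C_out` MACs, and each MAC counts as roughly two FLOPs (one multiply plus one add). The layer shapes below are made up for illustration.

```python
def dense_flops(n_in, n_out):
    """FLOPs for a fully connected layer: one multiply + one add per MAC."""
    macs = n_in * n_out
    return 2 * macs

def conv_flops(k_h, k_w, c_in, h_out, w_out, c_out):
    """FLOPs for a standard 2D convolution."""
    macs = k_h * k_w * c_in * h_out * w_out * c_out
    return 2 * macs

# Hypothetical layers: a 3x3 conv producing a 112x112x64 output from 3 input
# channels, followed by a 1024 -> 10 classifier head.
print(conv_flops(3, 3, 3, 112, 112, 64))   # 43,352,064 FLOPs (~43.4 MFLOPs)
print(dense_flops(1024, 10))               # 20,480 FLOPs
```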

Lesson-3: Reducing Model Size

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Lesson Overview | Quantization, DL Workbench, Weight sharing, Knowledge distillation | ------/------ |
| 2 | Introduction to Quantization | Quantization, Affine transformation, Binary and ternary NNs | [71] |
| 3 | Benchmarking Model Performance | Install and run DL Workbench | [72] |
| 4 | Exercise: Benchmarking Model Performance | Exercise on benchmarking model performance | [73] |
| 5 | Advanced Benchmarking | Perform many tests with various configurations of batch size and streams | ------/------ |
| 6 | Exercise: Advanced Benchmarking | Run a range of performance benchmarks on a model of our choice | ------/------ |
| 7 | How Quantization is Done | Weight quantization, Procedure and formula of quantization (see the sketch after this table) | [74] |
| 8 | Quantizing a Model using DL Workbench | Using the DL Workbench to quantize a model | [75 - 76] |
| 9 | Exercise: Quantizing a Model Using DL Workbench | Exercise on quantizing a model using DL Workbench | ------/------ |
| 10 | Model Compression | Model compression techniques | ------/------ |
| 11 | Knowledge Distillation | Transfer the knowledge learned by a large, accurate model (the teacher model) to a smaller and computationally less expensive model (the student model) | ------/------ |
| 12 | Lesson Review | Quantization, DL Workbench, Weight sharing, Knowledge distillation | ------/------ |
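
A minimal numpy sketch of the affine quantization idea referenced above: map FP32 weights onto 8-bit integers with a scale and zero point, then dequantize to inspect the approximation error. This illustrates the formula only, not the DL Workbench's calibration procedure.

```python
import numpy as np

def quantize_affine(w, num_bits=8):
    """Affine (asymmetric) quantization: q = round(w / scale) + zero_point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(4, 4).astype(np.float32)   # stand-in FP32 weights
q, scale, zp = quantize_affine(weights)
print("max absolute error:", np.abs(weights - dequantize_affine(q, scale, zp)).max())
```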

Lesson-4: Other Optimization Tools and Techniques

| No | Lesson | Notes | Link/Source |
|----|--------|-------|-------------|
| 1 | Lesson Overview | Measuring latencies in our code, Using VTune Amplifier, Packaging our application | ------/------ |
| 2 | Introduction to Intel VTune | Install and run Intel VTune | [77 - 78] |
| 3 | Exercise: Profiling Using VTune | Exercise on profiling using VTune | ------/------ |
| 4 | Advanced Concepts in Intel VTune | Drill down further and see which lines in your application are the bottlenecks | ------/------ |
| 5 | Exercise: Advanced Profiling Using VTune Amplifier | Exercise on advanced profiling using VTune Amplifier | ------/------ |
| 6 | Packaging Your Application | Packaging our application | ------/------ |
| 7 | Exercise: Packaging Your Application | Exercise on packaging our application | ------/------ |
| 8 | Exercise: Deploying Runtime Package | Exercise on deploying a runtime package | ------/------ |
| 9 | Lesson Review | Measuring latencies in our code, Using VTune Amplifier, Packaging our application | ------/------ |
| 10 | Course Review | Software optimization, Performance metrics, Reducing model parameters | ------/------ |

Project

Resources
