Demo of this Github at https://hpssjellis.github.io/jeremy-ellis-tinyML-teacher-feedback-2022/
This Github Repository at https://github.com/hpssjellis/jeremy-ellis-tinyML-teacher-feedback-2022
To make your own version of this web-app, just copy this README.md file into your own GitHub Pages-activated repository (which makes it act as a webpage) and set the README.md file as the main webpage (it replaces an index.html file).
Number of Slides: , Seconds per Slide:
Follow with this link hpssjellis.github.io/jeremy-ellis-tinyML-teacher-feedback-2022/
I am Jeremy Ellis, Twitter @rocksetta, Github Profile, www.rocksetta.com, Youtube Rocksetta Playlist
I have a gift for academic creativity and the ability to teach complex things, that I understand, to young people! The problem is getting me to understand the complex things! I am still a bit stuck on learning Quantum Computing, my Github about Quantum is here.
I presently teach: 3D Printing, Animation, Game Development (Coding) and Robotics with Machine Learning
I strongly feel university undergrads of all disciplines should have some form of hands on machine learning and robotics before graduation. Unfortunately, simplification often results in a loss of big picture knowledge and capability.
So the challenge is how to simplify ML without losing computing flexibility.
About 1976, when I was in Grade 8, I taught myself to program on both the HP67 and HP97 calculators. Since then I have had no formal machine learning training, but I was writing neural networks in Pascal in the early 1990s (simple multi-layer arrays of "nodes", each holding a value between -1 and 1, fully connected to all nodes of the next layer; all incoming amounts for each node were summed, and the node fired if the sum was above zero). Unfortunately, my NNs never worked; they only oscillated between classifications.
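The little network described above can be sketched in a few lines of modern code. This is a hedged reconstruction from the description only (nodes holding values between -1 and 1, fully connected to the next layer, sum the inputs, fire if the sum is above zero); the weights and layer sizes here are invented for illustration:

```javascript
// Minimal forward pass for the network described above: each node sums
// its weighted inputs and "fires" (outputs 1) if the sum is above zero,
// otherwise outputs -1. Weights are illustrative values in [-1, 1].
function layerForward(inputs, weights) {
  // weights[i][j] connects input j to output node i
  return weights.map((nodeWeights) => {
    const sum = nodeWeights.reduce((acc, w, j) => acc + w * inputs[j], 0);
    return sum > 0 ? 1 : -1; // hard threshold, no gradient
  });
}

// Two tiny layers with hand-picked weights
const hidden = layerForward([0.5, -0.2], [[0.8, 0.1], [-0.3, 0.9]]);
const output = layerForward(hidden, [[0.6, -0.6]]);
console.log(hidden, output); // [1, -1] [1]
```

One plausible reason such a network flips between classifications rather than converging is the hard threshold: there is no gradient for training to smoothly adjust.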
Not until Tensorflow was released, around 2015, was I able to teach ML. I have now been teaching robotics and machine learning for about 6 years. Here is a 2016 RNN model I made that self-generated multiple 3D printable objects.
I also did a lot of teaching with TensorflowJS here
In 2016, I may have made the first computer generated 3D printable object using a machine learning program!
A few years ago, my students chose to work with the Nano-33-Ble-Sense and we purchased a $10 OV7670 camera to go with it.
This is my first successful OV7670 image
In September 2020, I helped problem-solve the OV767x camera with the Nano-33-Ble-Sense here and also got it working on EdgeImpulse here. I believe this was a first step toward getting the Arduino TinyML Kit working.
This image is of my first clear 48x48 pixel image from the OV7670 camera uploaded to EdgeImpulse for a classification model.
I would be interested in partnering with grad students or professors trying to simplify machine learning. I can provide students to test their ideas and maybe give feedback about the steps that were most confusing. In this presentation, I will suggest multiple paths for advanced use of TinyML. I will not have time to continue to research all of these ideas, but would help support others interested in them. I turn 60 next year and will retire from teaching in a few years. I would like to continue working part time on tech solutions after retiring if anyone is hiring.
This presentation is going to seem like I don't like the Arduino TinyML Kit with EdgeImpulse. I love it; I just need to push tech boundaries. It is part of my personality.
A few Robotics and ML projects my students are working on:
TinyML: Multiple constraints (cost, computer power, electrical power, data security, connectivity...)
We wish tinyML hardware cost less than a dollar, did all training and analysis client-side, had 5G connectivity, ran on a coin battery for multiple years, and had the computing power of a TPU, but that dream is not a reality. The reality is that we have constraints, and in 2022 they look like this:
For hardware, we use the $60.00 USD Arduino ML Kit (Arduino Nano 33 BLE Sense with an OV7675 camera), which is a 3.3V Nordic nRF52840 chip with 14 pins at 15 mA per pin, a 64 MHz clock, 1 MB flash memory, and 256 KB SRAM
For connectivity, we can use BLE or Serial:UART/I2C/SPI
For machine learning simplicity and cloud training, we use edgeimpulse.com
EdgeImpulse makes it fairly easy to do: motion, sound, vision (classification and FOMO) and also regression (for size) and anomaly detection (for differences).
Sometimes, in a classroom computer lab with locked-down software, it is difficult to load the EdgeImpulse client onto an Arduino. Some workarounds:
Install the client software on my laptop and individually install it on the students' Arduinos
Use a cell phone for data collection (Motion, Sound, Vision) and build as normal on EdgeImpulse, then download the Arduino build for model installation on the device.
Have students build the EdgeImpulse client firmware from scratch: Nano 33 Ble Github here, Portenta Github here. This has a few extra issues, such as long file names and storing a build.local.txt in your Arduino hardware folder, which you then have to remove for normal Arduino compiling.
Create a .hex or .bin file of the client and force-install it using Arduino installation tools. This seems to be frowned upon by the Arduino community, but it seems like a very sensible solution to me. I think I have done it before, but it was too confusing to teach to my students.
This year, I hope NodeJS is installed on my new computers.
It is common for students to build a model that has a very high success rate on EdgeImpulse with the student's collected data, but when the student tests it in a real environment it performs poorly or not at all. This can happen for several reasons.
Sometimes the students' software changes made to test their model introduce bugs. This is fixable, but it needs the instructor to have a very good knowledge of coding.
This is a great teaching opportunity for understanding machine learning. With guidance, the students learn how to make a better model using better, more realistic data, or simply better-thought-out data collection.
For vision models, switching the concept from 3D to 2D often helps. So the camera is positioned in a way that the incoming data is always showing the same face. (Camera above a conveyor belt). This often simplifies vision data collection.
Students often have difficulty realising that the Arduino ML Kit does not have the computing power to solve their problem at the accuracy required. This is also a good ML learning experience. More on solutions for this later.
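One classroom exercise that makes the gap between lab accuracy and real-world accuracy concrete is scoring the model against a small set of labels collected in the real environment. A minimal sketch (the function name and label arrays are my own illustration):

```javascript
// Compare predicted labels against true labels collected in the real
// environment; report overall accuracy plus per-pair counts so students
// can see exactly which classes are being confused.
function scoreModel(trueLabels, predictedLabels) {
  const counts = {};
  let correct = 0;
  trueLabels.forEach((t, i) => {
    const p = predictedLabels[i];
    const key = `${t}->${p}`;
    counts[key] = (counts[key] || 0) + 1;
    if (t === p) correct++;
  });
  return { accuracy: correct / trueLabels.length, counts };
}

// A model that looked perfect on lab data may score poorly here:
const result = scoreModel(
  ['pen', 'pen', 'unknown', 'unknown'],
  ['pen', 'unknown', 'unknown', 'pen']
);
console.log(result.accuracy); // 0.5
```

Seeing which `true->predicted` pairs dominate usually points straight at the fix: more realistic data for the confused classes.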
In a teaching computer lab, sometimes the GNU C++ environment is not well setup. Using GNU C++ is not really a strength of mine since I typically use the Arduino IDE or Platform.io, but occasionally some code would be better compiled using C++.
Gitpod is an amazing browser-based docker container giving 50 hours a month of free student use. The containers save, but they are only active for 10-30 minutes after the last entered command. I often test Github node projects using Gitpod by simply inserting `gitpod.io/#` in front of the normal Github URL.
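That URL trick is easy to automate; here is a tiny helper (the function name and normalization rule are my own illustration):

```javascript
// Prepend the Gitpod prefix to a GitHub repository URL so it opens
// in a Gitpod workspace.
function toGitpodUrl(githubUrl) {
  // Add a scheme if the URL was typed without one
  const url = githubUrl.startsWith('http') ? githubUrl : `https://${githubUrl}`;
  return `https://gitpod.io/#${url}`;
}

console.log(toGitpodUrl('github.com/hpssjellis/jeremy-ellis-tinyML-teacher-feedback-2022'));
```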
For this solution, I tried to make a Gitpod that loads all the development environments that EdgeImpulse uses. It was only partially successful, and GNU C++ is too advanced for my average high school student.
This is my Gitpod of the edgeimpulse dev environment. It is fairly advanced and may not have survived updates to EdgeImpulse. my-gitpod-of-edge-impulse
This is not necessarily a machine learning issue, but whenever students work on their own robotics projects they always want a few more pins to allow a few more servo motors, LEDs, sensors, etc. for their final project. The Arduino ML Kit has 14 available pins.
Use the expensive but more powerful ($113.90 USD) Arduino PortentaH7 with 28 pins and potentially up to 160 pins
Note: I am working on a PCB to allow access to all 160 pins without using the Arduino Breakout Board ($55 USD). The breakout board has the pins organised by concept; my board has the pins organised by the High Density connectors J1 and J2.
Use multiple microcontrollers connected by Serial: UART, I2C, or SPI
Example of 4 x $5 XIAO all running tensorflow Sine Hello World program, all connected using the I2C protocol.
For the Nano 33 Ble Sense, BLE is (in my opinion) frustrating to code, as you must know or discover the UUIDs for everything you wish to code.
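Those numbers are 128-bit UUIDs; the short 16-bit IDs used for standard Bluetooth SIG services and characteristics are shorthand for the Bluetooth base UUID. A small helper (the function name is mine) makes the relationship explicit:

```javascript
// Expand a 16-bit Bluetooth SIG ID (e.g. 0x180F, the Battery Service)
// into its full 128-bit UUID using the standard Bluetooth base UUID
// 00000000-0000-1000-8000-00805f9b34fb.
function expandBleUuid(shortId) {
  const hex = shortId.toString(16).padStart(4, '0');
  return `0000${hex}-0000-1000-8000-00805f9b34fb`;
}

console.log(expandBleUuid(0x180f)); // "0000180f-0000-1000-8000-00805f9b34fb"
```

Custom services and characteristics, of course, need a full random 128-bit UUID, which is where most of the student confusion comes from.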
The PortentaH7 LoRa Vision Shield ($69 USD). I think LoRa and LoRaWan connectivity makes sense for any low-power application. Note: the Helium LoRaWan network is a solid solution, especially in North America and Europe. Here is my writeup about using the Portenta with Helium and adafruit.io
The Helium LoRaWan Coverage Map for Spain
If electrical power is not an issue, why not use WiFi or Ethernet? The Portenta can have both, and they allow a large data bandwidth. If monthly cost is not an issue, then cellular could be considered (this I have never tested); options are presently 2G and 3G with the Portenta CAT, but I am sure 4G and 5G are coming soon. Full connectivity also opens the door to cloud classification, like Siri and other methods. I am not really sure that still counts as TinyML, but it is something students should consider to solve specific ML problems.
Eventually a set of very cheap microcontrollers will be available, hopefully with LoRaWan capability, camera, and/or sound and/or motion, but presently the main solution is to make your own PCB.
Many of my students can both 3D print and computer animate. We found that several students quickly understood the main issues around PCB development from one simple video for JLCPCB and easyEDA here, for which the cost is about $35 for 5 boards with delivery in about 10 days.
Excellent video on making a PCB
<iframe width="300" height="150" src="https://www.youtube.com/embed/gjPNYMRA0m8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

Note: The $6000 Voltera.io is a possible educational solution for in-school PCB production.
This is only cheaper because most students already have a cell phone! I think we could do the machine learning on a cell phone using TensorflowJS, and then use USB serial (Web Serial) to connect any microcontroller, such as the Nano 33 Ble Sense or the $5 Seeedstudio XIAO. This has lots of teaching potential and is my main research area at present. EdgeImpulse even outputs a WASM build for the web.
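A sketch of the phone-to-microcontroller idea: the browser runs the model, and only the winning class crosses the serial link, so a single byte is enough for the microcontroller. The pure encoding step can be separated from the (browser-only) Web Serial write; the names below are my own:

```javascript
// Pick the winning class from model output scores and encode its index
// as a single byte, ready to send over a serial link (the Web Serial
// API writes Uint8Array data).
function encodeClassification(scores) {
  let best = 0;
  scores.forEach((s, i) => { if (s > scores[best]) best = i; });
  return new Uint8Array([best]);
}

// In a browser, this byte would go out via the Web Serial API, e.g.:
//   const writer = port.writable.getWriter();
//   await writer.write(encodeClassification(scores));
const byte = encodeClassification([0.1, 0.7, 0.2]); // contains the single byte 1
```

The microcontroller side then only needs to read one byte over serial and act on it, which keeps the on-device code trivial.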
I spent some time connecting multiple Nano 33 Ble sensors to EdgeImpulse and have lots of examples. This is my Maker101 repository folder link here
This folder goes through the steps to get raw data from the Nano 33 Ble Sense into EdgeImpulse. This needs the EdgeImpulse serial client connectivity on your computer. It is much easier if you can get EdgeImpulse WebUSB connectivity working, but that does not always work for me without the EdgeImpulse client installed on the device.
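The raw-data path relies on the microcontroller printing sensor readings as plain delimited lines over serial (the format the EdgeImpulse data forwarder reads: one sample per line, values separated by commas). A hedged sketch of the receiving side, with invented function names:

```javascript
// Parse one serial line of comma-separated sensor readings, e.g.
// "0.12,-0.98,9.81" from a 3-axis accelerometer, into an array of
// numbers. Returns null for lines that are not clean numeric samples
// (boot messages, debug prints, partial lines).
function parseSensorLine(line) {
  const values = line.trim().split(',').map(Number);
  return values.some(Number.isNaN) ? null : values;
}

console.log(parseSensorLine('0.12,-0.98,9.81')); // [0.12, -0.98, 9.81]
console.log(parseSensorLine('boot message'));    // null
```

Filtering out non-numeric lines matters in practice, because microcontrollers often print startup text on the same serial port as the data stream.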
Image of the Nano33 Ble Sense showing all raw sensor data, shifted vertically to allow each sensor to have its own row.
Don't get me wrong, EdgeImpulse is amazing for simplifying machine learning. I can teach an entire class of grade 10s in 40 minutes how to make a vision classification model that classifies a pen from unknowns, and they actually understand how to do it. But making your own Keras models that work well, with complete Keras flexibility, is harder.
Here is a link to my maker101 draft repository with some ideas on this topic here
Use TensorflowJS and WebSerial. Getting EdgeImpulse models into TensorflowJS is a bit confusing. It would be nice if the data could be exported and loaded directly into the browser, or better yet collected completely in the browser.
TensorflowLite
My old version of TensorflowLite adapted for the Portenta here. Note: The Arduino_TensorflowLite library seems to have disappeared, so here is a backup in zip format that can easily be installed with the Arduino library zip upload. Arduino_TensorFlowLite.zip
ALFES on board ML for Arduino my video here ALFES Github here
I just saw MicroFlow. I know nothing about it.
Teaching Frustration: students should not be uploading their faces to a cloud server, for youth-privacy reasons
My Solution 1 (Security) Use TensorflowJS and WebSerial, load data completely client side. I am working on this solution.
Teaching Frustration: relying on any cloud server. (Typically a good cloud service is purchased after a few years and the free tier is removed!)
This probably will not happen with EdgeImpulse, but if it does, it can be devastating to your course.
Example: July 14, 2016 — I had spent about 2 years designing Android tools for the browser on Cloud9; after the AWS purchase, all my work was no longer free and, worse, was deprecated.
My Solution 1 (Cloud) Use TensorflowJS and WebSerial, load data completely client side.
My Solution 2 (Cloud)
Use TensorflowJS and TensorflowLite (or my version here), load data completely client side and transfer the model to the Nano 33 Ble Sense or other microcontroller. If TensorflowLite stays deprecated this is not a great idea. Here is a backed up TFlite library: Arduino_TensorFlowLite.zip
Edgeimpulse.com and the Arduino ML kit are a great way to easily get students started in TinyML, but how to continue for advanced students? I hope to partner with a few professors to try some learning extensions. Here are a few of my relevant references:
My Robotics and ML course Maker100
My Portenta MBED library called the portenta-pro-community-solutions which will have some Nano 33 Ble Sense working code
My Maker101 repository that addresses many of the ideas mentioned today.
My Twitter feed for easy communication @rocksetta
Image of the PortentaH7 ( with LoRa Vision shield attached behind it) running a 72 ms FOMO vision model with a 128x128 WaveShare 16 bit Grayscale OLED, and my Portenta Pro Community Solutions library of MBED examples
#### 17: Summary Web of the topics covered today.
Template for this from pecha-kucha-lightning-talks-template
By Jeremy Ellis Twitter @Rocksetta Use at your own Risk!
Note: when looking at the markdown, none of the Javascript buttons appear; you must go to your GitHub Pages demo link!
A few Javascript abilities do not work, such as hiding the code, so all the Javascript not in buttons is below.
<script>
let myIndex = 1;
let myLooper = 0;
let myCounting = 0;
let myCountUp = 0;
let myMainNum = 20;
let xSlide = 3;
let myAudio01 = new Audio();

function carousel() {
  clearInterval(myCounting);
  myCountUp = -1;
  myIndex++;
  if (myIndex > xSlide) { myIndex = xSlide; }
  window.location.href = '#' + myIndex;
  myCountDown();
  myCounting = setInterval(myCountDown, 1000);
  myLooper = setTimeout(carousel, myMainNum * 1000);
}

function myCountDown() {
  myCountUp++;
  if (myCountUp >= myMainNum) { myCountUp = myMainNum; }
  if (myIndex >= xSlide && myMainNum == myCountUp) {
    document.getElementById("myNumSlides").innerHTML = ` Slide ${myIndex} of ${xSlide} slides. ALL DONE `;
    clearInterval(myCounting);
    clearInterval(myLooper);
  } else {
    document.getElementById("myNumSlides").innerHTML = ` Slide ${myIndex} of ${xSlide} slides. ${myMainNum - myCountUp} seconds remaining `;
  }
}

function myNext() {
  xSlide = document.getElementById('myCountLinks').value;
  myMainNum = document.getElementById('myCountMax').value;
  clearInterval(myLooper);
  carousel();
}
</script>