This repository is an example of how to bring your own model into Edge Impulse. It contains a small fully-connected model built in PyTorch. If you want to see a more complex PyTorch example, see edgeimpulse/yolov5; if you're looking for the Keras example, see edgeimpulse/example-custom-ml-block-keras.
As a primer, read the Bring your own model page in the Edge Impulse docs.
You run this pipeline via Docker, which encapsulates all dependencies and packages for you.
- Install Docker Desktop.
- Install the Edge Impulse CLI v1.16.0 or higher.
- Create a new Edge Impulse project, and add data from the continuous gestures dataset.
- Under Create impulse add a 'Spectral features' processing block, and a random ML block.
- Open a command prompt or terminal window.
- Initialize the block:

    $ edge-impulse-blocks init
    # Answer the questions; select "other" for 'What type of data does this model operate on?'

- Fetch new data via:

    $ edge-impulse-blocks runner --download-data data/

- Build the container:

    $ docker build -t custom-ml-pytorch .

- Run the container to test the script (you don't need to rebuild the container if you make changes):

    $ docker run --rm -v $PWD:/app custom-ml-pytorch --data-directory /app/data --epochs 30 --learning-rate 0.01 --out-directory out/

- This creates two .tflite files and a saved model ZIP file in the 'out' directory.
If you have extra packages that you want to install within the container, add them to requirements.txt
and rebuild the container.
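The training script invoked above reads the downloaded data and writes its artifacts to the output directory. As a rough illustration of the model side, a small fully-connected PyTorch network like the one in this repository could be sketched as follows. The layer sizes, feature count, and class count here are assumptions for illustration, not the repository's exact code (the real script derives them from the downloaded data):

```python
import torch
import torch.nn as nn

# Illustrative sketch of a small fully-connected classifier, similar in
# spirit to the model in this repository. The input/output sizes below are
# assumptions; the actual training script infers them from the data.
def build_model(input_features: int, num_classes: int) -> nn.Module:
    return nn.Sequential(
        nn.Linear(input_features, 20),
        nn.ReLU(),
        nn.Linear(20, 10),
        nn.ReLU(),
        nn.Linear(10, num_classes),
    )

model = build_model(input_features=33, num_classes=4)
logits = model(torch.randn(1, 33))  # one sample with 33 spectral features
print(logits.shape)  # torch.Size([1, 4])
```

The container's job is then to train such a model on the data in `--data-directory` and export the converted artifacts into `--out-directory`.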
To get up-to-date data from your project:
- Install the Edge Impulse CLI v1.16 or higher.
- Open a command prompt or terminal window.
- Fetch new data via:

    $ edge-impulse-blocks runner --download-data data/
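Once downloaded, the training data can be loaded into PyTorch with a helper along these lines. The file names and array shapes here are hypothetical; inspect the contents of `data/` after running the command above for the actual layout your project produces:

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def load_dataset(x_path: str, y_path: str, batch_size: int = 32) -> DataLoader:
    """Load features/labels stored as NumPy arrays into a PyTorch DataLoader.

    The file paths are an assumption for illustration - check the actual
    files created by `edge-impulse-blocks runner --download-data data/`.
    """
    X = np.load(x_path)  # features, e.g. shape (num_samples, num_features)
    y = np.load(y_path)  # labels, e.g. shape (num_samples,)
    dataset = TensorDataset(torch.from_numpy(X).float(),
                            torch.from_numpy(y).long())
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)
```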
You can also push this block back to Edge Impulse. This makes it available like any other ML block, so you can retrain your model when new data comes in, or deploy the model to a device. See Docs > Adding custom learning blocks for more information.
- Push the block:

    $ edge-impulse-blocks push

- The block is now available under any of your projects. Depending on the data your block operates on, you can add it via:
- Object Detection: Create impulse > Add learning block > Object Detection (Images), then select the block via 'Choose a different model' on the 'Object detection' page.
- Image classification: Create impulse > Add learning block > Transfer learning (Images), then select the block via 'Choose a different model' on the 'Transfer learning' page.
- Audio classification: Create impulse > Add learning block > Transfer Learning (Keyword Spotting), then select the block via 'Choose a different model' on the 'Transfer learning' page.
- Classification: Create impulse > Add learning block > Classification, then select the block via 'Add an extra layer' on the 'Classifier' page.
- Regression: Create impulse > Add learning block > Regression, then select the block via 'Add an extra layer' on the 'Regression' page.
This is a minimal implementation of the paper "Gradient-based learning applied to document recognition" (LeNet-5) in PyTorch.
- Python 3
- PyTorch >= 0.4.0
- torchvision >= 0.1.8
$ git clone https://github.com/ChawDoe/LeNet-5-MNIST-PyTorch.git
$ cd LeNet-5-MNIST-PyTorch
$ python3 train.py
The model will run on a GPU if one is available. This repo includes the MNIST dataset.
Average precision on test set: 99%
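The architecture from the paper, adapted to 28x28 single-channel MNIST inputs as in this repo, can be sketched in PyTorch roughly as follows. Exact layer sizes and activations in the repository's train.py may differ:

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Sketch of LeNet-5 adapted for 28x28 single-channel MNIST images.

    Layer sizes follow the classic architecture; the repository's actual
    implementation may differ in detail.
    """
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 28x28 -> 24x24
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 24x24 -> 12x12
            nn.Conv2d(6, 16, kernel_size=5),  # 12x12 -> 8x8
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 4 * 4, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)  # flatten all dims except batch
        return self.classifier(x)

model = LeNet5()
out = model(torch.randn(2, 1, 28, 28))  # batch of two MNIST-sized images
print(out.shape)  # torch.Size([2, 10])
```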