<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Gabe Hollombe</title>
    <description>The latest articles on Forem by Gabe Hollombe (@gabehollombe).</description>
    <link>https://forem.com/gabehollombe</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F279906%2Fff5f072f-1c44-4473-b9e6-4fbc70480cf4.jpg</url>
      <title>Forem: Gabe Hollombe</title>
      <link>https://forem.com/gabehollombe</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gabehollombe"/>
    <language>en</language>
    <item>
      <title>How to build a custom image classifier model and run it at the edge in your web browser!</title>
      <dc:creator>Gabe Hollombe</dc:creator>
      <pubDate>Wed, 27 Nov 2019 08:04:36 +0000</pubDate>
      <link>https://forem.com/aws/how-to-build-a-custom-image-classifier-model-and-run-it-at-the-edge-in-your-web-browser-2llj</link>
      <guid>https://forem.com/aws/how-to-build-a-custom-image-classifier-model-and-run-it-at-the-edge-in-your-web-browser-2llj</guid>
      <description>&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we will build a custom image classifier in the cloud using &lt;a href="https://aws.amazon.com/sagemaker/" rel="noopener noreferrer"&gt;Amazon SageMaker&lt;/a&gt;, convert the model to the open &lt;a href="https://onnx.ai" rel="noopener noreferrer"&gt;ONNX model format&lt;/a&gt;, download the ONNX model, then run it in our web browser using &lt;a href="https://github.com/microsoft/onnxjs" rel="noopener noreferrer"&gt;ONNX.js&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  First, show me a working demo!
&lt;/h2&gt;

&lt;p&gt;If you don't want to train your own model but you still want to see a working demo, you can download an example model and see it in action using &lt;a href="https://glitch.com/~gabehollombe-aws-sagemaker-image-classifier-to-onnx-in-browser" rel="noopener noreferrer"&gt;this simple single-page-app front end&lt;/a&gt; (shared via Glitch):&lt;/p&gt;


&lt;div class="glitch-embed-wrap"&gt;
  &lt;iframe src="https://glitch.com/embed/#!/embed/gabehollombe-aws-sagemaker-image-classifier-to-onnx-in-browser?previewSize=100&amp;amp;path=index.html" alt="gabehollombe-aws-sagemaker-image-classifier-to-onnx-in-browser on glitch"&gt;&lt;/iframe&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  OK! How do I do this for myself?
&lt;/h2&gt;

&lt;p&gt;If you know a bit of Python, training your own custom image classifier is surprisingly easy using Amazon SageMaker. SageMaker will host a Jupyter Notebook compute environment for you, which you can use to prep your data, train your model, and even deploy your trained model to a fully-managed hosted endpoint (but we won't do that last bit for this blog post, since we want to take a model that we've trained in the cloud and download it for offline inference).&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting a Jupyter Notebook instance set up
&lt;/h3&gt;

&lt;p&gt;First, log in to your AWS account and go to the Amazon SageMaker web console. If you don't have an AWS account yet, visit &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;https://aws.amazon.com/&lt;/a&gt; to create an account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsgu4m0lub2galdo9nzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsgu4m0lub2galdo9nzh.png" alt="sagemaker web console" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, look for a 'Create notebook instance' button and click it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep7up15cogh7fcnbdfnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep7up15cogh7fcnbdfnr.png" alt="create notebook instance button" width="358" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you'll need to pick a name for your notebook instance and select or create an IAM role for the notebook instance to assume when it runs. If you're unsure, let the console create a role for you here. You can leave the &lt;em&gt;notebook instance type&lt;/em&gt; set to the default of &lt;em&gt;ml.t2.medium&lt;/em&gt;. Even though this notebook instance is relatively low-powered, the actual model training will run on a separate, on-demand, deep-learning-optimized instance that exists only for the duration of the training job.&lt;/p&gt;

&lt;p&gt;Click 'Create notebook instance' at the bottom of the form to continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4o7n56r7ifb0dfq4g2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4o7n56r7ifb0dfq4g2u.png" alt="create notebook instance form" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a minute or two, your notebook instance will switch status from &lt;em&gt;Pending&lt;/em&gt; to &lt;em&gt;InService&lt;/em&gt; and you can click &lt;em&gt;Open JupyterLab&lt;/em&gt; to open the notebook interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F458bo0y44nxgzcn6tssz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F458bo0y44nxgzcn6tssz.png" alt="open jupyter lab" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, look on the left-hand sidebar and click the Git icon to clone a repository. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvd7azkpnjcvz3mbavzw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvd7azkpnjcvz3mbavzw0.png" alt="open clone dialog" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paste &lt;code&gt;https://github.com/gabehollombe-aws/sagemaker-image-classifier-to-onnx-in-browser.git&lt;/code&gt; into the dialog box and click &lt;em&gt;Clone&lt;/em&gt;. This will clone a repo containing a sample notebook file for you to use to train your own image classifier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2d6uzwgk3nz4ytsqkte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2d6uzwgk3nz4ytsqkte.png" alt="clone dialog" width="321" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the left-hand sidebar, navigate to the cloned repo directory, open the &lt;em&gt;sagemaker&lt;/em&gt; directory inside, and open the notebook inside it, named &lt;code&gt;train_and_export_as_onnx.ipynb&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg73emlf7c42xhzqfuhs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg73emlf7c42xhzqfuhs.png" alt="open cloned sagemaker folder" width="418" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Building a custom image classifier model with Amazon SageMaker and converting it to ONNX format
&lt;/h3&gt;

&lt;p&gt;Take a look at the &lt;code&gt;train_and_export_as_onnx.ipynb&lt;/code&gt; notebook file. You'll see a series of annotated steps that show how to prep some images for classification and then how to use the &lt;a href="https://sagemaker.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;Amazon SageMaker Python SDK&lt;/a&gt; to train a custom image classifier with our image data.&lt;/p&gt;
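&lt;p&gt;If you're curious what the training configuration looks like before opening the notebook, here's a rough sketch of the kind of hyperparameters that SageMaker's built-in image classification algorithm accepts. The exact names and values live in the notebook and the algorithm's documentation; the numbers below are illustrative only, not copied from the notebook.&lt;/p&gt;

```python
# Illustrative hyperparameters for SageMaker's built-in image classification
# algorithm. Treat every value here as an assumption -- the notebook and the
# algorithm docs are the source of truth.
hyperparameters = {
    "num_layers": 18,               # depth of the ResNet backbone
    "use_pretrained_model": 1,      # fine-tune from a pre-trained network
    "image_shape": "3,224,224",     # channels, height, width
    "num_classes": 101,             # one per label sub-directory (Caltech 101)
    "num_training_samples": 7281,   # illustrative count of training images
    "epochs": 5,
    "learning_rate": 0.001,
}

def as_sagemaker_strings(hp):
    """SageMaker passes hyperparameters to the training container as strings."""
    return {key: str(value) for key, value in hp.items()}

config = as_sagemaker_strings(hyperparameters)
```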

&lt;p&gt;Take note that this notebook will use one ml.p3.2xlarge spot instance for the model training, which, at the time of this writing, comes out to about $0.15 USD when training on the sample data. If you don't want to incur any training costs, you can use the pre-trained model linked in the Glitch-hosted app embedded at the top of this post.&lt;/p&gt;
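&lt;p&gt;For a rough sense of where a figure like $0.15 comes from, here's a back-of-the-envelope calculation. Both inputs are assumptions: spot pricing for ml.p3.2xlarge varies by region and over time, and the billable training time depends on your data.&lt;/p&gt;

```python
# Back-of-the-envelope training cost using assumed numbers; check the AWS
# pricing page and your own training job's duration for real figures.
spot_price_per_hour = 0.918   # assumed USD/hour for an ml.p3.2xlarge spot instance
training_minutes = 10         # assumed billable training time on the sample data

cost_usd = spot_price_per_hour * training_minutes / 60
print(round(cost_usd, 2))  # 0.15
```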

&lt;p&gt;In the section titled &lt;strong&gt;Grab a bunch of images grouped by folders, one per label class&lt;/strong&gt;, you'll see we are downloading a collection of images to use for our training. &lt;/p&gt;

&lt;p&gt;If you want to use your own custom images instead of these example ones, just modify the notebook to fill the &lt;code&gt;dataset_dir&lt;/code&gt; with appropriately named sub-directories (each directory should be named with a label describing the class of images inside it) and put a bunch of example images in each label sub-directory. But, for the purposes of this blog post, I'll assume you're just going to use the set of images from &lt;a href="http://www.vision.caltech.edu/Image_Datasets/Caltech101/" rel="noopener noreferrer"&gt;the Caltech 101 data set&lt;/a&gt; that the notebook downloads by default. &lt;/p&gt;
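&lt;p&gt;The expected layout is simply one sub-directory per class, each holding that class's example images. Here's a quick sketch (using a hypothetical &lt;code&gt;labels_from_dataset_dir&lt;/code&gt; helper and made-up label names) of how the label classes fall out of that directory structure:&lt;/p&gt;

```python
import os
import tempfile

def labels_from_dataset_dir(dataset_dir):
    """Derive the class labels from the names of the label sub-directories."""
    return sorted(
        name for name in os.listdir(dataset_dir)
        if os.path.isdir(os.path.join(dataset_dir, name))
    )

# Example layout: dataset_dir/accordion/*.jpg, dataset_dir/anchor/*.jpg, ...
dataset_dir = tempfile.mkdtemp()
for label in ("accordion", "anchor", "ant"):
    os.makedirs(os.path.join(dataset_dir, label))

print(labels_from_dataset_dir(dataset_dir))  # ['accordion', 'anchor', 'ant']
```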

&lt;p&gt;From the &lt;em&gt;Run&lt;/em&gt; menu, select &lt;em&gt;Run All Cells&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7fnm30cdeyks1k3nnhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7fnm30cdeyks1k3nnhg.png" alt="run all cells" width="542" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's going to take some time for the notebook to train a custom image classifier model.  You'll know it's on the right track because eventually you'll start to see some training log output under the &lt;strong&gt;Start the training&lt;/strong&gt; section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnwfm6w49yx41cskqea2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnwfm6w49yx41cskqea2.png" alt="training output" width="800" height="756"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Eventually (after about 20 minutes) the training job will finish. Continue down a bit further in the notebook's output and you should see the cells that download the SageMaker-built model and convert it to the open ONNX format. Find the cell output providing a link to download the ONNX model and click it to get the ONNX model onto your computer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15l2wxzrhr9cxanjft2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15l2wxzrhr9cxanjft2h.png" alt="training finished" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we'll need the list of all of the label classes that our model produces scores for when we use it to classify new inputs. Find the cell showing the list of space-delimited class labels and copy that output to your clipboard for later use.&lt;/p&gt;
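&lt;p&gt;When you run inference later, the model emits one raw score per label, in the same order as that list. Here's a small sketch (with made-up labels and scores, not output from a real model) of turning raw scores into a predicted label via a softmax. Note that depending on how your model was exported, its final layer may already apply a softmax, in which case the scores are probabilities and you can skip that step:&lt;/p&gt;

```python
import math

# The notebook prints the class labels as one space-delimited string;
# these three labels are placeholders for whatever your model was trained on.
labels_text = "accordion anchor ant"
labels = labels_text.split()

def softmax(scores):
    """Turn raw model outputs into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative raw scores, one per label, in label-list order.
raw_scores = [0.5, 2.0, 1.0]
probs = softmax(raw_scores)
best = max(range(len(labels)), key=probs.__getitem__)
print(labels[best])  # anchor
```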

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdr0urbl352moted2bbao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdr0urbl352moted2bbao.png" alt="image label classes" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Using our ONNX image classifier model in the browser with ONNX.js
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/onnxjs" rel="noopener noreferrer"&gt;ONNX.js&lt;/a&gt; makes it possible to run inference through ONNX models in the browser (or in Node) and they even have a nice &lt;a href="https://microsoft.github.io/onnxjs-demo/#/" rel="noopener noreferrer"&gt;demo website&lt;/a&gt; showing how to use ONNX.js with some pre-trained models. &lt;/p&gt;

&lt;p&gt;However, I wanted a bit of a nicer interface to play around with, and I wanted to use my own custom image classifier trained via SageMaker, not one of the pre-trained models from the ONNX model zoo. So, I built a little React single-page app that lets you load an ONNX model from your computer into your browser's memory and then run inference on images captured from a webcam, image URLs from the Internet, or images that you drag and drop from your computer.&lt;/p&gt;

&lt;p&gt;After you've downloaded your custom image classifier ONNX model above, you can use my in-browser inference app to try it out.&lt;/p&gt;

&lt;p&gt;Visit &lt;a href="https://gabehollombe-aws-sagemaker-image-classifier-to-onnx-in-browser.glitch.me/" rel="noopener noreferrer"&gt;https://gabehollombe-aws-sagemaker-image-classifier-to-onnx-in-browser.glitch.me/&lt;/a&gt; to load the inference app in your browser.&lt;/p&gt;

&lt;p&gt;Or, check out the repository containing my sample Jupyter notebook and the inference app on GitHub at &lt;a href="https://github.com/gabehollombe-aws/sagemaker-image-classifier-to-onnx-in-browser" rel="noopener noreferrer"&gt;https://github.com/gabehollombe-aws/sagemaker-image-classifier-to-onnx-in-browser&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Awesome! Where can I learn more?
&lt;/h2&gt;

&lt;p&gt;I think you should start with AWS's &lt;a href="https://aws.amazon.com/training/learning-paths/machine-learning/" rel="noopener noreferrer"&gt;free Machine Learning training materials&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can also learn more about building, training, and hosting ML models with SageMaker on &lt;a href="https://aws.amazon.com/sagemaker/" rel="noopener noreferrer"&gt;the Amazon SageMaker product page&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>javascript</category>
      <category>machinelearning</category>
      <category>react</category>
    </item>
  </channel>
</rss>
