<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: //va</title>
    <description>The latest articles on Forem by //va (@vabarbosa).</description>
    <link>https://forem.com/vabarbosa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F132580%2F58986bb4-7726-4be7-9528-480a38ed49f2.png</url>
      <title>Forem: //va</title>
      <link>https://forem.com/vabarbosa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vabarbosa"/>
    <language>en</language>
    <item>
      <title>Developing Machine Learning IoT Apps with Node-RED and TensorFlow.js</title>
      <dc:creator>//va</dc:creator>
      <pubDate>Tue, 28 Jan 2020 16:09:29 +0000</pubDate>
      <link>https://forem.com/vabarbosa/developing-machine-learning-iot-apps-with-node-red-and-tensorflow-js-kce</link>
      <guid>https://forem.com/vabarbosa/developing-machine-learning-iot-apps-with-node-red-and-tensorflow-js-kce</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;cross-post from &lt;a href="https://medium.com/codait/a-low-code-approach-to-incorporating-machine-learning-into-your-iot-device-24f3f2a70717"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  A low-code approach to incorporating machine learning into the Internet of Things
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HRp-hVJQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/552/1%2AQjix2iqRLBpH4cMTm-NfxQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HRp-hVJQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/552/1%2AQjix2iqRLBpH4cMTm-NfxQ.png" alt="" width="552" height="655"&gt;&lt;/a&gt;A comic where a guy, frowning, says to another, "My coffee machine has unfollowed me." &lt;a href="http://geek-and-poke.com/geekandpoke/2012/3/14/the-internet-of-things.html"&gt;The Internet of Things — Geek &amp;amp; Poke&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gartner has predicted the number of connected devices will rise to over &lt;a href="https://www.gartner.com/en/documents/3890506/top-strategic-iot-trends-and-technologies-through-2023"&gt;25 billion by 2021&lt;/a&gt;. And given the variety of devices out there, getting started with IoT can appear daunting. Setting up communication with or between these devices is often non-trivial. Further challenges arise if you want to integrate machine learning! Solutions need to pull together different device APIs, services, and sometimes protocols.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.ibm.com/patterns/develop-a-machine-learning-iot-app-with-node-red-and-tensorflowjs/?utm_medium=OSocial&amp;amp;utm_source=Blog&amp;amp;utm_content=000019RS&amp;amp;utm_term=10004805&amp;amp;utm_id=CODAIT-medium-TFjs-Node-RED-CodePattern&amp;amp;cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-CODAIT-medium-TFjs-Node-RED-CodePattern&amp;amp;cm_mmca1=000019RS&amp;amp;cm_mmca2=10004805"&gt;Node-RED with TensorFlow.js&lt;/a&gt; brings machine learning to IoT in an easy, low-code way. It opens new, creative approaches to enable machine learning for the Internet of Things. Image recognition, audio classification, etc. all possible on device with minimal code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enter Node-RED
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://nodered.org/"&gt;Node-RED&lt;/a&gt; is a flow-based visual programming tool. With its browser-based editor you can simply wire together hardware devices, APIs and online services to create your application. You develop powerful applications by connecting nodes instead of writing code. And you can deploy them with a single click.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ksGeUD26Mw0"&gt;
&lt;/iframe&gt;&lt;a href="https://www.youtube.com/playlist?list=PLyNBB9VCLmo1hyO-4fIZ08gqFcXBkHy-6"&gt;Node-RED Essentials playlist&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Node-RED runs on local workstations, the cloud, and edge devices. It has become an ideal tool for the Raspberry Pi and other low-cost hardware.&lt;/p&gt;

&lt;p&gt;The Node-RED runtime is lightweight and built on top of Node.js. It takes full advantage of Node.js’ event-driven, non-blocking I/O model. There is also the added benefit of tapping into the most used programming language — JavaScript!&lt;/p&gt;

&lt;p&gt;With all the Node-RED community resources and vast NPM ecosystem, you can create IoT flows in a user-friendly manner that are &lt;a href="https://github.com/johnwalicki/Node-RED-DroneViewer"&gt;imaginative&lt;/a&gt; and &lt;a href="https://developer.ibm.com/callforcode/blogs/call-for-code-2019-finalist-prometeo/?utm_medium=OSocial&amp;amp;utm_source=Blog&amp;amp;utm_content=000019RS&amp;amp;utm_term=10004805&amp;amp;utm_id=CODAIT-medium-CallForCode-Prometeo&amp;amp;cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-CODAIT-medium-CallForCode-Prometeo&amp;amp;cm_mmca1=000019RS&amp;amp;cm_mmca2=10004805"&gt;help save lives&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hello TensorFlow.js
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.tensorflow.org/js"&gt;TensorFlow.js&lt;/a&gt; is an open source JavaScript library. It permits you to build, train, and run &lt;a href="https://medium.com/codait/bring-machine-learning-to-the-browser-with-tensorflow-js-part-i-16924457291c"&gt;machine learning models in the browser&lt;/a&gt; and Node.js.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/pbCExciEbrc"&gt;
&lt;/iframe&gt;TensorFlow.js intro
&lt;/p&gt;

&lt;p&gt;Often, enabling AI capabilities involves sending the data from a device to a server. The calculations happen on the server, and the results are returned to the device to act on. This is not ideal when data security or network reliability is a concern.&lt;/p&gt;

&lt;p&gt;But with TensorFlow.js, there is an increase in privacy and data security. The data does not leave the device! Training and predictions can happen directly on the device collecting the data. This also makes it possible to run offline and in remote locations with no connectivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Node-RED meet TensorFlow.js
&lt;/h3&gt;

&lt;p&gt;The combination of Node-RED and TensorFlow.js means you can build IoT apps that use machine learning simply by dragging and dropping. Drag and drop a machine learning node, wire it up, and deploy to your device.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mfS4E5Cb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1200/1%2AQsz62TxlYhrd7ZGYFHbvwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mfS4E5Cb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1200/1%2AQsz62TxlYhrd7ZGYFHbvwg.png" alt="" width="880" height="318"&gt;&lt;/a&gt;&lt;a href="https://github.com/IBM/node-red-tensorflowjs"&gt;Node-RED node with a TensorFlow.js Object Detection model&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But to get to that point you need to have the TensorFlow.js nodes available. TensorFlow.js nodes are starting to show up in the &lt;a href="https://flows.nodered.org/search?term=tensorflow"&gt;Node-RED library&lt;/a&gt; and &lt;a href="https://github.com/vabarbosa/tfjs-node-red"&gt;across&lt;/a&gt; &lt;a href="https://github.com/tonanhngo/nodered-tfjs"&gt;GitHub&lt;/a&gt; &lt;a href="https://github.com/yhwang/node-red-contrib-ds2-tfjs"&gt;repos&lt;/a&gt; and more are being released regularly. These nodes provide various machine learning functionality to add to your flow. But what if there isn’t a TensorFlow.js node for your machine learning task? You can &lt;a href="https://nodered.org/docs/creating-nodes/"&gt;create it&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;The extensibility of Node-RED allows you to create custom nodes for your needs. Packaging Node-RED nodes is similar to packaging Node.js modules, but with some extra information.&lt;/p&gt;

&lt;p&gt;A Node-RED node consists of three main files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://nodered.org/docs/creating-nodes/packaging#packagejson"&gt;package.json&lt;/a&gt;&lt;/strong&gt;: standard file used by Node.js modules, but with an added &lt;code&gt;node-red&lt;/code&gt; section&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodered.org/docs/creating-nodes/node-js"&gt;JavaScript file&lt;/a&gt; that defines the node’s behavior&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodered.org/docs/creating-nodes/node-html"&gt;HTML file&lt;/a&gt; that defines the node’s properties, edit dialog and help text&lt;/li&gt;
&lt;/ul&gt;
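&lt;p&gt;For illustration, a minimal &lt;code&gt;package.json&lt;/code&gt; with the added &lt;code&gt;node-red&lt;/code&gt; section might look like the following. The package, node, and file names are placeholders, not those of a published package:&lt;/p&gt;

```json
{
  "name": "node-red-contrib-tfjs-example",
  "version": "0.1.0",
  "description": "Example Node-RED node wrapping a TensorFlow.js model",
  "dependencies": {
    "@tensorflow/tfjs-node": "^1.0.0"
  },
  "node-red": {
    "nodes": {
      "tfjs-example": "node/tfjs-example.js"
    }
  }
}
```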

&lt;p&gt;The &lt;a href="https://github.com/IBM/node-red-tensorflowjs/blob/master/node-red-contrib-tfjs-object-detection/node/tfjs-object-detection.js#L24-L59"&gt;JavaScript file is where you would wrap your TensorFlow.js code&lt;/a&gt;. It would load the TensorFlow.js model and run the prediction.&lt;/p&gt;

&lt;p&gt;Once bundled, the custom node is available to wire into a flow and deploy.&lt;/p&gt;

&lt;h3&gt;
  
  
  There may be challenges
&lt;/h3&gt;

&lt;p&gt;As straightforward as it may appear, there can still be challenges and concerns to keep in mind.&lt;/p&gt;

&lt;p&gt;Since you are dealing with edge devices, performance is a top priority. Models may be too big to load onto an edge device, or may require specific optimizations to perform well in JavaScript.&lt;/p&gt;

&lt;p&gt;Also, when in the node’s life-cycle should you load the model? Should you have a single node to process input/output and run prediction? Or split the work across two or three nodes?&lt;/p&gt;

&lt;p&gt;The TensorFlow.js model you are using and the specific use case it addresses often dictate the approach and answers to a lot of these concerns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting started
&lt;/h3&gt;

&lt;p&gt;Combining TensorFlow.js with Node-RED lowers the barrier to entry into machine learning. From the drag-and-drop interface to the one click deploy, IoT enthusiasts and developers can incorporate machine learning in an accessible and rapid manner.&lt;/p&gt;

&lt;p&gt;Visit the &lt;a href="https://developer.ibm.com/patterns/develop-a-machine-learning-iot-app-with-node-red-and-tensorflowjs/?utm_medium=OSocial&amp;amp;utm_source=Blog&amp;amp;utm_content=000019RS&amp;amp;utm_term=10004805&amp;amp;utm_id=CODAIT-medium-TFjs-Node-RED-CodePattern&amp;amp;cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-CODAIT-medium-TFjs-Node-RED-CodePattern&amp;amp;cm_mmca1=000019RS&amp;amp;cm_mmca2=10004805"&gt;Node-RED and TensorFlow.js code pattern&lt;/a&gt; to check out a sample solution. Learn more by viewing the full code and deploying the tutorial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S4g9bmd8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/2348/1%2AONXzgEyVHm97hgFtAid4hA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S4g9bmd8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/2348/1%2AONXzgEyVHm97hgFtAid4hA.png" alt="" width="880" height="295"&gt;&lt;/a&gt;&lt;a href="https://flows.nodered.org/add"&gt;Contribute to the Node-RED Library&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check out the &lt;a href="https://flows.nodered.org/"&gt;Node-RED library&lt;/a&gt; for more TensorFlow.js nodes, flows, and collections as they become available. Anyone is &lt;a href="https://flows.nodered.org/add"&gt;welcome to contribute there&lt;/a&gt; so that others can learn from your work.&lt;/p&gt;




</description>
      <category>javascript</category>
      <category>iot</category>
      <category>tensorflow</category>
      <category>nodered</category>
    </item>
    <item>
      <title>Veremin — A Browser-based Video Theremin</title>
      <dc:creator>//va</dc:creator>
      <pubDate>Thu, 07 Feb 2019 16:16:00 +0000</pubDate>
      <link>https://forem.com/vabarbosa/veremina-browser-based-video-theremin-4a37</link>
      <guid>https://forem.com/vabarbosa/veremina-browser-based-video-theremin-4a37</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;cross-post from &lt;a href="https://medium.com/ibm-watson-data-lab/veremin-a-browser-based-video-theremin-1548b63200c"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Making music visually using TensorFlow.js, PoseNet, and the Web MIDI &amp;amp; Web Audio APIs
&lt;/h4&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ZCs8LBBZqas"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Meet Veremin!
&lt;/h3&gt;

&lt;p&gt;Veremin is a video &lt;a href="https://en.wikipedia.org/wiki/Theremin"&gt;theremin&lt;/a&gt; that allows anyone to make beautiful (:-?) music just by waving their arms! It makes use of &lt;a href="https://js.tensorflow.org/"&gt;TensorFlow.js&lt;/a&gt;, &lt;a href="https://github.com/tensorflow/tfjs-models/tree/master/posenet"&gt;PoseNet&lt;/a&gt;, as well as the &lt;a href="https://webaudio.github.io/web-midi-api/"&gt;Web MIDI&lt;/a&gt; and &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API"&gt;Web Audio&lt;/a&gt; APIs.&lt;/p&gt;

&lt;p&gt;Veremin is the brainchild of &lt;a href="https://medium.com/u/466a32305746"&gt;johncohnvt&lt;/a&gt;, from the MIT-IBM Watson AI Lab, who built the first rough prototype. I was then able to whip it into something that really worked!&lt;/p&gt;

&lt;p&gt;The application attaches to the video stream from your web camera. PoseNet is used to capture the location of your hands within the video. The location then gets converted to music.&lt;/p&gt;
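&lt;p&gt;The core of that conversion can be sketched as a small mapping function: hand positions are normalized against the video frame, then scaled onto a note and velocity range. The ranges below are illustrative assumptions, not Veremin’s exact values:&lt;/p&gt;

```javascript
// Map hand positions to a MIDI-style note and velocity.
// Right-hand height controls pitch; left-hand horizontal position, volume.
function handsToMusic(rightY, leftX, height, width) {
  // Normalize to the range 0..1 (0 = bottom/left, 1 = top/right).
  const pitchAmount = Math.min(Math.max(1 - rightY / height, 0), 1);
  const volumeAmount = Math.min(Math.max(leftX / width, 0), 1);
  // Scale onto a three-octave note range, C3 (48) through C6 (84).
  const note = 48 + Math.round(pitchAmount * 36);
  const velocity = Math.round(volumeAmount * 127);
  return { note, velocity };
}
```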

&lt;p&gt;Thanks to the magic of TensorFlow.js, Veremin lives 100% in the browser and works on all modern browsers (Chrome, Safari, Firefox, IE) and platforms (OS X, iOS, Android, Windows).&lt;/p&gt;

&lt;p&gt;And our deepest thanks to the &lt;a href="https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5"&gt;Google Creative Lab&lt;/a&gt; folks who gave us a great start with their &lt;a href="https://github.com/tensorflow/tfjs-models/tree/master/posenet/demos"&gt;demo apps&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Play Veremin!
&lt;/h3&gt;

&lt;p&gt;Just point your browser to &lt;a href="https://ibm.biz/veremin"&gt;ibm.biz/veremin&lt;/a&gt; on your desktop, laptop, tablet, or phone. Allow the application to use the camera when prompted and make sure the volume is up.&lt;/p&gt;

&lt;p&gt;Stand in front of your device’s camera and adjust your position so your torso fits the screen. Adjust your stance so you are centered on the vertical red line in the center of the screen and your waist is roughly even with the horizontal red line. You should see the stick version of your form in blue. Now, move both your hands above the red horizontal line. Move your right hand up and down to control the pitch and your left hand left and right to control the volume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/BradleyHolt/status/1087404900219318273"&gt;&lt;em&gt;Now just get jiggy with it&lt;/em&gt;&lt;/a&gt;&lt;em&gt;! ┌(・⌣・)┘♪&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you’re interested, you can adjust some of the parameters by clicking the settings icon in the top right of the screen. You can read more about the various control options &lt;a href="https://github.com/vabarbosa/veremin/blob/master/CONTROLPANEL.md"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Veremin as a MIDI controller
&lt;/h3&gt;

&lt;p&gt;If you’re feeling even more adventurous, Veremin can also be used as a &lt;a href="https://en.wikipedia.org/wiki/MIDI_controller"&gt;MIDI controller&lt;/a&gt;. To do that, you must use a browser that supports MIDI output (e.g., Chrome).&lt;/p&gt;

&lt;p&gt;Plug your MIDI device into your computer and launch Veremin in your browser. Then click the settings symbol in the upper right of the screen and change the &lt;strong&gt;Output Device&lt;/strong&gt; to point to your MIDI output device. You should now be able to control your MIDI device, which can be anything from a simple software synthesizer (e.g., &lt;a href="http://notahat.com/simplesynth/"&gt;SimpleSynth&lt;/a&gt;) to a MIDI-controlled Tesla coil (like John uses).&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/75NPajFIYuI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s inside Veremin?
&lt;/h3&gt;

&lt;p&gt;Let’s quickly review all the technologies we use.&lt;/p&gt;

&lt;h4&gt;
  
  
  TensorFlow.js and PoseNet
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://medium.com/tensorflow/introducing-tensorflow-js-machine-learning-in-javascript-bf3eab376db"&gt;TensorFlow.js&lt;/a&gt; is an open-source library for creating, training, and running machine learning models in JavaScript. It brings machine learning to the browser and is a great way to start with machine learning. Tutorials, guides, and more information for TensorFlow.js are available &lt;a href="https://js.tensorflow.org/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While you can use TensorFlow.js to build and train models, the real fun comes from finding new and creative ways to interact with existing pre-trained machine learning models, like PoseNet.&lt;/p&gt;

&lt;p&gt;The TensorFlow.js version of &lt;a href="https://github.com/tensorflow/tfjs-models/tree/master/posenet"&gt;PoseNet&lt;/a&gt; allows for real-time human pose estimation in the browser. An image is passed to the model and it returns a prediction. The prediction contains a list of keypoints (i.e., right eye, left wrist, etc.) and their confidence scores. What you do with this information is left up to your imagination.&lt;/p&gt;
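&lt;p&gt;For example, a prediction can be reduced to just the trustworthy keypoints with a small filter. The commented &lt;code&gt;estimateSinglePose&lt;/code&gt; call follows the shape of the PoseNet package’s API; the filtering helper is our own illustration:&lt;/p&gt;

```javascript
// Keep only keypoints whose confidence score clears a threshold.
function confidentKeypoints(pose, minScore) {
  return pose.keypoints.filter((kp) => kp.score >= minScore);
}

// Browser usage (requires the PoseNet model and a video element):
// const pose = await net.estimateSinglePose(videoElement, { flipHorizontal: true });
// const wrists = confidentKeypoints(pose, 0.5)
//   .filter((kp) => kp.part === 'leftWrist' || kp.part === 'rightWrist');
```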

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UzYtnbw2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/548/1%2AS_iccO3qLT2yqkrm7WrxmA.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UzYtnbw2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/548/1%2AS_iccO3qLT2yqkrm7WrxmA.gif" alt="" width="548" height="540"&gt;&lt;/a&gt;Real-time human pose estimation&lt;/p&gt;

&lt;h4&gt;
  
  
  Web MIDI API
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://www.w3.org/TR/webmidi"&gt;Web MIDI API&lt;/a&gt; allows connections to MIDI input and output devices from browsers. From the connected devices, MIDI messages can be sent or received. The MIDI message (e.g. [128, 72, 64]) is an array of three values corresponding to [command, note, velocity].&lt;/p&gt;

&lt;p&gt;MIDI messages are received only from input devices (e.g., a keyboard) and can be sent only to output devices (e.g., a synthesizer). To request access to MIDI devices (and receive a list of connected inputs and outputs), a call must first be made to the &lt;code&gt;requestMIDIAccess&lt;/code&gt; function.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
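&lt;p&gt;A sketch of that message format in practice (the helper function is ours, not from Veremin):&lt;/p&gt;

```javascript
// Build a "note on" MIDI message: 0x90 is the note-on command and the
// low four bits select one of the 16 channels.
function noteOnMessage(channel, note, velocity) {
  return [0x90 + channel, note, velocity];
}

// Browser usage, after the user grants MIDI permission:
// const access = await navigator.requestMIDIAccess();
// const output = access.outputs.values().next().value;
// output.send(noteOnMessage(0, 72, 64)); // start note 72
// output.send([0x80, 72, 64]);           // note off ends it
```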


&lt;p&gt;&lt;a href="https://caniuse.com/#feat=midi"&gt;Support for the Web MIDI API&lt;/a&gt; is unfortunately not yet wide spread. A quick getting started article for the Web MIDI API can be found &lt;a href="https://www.smashingmagazine.com/2018/03/web-midi-api"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Web Audio API
&lt;/h4&gt;

&lt;p&gt;With the &lt;a href="https://www.w3.org/TR/webaudio"&gt;Web Audio API&lt;/a&gt;, browsers can create sounds or work with recorded sounds. It describes a high-level API for processing and synthesizing audio in web applications.&lt;/p&gt;

&lt;p&gt;All audio operations must occur within an AudioContext. Audio modules (i.e., AudioNodes) are created from the AudioContext and chained together to define the audio processing graph.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
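&lt;p&gt;A minimal sketch of such a graph, an oscillator feeding a gain node feeding the speakers, together with the standard MIDI-note-to-frequency formula (the helper name is ours):&lt;/p&gt;

```javascript
// Standard equal-temperament tuning: A4 is MIDI note 69 at 440 Hz.
function midiNoteToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Browser usage:
// const ctx = new AudioContext();
// const osc = ctx.createOscillator();
// const gain = ctx.createGain();
// osc.connect(gain);             // oscillator into gain...
// gain.connect(ctx.destination); // ...gain into the speakers
// osc.frequency.value = midiNoteToFrequency(69); // 440 Hz
// gain.gain.value = 0.5;
// osc.start();
```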


&lt;p&gt;Working with the Web Audio API can be tricky at times. But to make it easier, check out &lt;a href="https://tonejs.github.io/"&gt;Tone.js&lt;/a&gt;, a Web Audio framework for creating interactive music in the browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://caniuse.com/#feat=audio-api"&gt;Support for the Web Audio API&lt;/a&gt; is available across most browsers. A nice introduction to the Web Audio API can be found &lt;a href="https://css-tricks.com/introduction-web-audio-api"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enjoy!
&lt;/h3&gt;

&lt;p&gt;If you’re interested in the nitty gritty, head over to the &lt;a href="https://github.com/vabarbosa/veremin"&gt;Veremin GitHub repository&lt;/a&gt; to check out the full code and learn more. The README includes instructions for deploying your own Veremin. Or, to try it out without installing anything, visit &lt;a href="https://ibm.biz/veremin"&gt;ibm.biz/veremin&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We hope you enjoy Veremin. Please let us know what you think and share some of the beautiful music you make!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This article was written in collaboration with &lt;a href="https://medium.com/@johncohnvt"&gt;John Cohn&lt;/a&gt; (IBM Fellow at the MIT-IBM Watson AI Lab)&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>javascript</category>
      <category>tensorflow</category>
      <category>webaudio</category>
      <category>webmidi</category>
    </item>
    <item>
      <title>Bring Machine Learning to the Browser With TensorFlow.js — Part III</title>
      <dc:creator>//va</dc:creator>
      <pubDate>Tue, 08 Jan 2019 16:16:00 +0000</pubDate>
      <link>https://forem.com/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-iii-313</link>
      <guid>https://forem.com/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-iii-313</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;cross-post from &lt;a href="https://medium.com/ibm-watson-data-lab/bring-machine-learning-to-the-browser-with-tensorflow-js-part-iii-62d2b09b10a3"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Edited 2019 Mar 11&lt;/strong&gt; to include changes introduced in TensorFlow.js 1.0. Additional information about some of these TensorFlow.js 1.0 updates can be found &lt;a href="https://gist.github.com/caisq/3fc0beb6597f42d66be806c6692f310d"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  How to go from a web friendly format to a web application
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IHH1IUtU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AquHPlo7HdaXmbx_h63ep7A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IHH1IUtU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AquHPlo7HdaXmbx_h63ep7A.jpeg" alt="" width="880" height="586"&gt;&lt;/a&gt;“assorted-color textile leaves hanging decor” by &lt;a href="https://unsplash.com/@designecologist?utm_source=medium&amp;amp;utm_medium=referral"&gt;DESIGNECOLOGIST&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to part three of a series of posts where I walk you through how TensorFlow.js makes it possible to bring machine learning to the browser. First, there’s an overview of how to &lt;a href="https://dev.to/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-i-4f4m-temp-slug-2078470"&gt;bring a pre-trained model into a browser application&lt;/a&gt;. Then you’ll find greater detail on how to &lt;a href="https://dev.to/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-ii-1h6p-temp-slug-4911024"&gt;convert your pre-trained model to a web friendly format&lt;/a&gt;. Now in this post, we step through using that web friendly model in a web application.&lt;/p&gt;

&lt;p&gt;We continue with the &lt;a href="https://github.com/IBM/MAX-Image-Segmenter"&gt;Image Segmenter&lt;/a&gt; from the &lt;a href="https://developer.ibm.com/code/exchanges/models/"&gt;Model Asset Exchange (MAX)&lt;/a&gt; converted in &lt;a href="https://dev.to/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-ii-1h6p-temp-slug-4911024"&gt;part two of this series&lt;/a&gt;. The goal here is to give you a greater understanding of TensorFlow.js and how to utilize the model we made. We will create a basic web application without much style or additional libraries. To keep this article from getting too long and keep focus on TensorFlow.js, we will skip over the HTML and non-TensorFlow.js specific JavaScript code. But you can &lt;a href="https://github.com/vabarbosa/tfjs-sandbox/tree/master/image-segmenter/demo"&gt;review the complete application&lt;/a&gt; on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kJHdS-1A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/890/1%2A67gIaFjypeft8OpBKCf0IA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kJHdS-1A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/890/1%2A67gIaFjypeft8OpBKCf0IA.png" alt="" width="880" height="396"&gt;&lt;/a&gt;image-segmenter application output&lt;/p&gt;

&lt;h3&gt;
  
  
  Importing the model
&lt;/h3&gt;

&lt;p&gt;The first step in importing the model into the browser is to include the TensorFlow.js library in your HTML via a script tag (e.g., &lt;code&gt;&amp;lt;script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"&amp;gt;&amp;lt;/script&amp;gt;&lt;/code&gt;).&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This will load the latest version of TensorFlow.js, but you can also &lt;a href="https://js.tensorflow.org/#getting-started"&gt;target a specific version or load it via NPM&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With the library loaded, a global &lt;code&gt;tf&lt;/code&gt; variable becomes available for accessing its API. You can load the Image Segmenter model using the &lt;a href="https://js.tensorflow.org/api/1.0.0/#loadGraphModel"&gt;loadGraphModel&lt;/a&gt; API. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: With TensorFlow.js versions before 1.0, use the &lt;a href="https://js.tensorflow.org/api/0.15.1/#loadFrozenModel"&gt;loadFrozenModel&lt;/a&gt; API instead.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Pass the URL of the dataflow graph to the appropriate API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using TensorFlow.js version 0.x.x&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Using TensorFlow.js version 1.x.x&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
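&lt;p&gt;The two calls can be sketched as a single helper so the version difference is explicit. The model URLs are placeholders for wherever the converted model files are hosted:&lt;/p&gt;

```javascript
// Load the converted Image Segmenter with whichever API the loaded
// TensorFlow.js version provides.
async function loadSegmenter(tf) {
  if (tf.loadGraphModel) {
    // TensorFlow.js 1.x takes a single URL to model.json
    return tf.loadGraphModel('/model/model.json');
  }
  // TensorFlow.js 0.x takes the frozen graph plus a weights manifest
  return tf.loadFrozenModel('/model/tensorflowjs_model.pb',
                            '/model/weights_manifest.json');
}
```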


&lt;p&gt;Depending on the model size, loading may take some time. Once loaded, the model is ready to accept inputs and return a prediction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-processing the input
&lt;/h3&gt;

&lt;p&gt;Models will need the inputs to be of a specific type and/or size. In most cases, the input needs some sort of pre-processing before sending it to the model. For example, some models may require a one-dimensional array of a certain size while others may require more complex multi-dimensional inputs. So the input (e.g., image, sentence, etc.) would need to be pre-processed to the expected format.&lt;/p&gt;

&lt;p&gt;For the Image Segmenter, recall, when inspecting the model graph, the input was an &lt;code&gt;ImageTensor&lt;/code&gt;. It was of type and shape &lt;strong&gt;uint8[1,?,?,3]&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is a four-dimensional array of 8-bit unsigned integer values. The &lt;strong&gt;?&lt;/strong&gt;s are placeholders and can represent any length; they correspond to the height and width of the image. The &lt;strong&gt;1&lt;/strong&gt; corresponds to the batch size and the &lt;strong&gt;3&lt;/strong&gt; corresponds to the length of the RGB value for a given pixel.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For an 8-bit unsigned integer, valid values range from 0 to 255. This corresponds with an image’s pixel RGB values, which also range from 0 to 255. So, we should be able to take an image, convert it to a multi-dimensional array of RGB values, and send that to the model.&lt;/p&gt;

&lt;p&gt;To get a Tensor with the pixel values, you can use the &lt;a href="https://js.tensorflow.org/api/1.0.0/#browser.fromPixels"&gt;tf.browser.fromPixels&lt;/a&gt; (or &lt;a href="https://js.tensorflow.org/api/0.15.1/#fromPixels"&gt;tf.fromPixels&lt;/a&gt; for TensorFlow.js versions before 1.0) function provided by TensorFlow.js. This will produce a three-dimensional array with the shape &lt;strong&gt;[?, ?, 3]&lt;/strong&gt; from the given &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement"&gt;HTMLImageElement&lt;/a&gt;. But the Image Segmenter is expecting a four-dimensional array. To insert an extra dimension and get the shape needed, you will also need to call the &lt;a href="https://js.tensorflow.org/api/latest/#expandDims"&gt;expandDims&lt;/a&gt; function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using TensorFlow.js version 0.x.x&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Using TensorFlow.js version 1.x.x&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
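&lt;p&gt;Both versions of this pre-processing step can be sketched in one helper (the function name is ours):&lt;/p&gt;

```javascript
// Turn an HTMLImageElement into the uint8[1,?,?,3] input the model expects.
function imageToInput(tf, imageElement) {
  const pixels = tf.browser
    ? tf.browser.fromPixels(imageElement) // TensorFlow.js 1.x
    : tf.fromPixels(imageElement);        // TensorFlow.js 0.x
  // expandDims inserts a batch dimension:
  // [height, width, 3] becomes [1, height, width, 3]
  return pixels.expandDims();
}
```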


&lt;p&gt;You should now have the required input data to run the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running the model
&lt;/h3&gt;

&lt;p&gt;Run the model by calling &lt;a href="https://js.tensorflow.org/api/latest/#tf.Model.predict"&gt;predict&lt;/a&gt; with the input data. The function takes the input Tensor(s) and some optional configuration parameters. It returns a prediction.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
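&lt;p&gt;With the model and input tensor from the earlier steps (the names here are ours), running the model is a single call:&lt;/p&gt;

```javascript
// Await the loaded model, then run prediction on the prepared input tensor.
async function segment(modelPromise, inputTensor) {
  const model = await modelPromise;
  return model.predict(inputTensor);
}
```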


&lt;p&gt;Computations are in batches. If needed, you can run prediction on a single batch with the &lt;a href="https://js.tensorflow.org/api/latest/#tf.Model.predictOnBatch"&gt;predictOnBatch&lt;/a&gt; function.&lt;/p&gt;

&lt;p&gt;Depending on the model complexity, the prediction may take some time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Processing the output
&lt;/h3&gt;

&lt;p&gt;The type and shape of the output returned depend on the model. To do something meaningful, extra processing of the prediction is most likely required.&lt;/p&gt;

&lt;p&gt;For the Image Segmenter, the output is a &lt;a href="https://github.com/IBM/MAX-Image-Segmenter#ibm-developer-model-asset-exchange-image-segmentation"&gt;segmentation map&lt;/a&gt; with integers between 0 and 20. Each integer corresponds to one of the pre-defined labels for each pixel in the input image.&lt;/p&gt;

&lt;p&gt;In our web application, we are going to overlay the original image with the segments found, each segment color coded. For example, RGB (192, 0, 0) for chairs and RGB (0, 64, 0) for potted plants.&lt;/p&gt;

&lt;p&gt;To achieve this, start with the &lt;a href="https://js.tensorflow.org/api/latest/#tf.Tensor.dataSync"&gt;dataSync&lt;/a&gt; Tensor function. The function will download the output tensor into a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray"&gt;TypedArray&lt;/a&gt;. Next convert the TypedArray into a regular array with &lt;code&gt;Array.from&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With a &lt;a href="https://github.com/vabarbosa/tfjs-sandbox/blob/master/image-segmenter/assets/color-map.json"&gt;color map&lt;/a&gt;, go through the converted array and assign the appropriate color to each segment. Then take this data to create the desired overlay image.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;You can now add the resulting image to your HTML page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Completing the web application
&lt;/h3&gt;

&lt;p&gt;To complete the application, add buttons to load the model, upload an image, and run the model. Also, add the code to overlay the input image with the output prediction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qBlPWPMo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/915/1%2AhS4mY8p_bmcv7s32jxg_fg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qBlPWPMo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/915/1%2AhS4mY8p_bmcv7s32jxg_fg.png" alt="" width="880" height="414"&gt;&lt;/a&gt;running image-segmenter application&lt;/p&gt;

&lt;p&gt;You can find the completed project &lt;a href="https://github.com/vabarbosa/tfjs-sandbox/tree/master/image-segmenter"&gt;here&lt;/a&gt;. The repository contains the demo web application and the web friendly model produced by the &lt;a href="https://github.com/tensorflow/tfjs-converter"&gt;tensorflowjs_converter&lt;/a&gt;. You will also find a Jupyter notebook for playing with the Image Segmenter in Python.&lt;/p&gt;

&lt;h3&gt;
  
  
  Machine Learning in JavaScript
&lt;/h3&gt;

&lt;p&gt;The ability to use machine learning technology on the web is often limited. Creating and training some models involves massive amounts of data and intense computations, so the browser may not be the ideal environment for that. But an exciting use case is to take models trained elsewhere, then import and run them in the browser.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://js.tensorflow.org"&gt;TensorFlow.js&lt;/a&gt; you can convert some of these models into a web friendly format and bring them into your web application, making machine learning in JavaScript that much easier.&lt;/p&gt;

&lt;p&gt;To check out even more interesting applications take a look at &lt;a href="https://medium.com/u/6c8f3f54d771"&gt;Nick Kasten&lt;/a&gt;’s &lt;a href="https://github.com/CODAIT/magicat"&gt;magicat&lt;/a&gt; or the TensorFlow.js version of his &lt;a href="https://github.com/kastentx/MAX-Image-Segmenter-Web-App/tree/TFJS"&gt;Magic Cropping Tool&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>tensorflow</category>
      <category>javascript</category>
      <category>machinelearning</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Bring Machine Learning to the Browser With TensorFlow.js — Part II</title>
      <dc:creator>//va</dc:creator>
      <pubDate>Tue, 04 Dec 2018 16:31:01 +0000</pubDate>
      <link>https://forem.com/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-ii-1e3i</link>
      <guid>https://forem.com/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-ii-1e3i</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;cross-post from &lt;a href="https://medium.com/ibm-watson-data-lab/bring-machine-learning-to-the-browser-with-tensorflow-js-part-ii-7555ed9a999e" rel="noopener noreferrer"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Edited 2019 Mar 11&lt;/strong&gt; to include changes introduced in TensorFlow.js 1.0. Additional information about some of these TensorFlow.js 1.0 updates can be found &lt;a href="https://gist.github.com/caisq/3fc0beb6597f42d66be806c6692f310d" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Converting a pre-trained model to a web friendly format
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AQwZ1EoUo6slEwlJVpNwTnw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AQwZ1EoUo6slEwlJVpNwTnw.jpeg"&gt;&lt;/a&gt;assorted-color leaves by &lt;a href="https://unsplash.com/@chrislawton?utm_source=medium&amp;amp;utm_medium=referral" rel="noopener noreferrer"&gt;Chris Lawton&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’ve &lt;a href="https://dev.to/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-i-4f4m-temp-slug-2078470"&gt;been following along&lt;/a&gt;, you should already have a high level understanding of how to bring a pre-trained model into a browser application. Now, let’s look at the first steps in this process in greater detail.&lt;/p&gt;

&lt;p&gt;Before you can convert a pre-trained model to a web-friendly format and bring it to the browser, you first need a model. A great first model to start learning with is the &lt;a href="https://github.com/IBM/MAX-Image-Segmenter" rel="noopener noreferrer"&gt;Image Segmenter&lt;/a&gt; from the &lt;a href="https://developer.ibm.com/code/exchanges/models/" rel="noopener noreferrer"&gt;Model Asset Exchange (MAX)&lt;/a&gt;. You can deploy and run the Image Segmenter model through Kubernetes or Docker Hub. To get an idea of what it does you can check out &lt;a href="https://medium.com/u/6c8f3f54d771" rel="noopener noreferrer"&gt;Nick Kasten&lt;/a&gt;’s &lt;a href="https://developer.ibm.com/patterns/max-image-segmenter-magic-cropping-tool-web-app/" rel="noopener noreferrer"&gt;Magic Cropping Tool&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get the Model
&lt;/h3&gt;

&lt;p&gt;You can begin by downloading and extracting the &lt;a href="http://max-assets.s3-api.us-geo.objectstorage.softlayer.net/deeplab/deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz" rel="noopener noreferrer"&gt;model files&lt;/a&gt; used in the MAX Image Segmenter. The extracted contents contain a frozen model graph. &lt;a href="https://cv-tricks.com/how-to/freeze-tensorflow-models" rel="noopener noreferrer"&gt;Frozen graphs&lt;/a&gt; encapsulate all required model data in a single file (with a .pb extension).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The TensorFlow.js converter supports Keras (i.e., HDF5) and TensorFlow (e.g., frozen graphs, SavedModel) models.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A few other model formats you may encounter include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.tensorflow.org/guide/checkpoints" rel="noopener noreferrer"&gt;Checkpoints&lt;/a&gt; which contain information needed to save the current state of the model. Then resume training after loading the checkpoint. Checkpoints are not supported by the converter.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md" rel="noopener noreferrer"&gt;SavedModel&lt;/a&gt; which is the universal serialization format for TensorFlow. Unlike checkpoints, SavedModels store the model data in a language-neutral format.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hierarchical_Data_Format" rel="noopener noreferrer"&gt;HDF5&lt;/a&gt; which is the format used by Keras to store model data. It is a grid format popular for storing multi-dimensional arrays of numbers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Know the Model
&lt;/h3&gt;

&lt;p&gt;It is good practice to review and understand a model before you use it. You don’t need to know every little detail about the model, but a good start is to get to know the &lt;a href="https://www.tensorflow.org/extend/tool_developers/" rel="noopener noreferrer"&gt;model’s format&lt;/a&gt;, inputs, and outputs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In most cases, you will have to pre-process the input(s) to the model, as well as process the model output(s).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Learn about the model’s inputs, outputs, and operations by inspecting the model’s graph. One useful and easy-to-use visual tool for viewing machine learning models is &lt;a href="https://github.com/lutzroeder/Netron" rel="noopener noreferrer"&gt;Netron&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To inspect the Image Segmenter model, open the extracted &lt;code&gt;frozen_inference_graph.pb&lt;/code&gt; file in Netron. You can zoom out to see the scope and size of the model. Likewise, you can zoom in to specific nodes/operations on the graph.&lt;/p&gt;

&lt;p&gt;Without clicking on any nodes, click on the hamburger/menu icon to see the model’s properties (e.g., number of operators, input type, etc.). In addition, click on a specific node to view its properties. Alternatively, you can press &lt;code&gt;CTRL+F&lt;/code&gt; to open the search panel and type a node name to jump to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A4RfJ7WTT6LBaSB0AivsyWg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A4RfJ7WTT6LBaSB0AivsyWg.png"&gt;&lt;/a&gt;Image Segmenter model properties&lt;/p&gt;

&lt;p&gt;The input for the Image Segmenter is an &lt;code&gt;ImageTensor&lt;/code&gt; of type &lt;strong&gt;uint8[1,?,?,3]&lt;/strong&gt;. This is a four-dimensional array of 8-bit unsigned integer values in the shape of &lt;strong&gt;1,?,?,3&lt;/strong&gt;. The &lt;strong&gt;?&lt;/strong&gt;s are placeholders and can represent any length; they correspond to the height and width of the image. The &lt;strong&gt;1&lt;/strong&gt; corresponds to the batch size and the &lt;strong&gt;3&lt;/strong&gt; corresponds to the RGB value for a given pixel, which is three numbers.&lt;/p&gt;
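&lt;p&gt;&lt;em&gt;To make the shape concrete, here is a tiny 2x2 "image" written out as plain nested arrays (the real input is a Tensor, not a JavaScript array):&lt;/em&gt;&lt;/p&gt;

```javascript
// Illustrating the uint8[1, ?, ?, 3] input shape with a 2x2 image.
const imageTensor = [   // batch size 1
  [                     // 2 rows (image height)
    [[255, 0, 0], [0, 255, 0]],       // 2 RGB pixels per row (image width)
    [[0, 0, 255], [255, 255, 255]]
  ]
];
// shape: [1, 2, 2, 3]
```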

&lt;p&gt;Clicking on the last node (&lt;code&gt;Slice&lt;/code&gt;), you get its name (i.e., &lt;code&gt;SemanticPredictions&lt;/code&gt;) and attributes. The name is important to remember. You will need to provide it to the converter tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ADGiFs1kbq0JIJjMxb2NFBQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ADGiFs1kbq0JIJjMxb2NFBQ.png"&gt;&lt;/a&gt;Image Segmenter node properties&lt;/p&gt;

&lt;p&gt;Other available options to view information about a graph are the &lt;a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md#inspecting-graphs" rel="noopener noreferrer"&gt;summarize_graph&lt;/a&gt; and &lt;a href="https://www.tensorflow.org/guide/graph_viz" rel="noopener noreferrer"&gt;TensorBoard&lt;/a&gt; tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Change the Model
&lt;/h3&gt;

&lt;p&gt;You are now ready to run &lt;a href="https://github.com/tensorflow/tfjs-converter" rel="noopener noreferrer"&gt;tensorflowjs_converter&lt;/a&gt; to get your web friendly format.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The converter is available through the command line after installing the &lt;code&gt;tensorflowjs&lt;/code&gt; Python package.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To convert the Image Segmenter, specify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SemanticPredictions&lt;/strong&gt; for the &lt;code&gt;output_node_names&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tf_frozen_model&lt;/strong&gt; for the &lt;code&gt;input_format&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;file path to the frozen graph&lt;/li&gt;
&lt;li&gt;directory path to store the converted model&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
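&lt;p&gt;&lt;em&gt;Putting those options together, the conversion command looks something like this (the output directory name is illustrative):&lt;/em&gt;&lt;/p&gt;

```shell
tensorflowjs_converter \
    --input_format=tf_frozen_model \
    --output_node_names='SemanticPredictions' \
    frozen_inference_graph.pb \
    web_model
```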


&lt;p&gt;If successful, the &lt;code&gt;tensorflowjs_converter&lt;/code&gt; outputs the dataflow graph (&lt;code&gt;model.json&lt;/code&gt;) and shards of binary weight files. The shard files are small in size to support easier browser caching.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If using &lt;code&gt;tensorflowjs_converter&lt;/code&gt; version before 1.0, the output produced includes the graph (&lt;code&gt;tensorflowjs_model.pb&lt;/code&gt;), weights manifest (&lt;code&gt;weights_manifest.json&lt;/code&gt;), and the binary shards files.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Conversion can fail because of unsupported operations, or the converted model may be too large to be useful. In either case, there are steps you may be able to take.&lt;/p&gt;

&lt;p&gt;To make the web friendly model smaller you can convert only part of the model graph. Any unused nodes, or nodes used only during training, can be stripped. This is not needed with the Image Segmenter model, but if you had to strip unused nodes, it would look something like the following:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
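&lt;p&gt;&lt;em&gt;A sketch using TensorFlow's &lt;code&gt;strip_unused&lt;/code&gt; tool (the node names follow the Image Segmenter graph inspected earlier; the output path is illustrative):&lt;/em&gt;&lt;/p&gt;

```shell
python -m tensorflow.python.tools.strip_unused \
    --input_graph=frozen_inference_graph.pb \
    --output_graph=stripped_graph.pb \
    --input_node_names=ImageTensor \
    --output_node_names=SemanticPredictions \
    --input_binary=true
```

&lt;p&gt;&lt;em&gt;The stripped graph (&lt;code&gt;stripped_graph.pb&lt;/code&gt; here) can then be passed to the &lt;code&gt;tensorflowjs_converter&lt;/code&gt; in place of the original frozen graph.&lt;/em&gt;&lt;/p&gt;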


&lt;p&gt;This is also useful for failures caused by unsupported operations. For some unsupported operations, use &lt;a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/strip_unused_lib.py#L32" rel="noopener noreferrer"&gt;strip_unused&lt;/a&gt; to bypass the operation. You can then convert the stripped graph to get a web friendly version.&lt;/p&gt;

&lt;p&gt;This helps get the model converted, but it also adds extra work: you may need to implement the unsupported operation outside of the model and apply it to the model’s input(s) and/or output(s) yourself.&lt;/p&gt;

&lt;p&gt;More options to further &lt;a href="https://www.tensorflow.org/lite/performance/model_optimization" rel="noopener noreferrer"&gt;optimize&lt;/a&gt; the model are available.&lt;/p&gt;

&lt;h3&gt;
  
  
  More to Come…
&lt;/h3&gt;

&lt;p&gt;Your pre-trained model should now be converted to the format supported by TensorFlow.js. You can load the converted format and run it in a browser environment.&lt;/p&gt;

&lt;p&gt;Stay tuned for the &lt;a href="https://medium.com/ibm-watson-data-lab/bring-machine-learning-to-the-browser-with-tensorflow-js-part-iii-62d2b09b10a3" rel="noopener noreferrer"&gt;follow up to this article&lt;/a&gt; to learn how to take the converted model and use it in a web application.&lt;/p&gt;




</description>
      <category>tensorflow</category>
      <category>machinelearning</category>
      <category>javascript</category>
      <category>python</category>
    </item>
    <item>
      <title>Bring Machine Learning to the Browser with TensorFlow.js — Part I</title>
      <dc:creator>//va</dc:creator>
      <pubDate>Fri, 26 Oct 2018 13:31:01 +0000</pubDate>
      <link>https://forem.com/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-i-5f6b</link>
      <guid>https://forem.com/vabarbosa/bring-machine-learning-to-the-browser-with-tensorflowjspart-i-5f6b</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;cross-post from &lt;a href="https://medium.com/ibm-watson-data-lab/bring-machine-learning-to-the-browser-with-tensorflow-js-part-i-16924457291c"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Edited 2019 Mar 11&lt;/strong&gt; to include changes introduced in TensorFlow.js 1.0. Additional information about some of these TensorFlow.js 1.0 updates can be found &lt;a href="https://gist.github.com/caisq/3fc0beb6597f42d66be806c6692f310d"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Applying a web friendly format to a pre-trained model and ending up with a web application.
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fw1JUf0P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AgA9FhjIFE9h9BbrZ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fw1JUf0P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AgA9FhjIFE9h9BbrZ" alt="" width="880" height="586"&gt;&lt;/a&gt;“assorted-color leaf hanging decor” by &lt;a href="https://unsplash.com/@chrislawton?utm_source=medium&amp;amp;utm_medium=referral"&gt;Chris Lawton&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://js.tensorflow.org"&gt;TensorFlow.js&lt;/a&gt; brings machine learning and its possibilities to JavaScript. It is an open source library built to create, train, and run machine learning models in the browser (and Node.js).&lt;/p&gt;

&lt;p&gt;Training and building complex models can take a considerable amount of resources and time. Some models require massive amounts of data to provide acceptable accuracy. And, if computationally intensive, may require hours or days of training to complete. Thus, you may not find the browser to be the ideal environment for building such models.&lt;/p&gt;

&lt;p&gt;A more appealing use case is importing and running existing models. You train or get models trained in powerful, specialized environments then you import and run the models in the browser for &lt;a href="https://medium.com/ibm-watson-data-lab/taking-selfies-and-more-to-the-next-level-with-open-source-deep-learning-models-de9ac8da4480"&gt;impressive user experiences&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Converting the model
&lt;/h3&gt;

&lt;p&gt;Before you can use a pre-trained model in TensorFlow.js, the model needs to be in a web friendly format. For this, TensorFlow.js provides the &lt;a href="https://github.com/tensorflow/tfjs-converter"&gt;tensorflowjs_converter&lt;/a&gt; tool. The tool converts TensorFlow and Keras models to the required web friendly format. The converter is available after you install the &lt;code&gt;tensorflowjs&lt;/code&gt; Python package.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
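&lt;p&gt;&lt;em&gt;Installing the package makes the converter available on the command line:&lt;/em&gt;&lt;/p&gt;

```shell
# installs the tensorflowjs Python package, which provides tensorflowjs_converter
pip install tensorflowjs
```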


&lt;p&gt;The &lt;code&gt;tensorflowjs_converter&lt;/code&gt; expects the model and the output directory as inputs. You can also pass optional parameters to further customize the conversion process.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
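&lt;p&gt;&lt;em&gt;For example, converting a Keras HDF5 model might look something like this (the paths are illustrative):&lt;/em&gt;&lt;/p&gt;

```shell
tensorflowjs_converter \
    --input_format=keras \
    path/to/model.h5 \
    path/to/web_model
```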


&lt;p&gt;The output of &lt;code&gt;tensorflowjs_converter&lt;/code&gt; is a set of files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;model.json&lt;/code&gt; — the dataflow graph&lt;/li&gt;
&lt;li&gt;A group of binary weight files, called shards. Each shard is small in size for easier browser caching, and the number of shards depends on the initial model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--afdQG5zE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ASWgnSA-21aDkddoQTLaD2A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--afdQG5zE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ASWgnSA-21aDkddoQTLaD2A.png" alt="" width="806" height="382"&gt;&lt;/a&gt;tensorflowjs_converter output files&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If using &lt;code&gt;tensorflowjs_converter&lt;/code&gt; version before 1.0, output produced includes the graph (&lt;code&gt;tensorflowjs_model.pb&lt;/code&gt;), weights manifest (&lt;code&gt;weights_manifest.json&lt;/code&gt;), and the binary shards files.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Run model run
&lt;/h3&gt;

&lt;p&gt;Once converted, the model is ready to load into TensorFlow.js for predictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using TensorFlow.js version 0.x.x&lt;/strong&gt;:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
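&lt;p&gt;&lt;em&gt;A sketch of loading the converted model with the 0.x API (the URLs are illustrative; &lt;code&gt;tf&lt;/code&gt; is the imported TensorFlow.js module, passed in here to keep the helper easy to test):&lt;/em&gt;&lt;/p&gt;

```javascript
// In 0.x, the graph file and weights manifest are loaded separately.
const MODEL_URL = 'https://example.com/model/tensorflowjs_model.pb';
const WEIGHTS_URL = 'https://example.com/model/weights_manifest.json';

async function loadModel(tf) {
  // tf.loadFrozenModel fetches the graph and weights, returning the model
  const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL);
  return model;
}
```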


&lt;p&gt;&lt;strong&gt;Using TensorFlow.js version 1.x.x&lt;/strong&gt;:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
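&lt;p&gt;&lt;em&gt;A sketch of loading the converted model with the 1.x API (the URL is illustrative; &lt;code&gt;tf&lt;/code&gt; is the imported TensorFlow.js module, passed in here to keep the helper easy to test):&lt;/em&gt;&lt;/p&gt;

```javascript
// In 1.x, a single model.json references all the binary weight shards.
const MODEL_URL = 'https://example.com/model/model.json';

async function loadModel(tf) {
  // tf.loadGraphModel fetches model.json and its weight shards
  const model = await tf.loadGraphModel(MODEL_URL);
  return model;
}
```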


&lt;p&gt;The imported model behaves the same as models trained and created directly with TensorFlow.js.&lt;/p&gt;

&lt;h3&gt;
  
  
  Convert all models?
&lt;/h3&gt;

&lt;p&gt;You may find it tempting to grab any and all models, convert them to the web friendly format, and run them in the browser. But this is not always possible or recommended. There are several factors for you to keep in mind.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;tensorflowjs_converter&lt;/code&gt; command can only convert Keras and TensorFlow models. Some supported model formats include &lt;a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md"&gt;SavedModel&lt;/a&gt;, &lt;a href="https://www.tensorflow.org/extend/tool_developers/"&gt;Frozen Model&lt;/a&gt;, and &lt;a href="https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model"&gt;HDF5&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;TensorFlow.js does not support all TensorFlow operations. It currently has a limited set of &lt;a href="https://github.com/tensorflow/tfjs-converter/blob/master/docs/supported_ops.md"&gt;supported operations&lt;/a&gt;. As a result, the converter will fail if the model contains unsupported operations.&lt;/p&gt;

&lt;p&gt;Treating the model as a black box is not always enough. Just because you can convert the model and produce a web friendly version does not mean all is well.&lt;/p&gt;

&lt;p&gt;Depending on a model’s size or architecture, its performance could be less than desirable. &lt;a href="https://www.tensorflow.org/performance/model_optimization"&gt;Further optimization&lt;/a&gt; of the model is often required. In most cases, you will have to pre-process the input(s) to the model, as well as process the model output(s). So, some understanding of the model’s inner workings is almost a given.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting to know your model
&lt;/h3&gt;

&lt;p&gt;Presumably you have a model available to you. If not, resources exist with an ever-growing collection of pre-trained models. A couple of them include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/tensorflow/models"&gt;TensorFlow Models&lt;/a&gt; —a set of official and research models implemented in TensorFlow&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.ibm.com/code/exchanges/models/"&gt;Model Asset Exchange&lt;/a&gt; —a set of deep learning models covering different frameworks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These resources provide the model for you to download. They can also include information about the model, useful assets, and links to learn more.&lt;/p&gt;

&lt;p&gt;You can review a model with tools such as &lt;a href="https://www.tensorflow.org/guide/summaries_and_tensorboard"&gt;TensorBoard&lt;/a&gt;. Its &lt;a href="https://www.tensorflow.org/guide/graph_viz"&gt;graph visualization&lt;/a&gt; can help you better understand the model.&lt;/p&gt;

&lt;p&gt;Another option is &lt;a href="https://github.com/lutzroeder/Netron"&gt;Netron&lt;/a&gt;, a visualizer for deep learning and machine learning models. It provides an overview of the graph and you can inspect the model’s operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sYmO8m1c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AK-AuH-uThtUeJJxWPMrMAQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sYmO8m1c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AK-AuH-uThtUeJJxWPMrMAQ.png" alt="" width="880" height="926"&gt;&lt;/a&gt;visualizing a model with Netron&lt;/p&gt;

&lt;h3&gt;
  
  
  To be continued…
&lt;/h3&gt;

&lt;p&gt;Stay tuned for the follow up to this article to learn how to pull this all together. You will step through this process in greater detail with an actual model and you will &lt;a href="https://medium.com/ibm-watson-data-lab/bring-machine-learning-to-the-browser-with-tensorflow-js-part-ii-7555ed9a999e"&gt;take a pre-trained model into web friendly format&lt;/a&gt; and &lt;a href="https://medium.com/ibm-watson-data-lab/bring-machine-learning-to-the-browser-with-tensorflow-js-part-iii-62d2b09b10a3"&gt;end up with a web application&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>tensorflow</category>
      <category>javascript</category>
      <category>machinelearning</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
