<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Charlie Harrington</title>
    <description>The latest articles on Forem by Charlie Harrington (@whatrocks).</description>
    <link>https://forem.com/whatrocks</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F95036%2F5929b8bb-2980-4fed-b76d-6df21f00e371.png</url>
      <title>Forem: Charlie Harrington</title>
      <link>https://forem.com/whatrocks</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/whatrocks"/>
    <language>en</language>
    <item>
      <title>An Afternoon with Arduino</title>
      <dc:creator>Charlie Harrington</dc:creator>
      <pubDate>Fri, 29 Mar 2019 18:25:20 +0000</pubDate>
      <link>https://forem.com/whatrocks/an-afternoon-with-arduino-42fc</link>
      <guid>https://forem.com/whatrocks/an-afternoon-with-arduino-42fc</guid>
      <description>&lt;p&gt;For the last eight years or so, I've been carrying around an &lt;a href="https://www.arduino.cc/" rel="noopener noreferrer"&gt;Arduino&lt;/a&gt; (not literally on my person, but, you know, amongst my treasures), waiting for just the right time to start tinkering with it.&lt;/p&gt;

&lt;p&gt;That time, it turns out, was yesterday afternoon. This post outlines some of the things I've learned so far.&lt;/p&gt;

&lt;h3&gt;Ard-what-now?&lt;/h3&gt;

&lt;p&gt;An Arduino is a &lt;strong&gt;microcontroller&lt;/strong&gt; board. The board contains a CPU (central processing unit) along with some I/O (input / output) connections. You can think of it as a small circuit - a circuit that happens to contain a programmable computer on a chip, and a chip that you can easily learn to program.&lt;/p&gt;

&lt;p&gt;Here's a picture of my Arduino (of the &lt;a href="https://store.arduino.cc/usa/arduino-uno-rev3" rel="noopener noreferrer"&gt;Uno&lt;/a&gt; varietal):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F4fvs83wn7ftsc3et62cz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F4fvs83wn7ftsc3et62cz.jpg" alt="Uno"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I love the MADE IN ITALY mark in the upper left.&lt;/p&gt;

&lt;p&gt;The microcontroller (the computer chip) is the long flat black rectangle near the bottom-right corner of the device. It has 28 pins (14 on each side). Along the top edge of the Arduino you can see a strip of black input pins - these are the Arduino's "digital" pins (meaning that they can either be "on" or "off"). Below the CPU on the bottom right you can see another six "analog" pins (meaning that they can receive analog / continuous signals), and then a few "power" pins to the left to provide voltage, ground, and some other stuff that I don't know about yet.&lt;/p&gt;

&lt;h4&gt;Arduino vs Raspberry Pi&lt;/h4&gt;

&lt;p&gt;At this point, you might be asking yourself, "How is an Arduino different from a &lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi&lt;/a&gt;?"&lt;/p&gt;

&lt;p&gt;It's a good question, since both are affordable, adorable, tiny little computers that you can buy for about $30 or less. But the Raspberry Pi is a full-on Linux computer. An Arduino is... not. Instead, the Arduino holds just one program at a time. It stores this program in durable flash memory, so you can turn the Arduino on and off, and it will still "remember" its latest program. It's more like a single-purpose device -- except you can dream up and build that single purpose as many times as you want.&lt;/p&gt;

&lt;h4&gt;Open-source roots&lt;/h4&gt;

&lt;p&gt;The company behind Arduino is a non-profit and the Arduino itself is open-source - which means that anyone can build an Arduino board themselves. The original idea behind Arduino was to make a simple device that designers and artists could use for rapid prototyping of &lt;em&gt;physical computing&lt;/em&gt; projects that use &lt;strong&gt;sensors&lt;/strong&gt; (aka &lt;em&gt;inputs&lt;/em&gt; like a keyboard or mouse or motion detectors) and &lt;strong&gt;actuators&lt;/strong&gt; (aka &lt;em&gt;outputs&lt;/em&gt; like a display or printer or lights) to interact and communicate with us human beans. Cool.&lt;/p&gt;

&lt;p&gt;Arduino is closely tied to the &lt;a href="https://processing.org/" rel="noopener noreferrer"&gt;Processing community&lt;/a&gt;. In fact, the Arduino IDE is derived from the Processing IDE, and -- just like in Processing -- Arduino programs are called Sketches (though Arduino sketches are written in C/C++ rather than Processing's Java-based language). I was happy to see this, since one of my earliest computing classes was &lt;a href="https://twitter.com/blprnt" rel="noopener noreferrer"&gt;Jer Thorp&lt;/a&gt;'s &lt;a href="http://blog.blprnt.com/workshops" rel="noopener noreferrer"&gt;Introduction to Processing course&lt;/a&gt; (which I unabashedly recommend, by the way).&lt;/p&gt;

&lt;p&gt;As I mentioned, anyone can download the open-source schematics for Arduino and build a board themselves with basic components. But if you'd like to make your tinkering lifestyle easier, then I suggest picking up a pre-assembled Arduino from a retailer like &lt;a href="https://www.makershed.com/" rel="noopener noreferrer"&gt;Makershed&lt;/a&gt; or &lt;a href="https://www.adafruit.com" rel="noopener noreferrer"&gt;Adafruit&lt;/a&gt;. The kit I bought (eight years ago) is the &lt;a href="https://www.makershed.com/products/make-getting-started-with-arduino-kit-special-edition" rel="noopener noreferrer"&gt;MAKE: Getting Started With Arduino Kit&lt;/a&gt; -- the current version (v3) appears to be retailing for $79.99. IMHO, it's definitely worth it -- the kit comes with a bunch of goodies that help you get started right away, like a breadboard, colorful wires, clickable switches, LEDs, sensors, and a friendly introductory book. I also picked up &lt;a href="https://www.amazon.com/Arduino-Quick-Start-Guide-Quick-start-Guides/dp/1934356662" rel="noopener noreferrer"&gt;Arduino: A Quick-Start Guide&lt;/a&gt; by &lt;a href="https://twitter.com/maik_schmidt" rel="noopener noreferrer"&gt;Maik Schmidt&lt;/a&gt;, and I've been enjoying this book as well.&lt;/p&gt;

&lt;h3&gt;Fun with LEDs&lt;/h3&gt;

&lt;p&gt;I believe you're legally required to write a program that blinks an LED on and off as your first project with Arduino.&lt;/p&gt;

&lt;p&gt;If you asked me a few days ago about LEDs - yeah, sure, I know about LEDs. Those little red lights in things like my Game Boy. Stands for... light emitting... diode.&lt;/p&gt;

&lt;p&gt;Great, you continue, what's a diode?&lt;/p&gt;

&lt;p&gt;Um.&lt;/p&gt;

&lt;p&gt;This is already one of the fun things about playing with Arduino. There's all sorts of basic electronics stuff that I sorta know about, but couldn't explain to a five-year-old or to a &lt;a href="https://en.wikipedia.org/wiki/Rubber_duck_debugging" rel="noopener noreferrer"&gt;rubber duck on my desk&lt;/a&gt;. Or just don't know at all. But Arduino has helped me tackle these topics in a practical, tangible way.&lt;/p&gt;

&lt;p&gt;So, let's take a look at an LED together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Faq63yj6k7zmgbbn62d6y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Faq63yj6k7zmgbbn62d6y.jpg" alt="LED"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See the longer pin / leg sticking out of the red part? That's the &lt;strong&gt;anode&lt;/strong&gt; terminal. The anode is the positive end of the LED. The shorter leg is the &lt;strong&gt;cathode&lt;/strong&gt; - the negative side. Conventional current flows from the anode to the cathode when connected (the electrons themselves actually drift the other way). You'll want to connect the positive end to something providing voltage, and the negative end to ground. All diodes are polarized, meaning they have these distinct positive and negative sides. And LEDs (light-emitting diodes) happen to provide illumination when they're part of an active circuit.&lt;/p&gt;

&lt;p&gt;And they can be lots of pretty colors, too.&lt;/p&gt;

&lt;p&gt;Okay, so here's our legally-required sketch for blinking an LED connected to digital pin 13 every half-second.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;LED&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;pinMode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;OUTPUT&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;digitalWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;HIGH&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;digitalWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LOW&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty simple, right? We first declare a constant variable for the pin we're using. The &lt;code&gt;setup()&lt;/code&gt; function will run once per program, right before the &lt;code&gt;loop()&lt;/code&gt; kicks off its infinite loop, so we'll just let the Arduino know that we want to set pin 13 to &lt;code&gt;OUTPUT&lt;/code&gt; mode. And then during our infinite loop, we'll toggle the voltage to the pin by passing &lt;code&gt;HIGH&lt;/code&gt; (5 volts) or &lt;code&gt;LOW&lt;/code&gt; (0 volts) to our pin 13 using the &lt;code&gt;digitalWrite&lt;/code&gt; function, pausing 500 milliseconds between these operations.&lt;/p&gt;

&lt;p&gt;If you're coming from the Processing world, then this program structure of &lt;code&gt;setup()&lt;/code&gt; and &lt;code&gt;loop()&lt;/code&gt; should look very familiar, since it's nearly identical (Processing just calls its loop function &lt;code&gt;draw()&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The Arduino IDE provides an easy way to verify that your programs compile before flashing them over to your actual Arduino, so I suggest clicking the &lt;code&gt;Verify&lt;/code&gt; button first. This will catch syntax errors, like pesky missing semicolons.&lt;/p&gt;

&lt;p&gt;Next, we can set up our physical device.&lt;/p&gt;

&lt;p&gt;I'm going to stick the LED into the Arduino, with the anode leg going into pin 13 and the cathode leg into ground. Note here that pin 13 is a special pin on the Arduino that has a current-limiting resistor built in. If you try this with any other pin on the Arduino, you risk burning out the LED.&lt;/p&gt;

&lt;p&gt;Finally, we can send our program from our computer to the Arduino over a USB connection by clicking the &lt;code&gt;Upload&lt;/code&gt; button in the IDE. Your Arduino should flash happily once it's complete, and then it's off to the infinite races.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F504ctt601fl3mxoffpu2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F504ctt601fl3mxoffpu2.gif" alt="Binary Counting"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Look at that blinker. Pretty great, huh? Note that this gif definitely speeds things up a bit.&lt;/p&gt;

&lt;h3&gt;Putting the "S" in USB&lt;/h3&gt;

&lt;p&gt;So, as I continued building stuff, I inevitably found myself wanting to &lt;code&gt;console.log&lt;/code&gt; the heck out of a program that wasn't working.&lt;/p&gt;

&lt;p&gt;Let's talk about printing stuff with Arduino.&lt;/p&gt;

&lt;p&gt;Your Arduino is connected to your computer via a USB cable. USB. USB. That has to stand for something, right? It does. It stands for "Universal Serial Bus." USB is a quote "industry standard" for communications between computers and peripherals. If you think back really hard to the time of Captain Marvel or even earlier, you might remember other ways that we connected peripherals to our computers -- like a dot-matrix printer's parallel port or a PS/2 keyboard port. Well, in the time since Carol Danvers left us here to fend for ourselves, USB has taken over our hearts, minds, and wallets. But we're still using a "serial connection" when we're using USB devices - so we'll need to use the serial protocol to communicate with our Arduino.&lt;/p&gt;

&lt;p&gt;In other words, if we want to send or receive info from our Arduino program, we need to establish a serial connection with the device. Here's how you do that in an Arduino sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;unsigned&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;BAUD_RATE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;9600&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Serial&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;begin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BAUD_RATE&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Serial&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello, world!"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Baud rate, huh? I know this &lt;code&gt;baud&lt;/code&gt; term, too. Modems had baud rates, IIRC. Some Wikipedia-ing and Google-ing reveal that the baud rate is the rate at which information is transferred over a serial channel. In this case, with a baud rate of 9600, we're transferring a max of 9600 bits per second. 9600 happens to be the conventional default for Arduino sketches, but you can choose other standard rates (like 115200) -- just make sure the Serial Monitor is set to the same rate.&lt;/p&gt;

&lt;p&gt;To view your "console", you can click the "Serial Monitor" button in the IDE. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fynkzzujsyg0lpk5cmo4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fynkzzujsyg0lpk5cmo4h.png" alt="Hello"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition to viewing received information, you can also send messages back to the Arduino in this monitor using the text input on the top panel and the &lt;code&gt;Send&lt;/code&gt; button. For example, you might write a program that toggles an LED on or off based on a specific input key.&lt;/p&gt;
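
&lt;p&gt;Here's a rough sketch of that last idea -- the pin number and the '1' / '0' keys are just my own arbitrary choices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;// Type '1' or '0' into the Serial Monitor to toggle the LED
const int LED = 13;

void setup() {
  pinMode(LED, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Only read when at least one byte has arrived over serial
  if (Serial.available() &amp;gt; 0) {
    char key = Serial.read();
    if (key == '1') {
      digitalWrite(LED, HIGH);
    } else if (key == '0') {
      digitalWrite(LED, LOW);
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;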

&lt;p&gt;What if you don't want to use the Serial Monitor in the Arduino IDE? Maybe it's time to let the old ways die. I agree. If you're on a Mac, then you can try running the &lt;code&gt;screen&lt;/code&gt; command from your terminal, specifying both the name of your serial connection to your Arduino and the baud rate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;screen &amp;lt;name_of_serial_connection&amp;gt; 9600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case, the name of the connection was &lt;code&gt;/dev/cu.usbmodem14101&lt;/code&gt;, which you can find in the &lt;code&gt;Tools/Port&lt;/code&gt; menu of the Arduino IDE.&lt;/p&gt;
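
&lt;p&gt;You can also hunt for the port name straight from the terminal -- on a Mac, listing the &lt;code&gt;cu&lt;/code&gt; devices will show the candidates (the exact &lt;code&gt;usbmodem&lt;/code&gt; number will vary by machine and USB port):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ls /dev/cu.*
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;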

&lt;p&gt;Word of warning, however. If you close this terminal window, it won't close the &lt;code&gt;screen&lt;/code&gt; session, and you'll be unable to upload new programs to your Arduino while the session holds the serial port. This is called a "detached screen" and it's annoying. You need to quit the screen session somehow, and you can use this command to do so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;screen &lt;span class="nt"&gt;-X&lt;/span&gt; &lt;span class="nt"&gt;-S&lt;/span&gt; &amp;lt;name_of_session&amp;gt; quit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Oh, to get the name of the detached session, you can type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;screen &lt;span class="nt"&gt;-ls&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This whole serial communications thing opens up some interesting ideas, since you can have two-way comms between your Arduino and something else. Forget Alexa. Not-okay, Google. Go away, Siri. Now you can build your own talking robotic best friend, instead. Hopefully this gets some gears turning for you, too.&lt;/p&gt;

&lt;h3&gt;Counting in binary with LEDs&lt;/h3&gt;

&lt;p&gt;In general, life-goal-wise, I've been trying to get better at thinking and counting in binary, so I decided to build a little binary counter for my next Arduino project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;unsigned&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;LED_BIT0&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;unsigned&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;LED_BIT1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;unsigned&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;LED_BIT2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;unsigned&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;LED_BIT3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;pinMode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED_BIT0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;OUTPUT&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;pinMode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED_BIT1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;OUTPUT&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;pinMode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED_BIT2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;OUTPUT&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;pinMode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED_BIT3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;OUTPUT&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="n"&gt;output_result&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;output_result&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;digitalWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED_BIT0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;B0001&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;digitalWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED_BIT1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;B0010&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;digitalWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED_BIT2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;B0100&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;digitalWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LED_BIT3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;B1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F6jxxawvo5qm6lwk2gv75.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F6jxxawvo5qm6lwk2gv75.gif" alt="Binary Counting"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm not sure why the red LED isn't as bright as the other three LEDs. I tried swapping it out with another LED to no avail. But, hey, other than that, this thing works!&lt;/p&gt;

&lt;p&gt;I also learned that breadboards are great. Being able to run all the cathode sides of the LEDs to the bottom negative row of the breadboard, and then only connecting that row once to the Arduino's ground port is pretty darn helpful. I have more to learn and appreciate here, for sure.&lt;/p&gt;

&lt;p&gt;Also, this is the first time that I've really leveraged the power of the bitwise-AND operator. I'm taking my &lt;code&gt;result&lt;/code&gt; and bitwise-ANDing it with a binary number that represents one binary digit (the 1's, 2's, 4's, and 8's places) for each of the LEDs. The bitwise-AND operation yields a nonzero (truthy) value if &lt;code&gt;result&lt;/code&gt; and our binary number both contain a &lt;code&gt;1&lt;/code&gt; in the given binary digit. For example, let's look at the number &lt;code&gt;3&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;B0001&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// true&lt;/span&gt;
&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;B0010&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// true&lt;/span&gt;
&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;B0100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// false&lt;/span&gt;
&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;B1000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final trick here is that the &lt;code&gt;digitalWrite&lt;/code&gt; function treats any nonzero value as &lt;code&gt;HIGH&lt;/code&gt; (turn on the LED) and zero as &lt;code&gt;LOW&lt;/code&gt; (turn off the LED). So, for the number &lt;code&gt;3&lt;/code&gt;, the LEDs for the &lt;code&gt;1&lt;/code&gt;'s digit and the &lt;code&gt;2&lt;/code&gt;'s digit should be lit, and the &lt;code&gt;4&lt;/code&gt;'s and &lt;code&gt;8&lt;/code&gt;'s should be off.&lt;/p&gt;

&lt;p&gt;That's pretty awesome and makes this code very concise. There's much more to explore here for me.&lt;/p&gt;

&lt;h3&gt;More tinkering&lt;/h3&gt;

&lt;p&gt;So, after a mere afternoon, I've learned a ton and had quite a bit of fun along the way.&lt;/p&gt;

&lt;p&gt;What's next, you ask? Well, resistors are still perplexing. I'm not sure yet how to determine what level of resistance is needed for a given situation. I've already fried an LED (a delightful puff of smoke wisps out during its last gasp of life), likely for this very reason. It's also really hard to read those colorful bands to try to determine a resistor's value. This seems like it could be a great little computer vision / deep learning app. Or perhaps I should just use my multimeter more regularly.&lt;/p&gt;
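
&lt;p&gt;That said, the usual back-of-the-envelope method is just Ohm's law: subtract the LED's forward voltage from the supply voltage, then divide by the current you want. Assuming a typical red LED (roughly a 2V drop at 20mA) on a 5V Arduino pin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;// R = (V_supply - V_forward) / I
// R = (5.0V - 2.0V) / 0.020A = 150 ohms
// Any nearby standard value works; a larger resistor
// just means a slightly dimmer LED.
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;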

&lt;p&gt;I'm also thinking more about the difference between analog and digital signals. Digital is binary (either on or off), whereas analog is continuous. Most of what we observe in life is an analog signal. So when we choose to digitize them, we need to choose specific moments to "sample" the values of the continuous signal. The Schmidt book explained that an audio CD takes 44,100 samples per second (or 44.1 kHz). Maybe this is why vinyl is back.&lt;/p&gt;
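
&lt;p&gt;On the Arduino, taking those samples is just a matter of calling &lt;code&gt;analogRead&lt;/code&gt; in a loop. Here's a minimal sketch (my choice of pin A0 and the sample rate are arbitrary) that reads an analog pin about 100 times per second and prints each value over serial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;const int SENSOR = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  // analogRead returns 0-1023 (the Uno's ADC is 10-bit)
  int value = analogRead(SENSOR);
  Serial.println(value);
  delay(10);  // roughly 100 samples per second
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;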

&lt;p&gt;I'm thinking that my obvious next project here with Arduino is to make a binary alarm clock. There are tons of neat examples of this project across the web, and I think it could be a good way to learn / improve my soldering skills, as well as my quick mental binary counting, especially while groggy in the middle of the night.&lt;/p&gt;

</description>
      <category>c</category>
      <category>processing</category>
      <category>arduino</category>
      <category>diy</category>
    </item>
    <item>
      <title>Generating your own graduation speeches with Markov Chains</title>
      <dc:creator>Charlie Harrington</dc:creator>
      <pubDate>Tue, 25 Sep 2018 18:37:59 +0000</pubDate>
      <link>https://forem.com/whatrocks/generating-your-own-graduation-speeches-with-markov-chains-55o7</link>
      <guid>https://forem.com/whatrocks/generating-your-own-graduation-speeches-with-markov-chains-55o7</guid>
      <description>&lt;p&gt;Imagine this. You're the founder slash CEO slash product-visionary of a three month-old electric scooter startup. In between transforming the global transportation market and dabbling in your first angel investments (you know, just to get your feet wet), you've been asked by your beloved alma mater to deliver this year's commencement address to the graduating class.&lt;/p&gt;

&lt;p&gt;That's right, you unfortunately didn't drop out of college, so you've already lost that convenient narrative thread. You agree to give the speech and assure yourself you'll be fine. You've got no problem waxing on about local city scooter politics or the unit-economics of the secondary charging market. That should give you a good five minutes or so. Maybe you can even ride a scooter on stage towards the podium for a laugh? Write that down, you mutter to no one in particular.&lt;/p&gt;

&lt;p&gt;But as the graduation date approaches, the pressure's rising. Your plucky Chief-of-Staff slash travel agent slash only real friend anymore asks you how the speech draft's going, and you just smile and nod, "Great, Sam. It's about connecting the dots. In reverse."&lt;/p&gt;

&lt;p&gt;"You mean like that Steve Jobs one? Stanford, 2005. I've watched the YouTube video like a million times."&lt;/p&gt;

&lt;p&gt;"Oh, no, not like that. It's more about the benefits of failure and, you know, the importance of imagination."&lt;/p&gt;

&lt;p&gt;"J.K. Rowling, Harvard, 2008. C'mon, you're not going to make that world's largest Gryffindor reunion joke, too. Are you? Say no, please. Say no right now."&lt;/p&gt;

&lt;p&gt;"No, course not. Anyway, isn't it about time for my daily transcendental gratitude journaling? You almost made me miss it again. Give me one of your pens. Not that one, the other one."&lt;/p&gt;

&lt;p&gt;You sit down at the communal lunch table, a glass of ginger green-tea kombucha and a terrifying blank piece of paper in front of you.&lt;/p&gt;

&lt;p&gt;Right as – you swear – you were about to start writing the greatest commencement address of all time, one of those newfangled data scientists walks over and sits down, directly across from you. Can't they see that you're in-the-zone? And why are you paying them so much if they're just sitting around all the time?&lt;/p&gt;

&lt;p&gt;"I think I can help you."&lt;/p&gt;

&lt;p&gt;You glance up at them, barely.&lt;/p&gt;

&lt;p&gt;"I overheard your conversation with Sam. Don't give me that look, it's an open office layout – probably your idea, too. Anyway, I think I can help you out. You need a speech. A good one. And quickly."&lt;/p&gt;

&lt;p&gt;You drop your (Sam's) pen on the table, cross your arms, and lean backwards.&lt;/p&gt;

&lt;p&gt;"I'm listening."&lt;/p&gt;

&lt;p&gt;"Have you heard of Markov chains?"&lt;/p&gt;

&lt;h2&gt;
  
  
  A little bit of Markov in your life
&lt;/h2&gt;

&lt;p&gt;In this post, I'll show you how you can easily generate your own overwrought and highly-sentimental commencement address clichés using a simple &lt;a href="https://github.com/jsvine/markovify"&gt;Markov chain library&lt;/a&gt; in Python and an open dataset of commencement speeches on FloydHub.&lt;/p&gt;

&lt;p&gt;In just a few minutes, you'll be spouting gems like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The secret to success in start-ups, or any other collaboration, is to stick an old head on a motorcycle weaving his way down the hall, he passed a door – it was empty.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Or perhaps this wisdom nugget:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Think of your ancestors: Among them, for everybody here, among your ancestors and I even thought about running away from the old ones.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Try it now
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.floydhub.com/run?template=https://github.com/whatrocks/markov-commencement-speech"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GJUv3LYq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/30sx4l1yo3n0eovcq9nr.png" alt="Button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click this button to open a &lt;a href="https://docs.floydhub.com/guides/workspace/"&gt;Workspace&lt;/a&gt; on FloydHub where you can train a Markov chain model to generate "commencement speech style" sentences in a live JupyterLab environment. The &lt;a href="https://floydhub.com/whatrocks/datasets/commencement"&gt;commencement address dataset of ~300 famous speeches&lt;/a&gt; will be automatically attached and available in the Workspace. Just follow along with the &lt;code&gt;speech_maker&lt;/code&gt; Jupyter notebook. It's that easy, folks.&lt;/p&gt;

&lt;h3&gt;
  
  
  But what's a Markov chain?
&lt;/h3&gt;

&lt;p&gt;A Markov chain, named after &lt;a href="https://en.wikipedia.org/wiki/Andrey_Markov"&gt;this bearded devil&lt;/a&gt;, is a model describing a sequence of states (a state could be some situation, a set of values, or, in our case, a word in a sentence) along with the probability of moving from one state to any other state.&lt;/p&gt;
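
&lt;p&gt;To make that concrete, here's a tiny hand-rolled sketch of a two-state chain and a random walk over it. This is my own toy example, not how markovify works internally:&lt;/p&gt;

```python
import random

# Toy two-state "weather" chain: from each state, the probability of
# moving to every state on the next step. Each row sums to 1.
TRANSITIONS = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def walk(start, steps, rng=None):
    """Follow the chain for `steps` transitions and return the states visited."""
    rng = rng or random.Random()
    state = start
    path = [state]
    for _ in range(steps):
        nxt = TRANSITIONS[state]
        # Pick the next state according to the current state's probabilities
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        path.append(state)
    return path

print(walk("sunny", 5, rng=random.Random(42)))
```

&lt;p&gt;In the text-generation case, the states are words (or short runs of words), and the transition probabilities come from counting which words follow which in the corpus.&lt;/p&gt;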

&lt;p&gt;The best explainer I've found for Markov chains is the &lt;a href="http://setosa.io/blog/2014/07/26/markov-chains/"&gt;visual explainer produced by Victor Powell and Lewis Lehe&lt;/a&gt;. Stop what you're doing right now and read their post.&lt;/p&gt;

&lt;p&gt;Good, you're back. &lt;/p&gt;

&lt;p&gt;As you just learned, Markov chains are popular modeling tools in a variety of industries where people need to model impossibly-large real world systems – finance, environmental science, computer science. Powell and Lehe point out that Google's big bad PageRank algorithm is a form of Markov chain. So they're, like, a big deal.&lt;/p&gt;

&lt;p&gt;But Markov chains have also found a nice sweet spot in the text generation realm of natural language processing (NLP). Or, in other words, they're perfect for creating Captain Picard Twitter bots.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--caJSPEdn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/4a3q7g8chscl0m9rhtt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--caJSPEdn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/4a3q7g8chscl0m9rhtt2.png" alt="Picard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating the best speech ever
&lt;/h2&gt;

&lt;p&gt;In our case, we want to use a Markov chain to generate random sentences based on a corpus of famous commencement speeches. &lt;/p&gt;

&lt;p&gt;Luckily, there's a simple Python library for that first part. It's called &lt;a href="https://github.com/jsvine/markovify"&gt;markovify&lt;/a&gt;. I'll show you how to use it in just a second.&lt;/p&gt;

&lt;p&gt;First, we need to get some speech transcripts. Ah, data — the cause of and solution to all of deep learning's problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://apps.npr.org/commencement/"&gt;NPR's The Best Commencement Speeches, Ever&lt;/a&gt;, a site I frequent often, was a great starting point. From there, some &lt;a href="https://www.crummy.com/software/BeautifulSoup/"&gt;BeautifulSoup scraping&lt;/a&gt; – along with a shocking amount of manual spam removal from the transcripts found on popular commencement speech sites (ugh, don't ask) – led to the assemblage of a dataset containing ~300 plaintext commencement speech transcripts. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.floydhub.com/whatrocks/datasets/commencement"&gt;commencement speech dataset&lt;/a&gt; is publicly available on FloydHub so that you can use it your own projects. I've also put together a &lt;a href="https://whatrocks.github.io/commencement-db/"&gt;simple Gatsby.js static site if you'd like to casually read the speeches&lt;/a&gt; at your leisure.&lt;/p&gt;

&lt;p&gt;Okay, back to business. As mentioned, the markovify library is insanely easy to use. Let me remind you again to just click the "Run on FloydHub" button above to follow along in the &lt;code&gt;speech_maker&lt;/code&gt; notebook. Actually, here it is again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.floydhub.com/run?template=https://github.com/whatrocks/markov-commencement-speech"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GJUv3LYq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/30sx4l1yo3n0eovcq9nr.png" alt="Button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To generate our sentences, we're going to iterate through all the speeches in our dataset (available at the &lt;code&gt;/floyd/input/speeches&lt;/code&gt; path) and create a Markov model for each speech.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;markovify&lt;/span&gt;

&lt;span class="n"&gt;SPEECH_PATH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'/floyd/input/speeches/'&lt;/span&gt;

&lt;span class="n"&gt;speech_dict&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;speech_file&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;listdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SPEECH_PATH&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nb"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;SPEECH_PATH&lt;/span&gt;&lt;span class="si"&gt;}{&lt;/span&gt;&lt;span class="n"&gt;speech_file&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;speech&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;contents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;speech&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c1"&gt;# Create a Markov model for each speech in our dataset
&lt;/span&gt;        &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;markovify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;speech_dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;speech_file&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, we'll use markovify's &lt;code&gt;combine&lt;/code&gt; method to merge them into one large Markov chain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;speech_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;values&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="c1"&gt;# Combine the Markov models
&lt;/span&gt;&lt;span class="n"&gt;model_combination&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;markovify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;combine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we'll generate our random sentence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_combination&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;make_sentence&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Certainly I could do the most elegant and extraordinary products in federally funded highway projects.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You may have noticed that I've organized the individual speech models into a dictionary with each speech's filename as the key. Why do that? Well, if you keep following along with the &lt;code&gt;speech_maker&lt;/code&gt; notebook, you'll see that this makes it easier to filter the speeches as you keep experimenting.&lt;/p&gt;

&lt;p&gt;For example, maybe you only want to generate sentences from speeches delivered at Stanford University? Or at MIT? Or only from speeches delivered in the 1980s?&lt;/p&gt;

&lt;p&gt;The workspace contains a CSV with metadata for each speech (speaker &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;school&lt;/code&gt;, and &lt;code&gt;year&lt;/code&gt;). This is your chance to make a dent in the commencement speech universe. Oh, here's an idea: why don't you brush off your Simpsons TV-script Recurrent Neural Network (RNN) code and give it a whirl on this dataset?&lt;/p&gt;
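
&lt;p&gt;Filtering by metadata might look something like this sketch. I'm assuming the CSV has &lt;code&gt;filename&lt;/code&gt;, &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;school&lt;/code&gt;, and &lt;code&gt;year&lt;/code&gt; columns; the exact column names in the notebook may differ:&lt;/p&gt;

```python
import csv
import io

def models_for_school(metadata_csv, speech_dict, school):
    """Return the per-speech models whose metadata row matches `school`."""
    rows = csv.DictReader(io.StringIO(metadata_csv))
    # The metadata's filename column lines up with speech_dict's keys
    names = [row["filename"] for row in rows if row["school"] == school]
    return [speech_dict[name] for name in names if name in speech_dict]

# With real data you would then combine just the filtered models:
#   model = markovify.combine(models_for_school(csv_text, speech_dict, "Stanford University"))
#   print(model.make_sentence())
```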

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h0VNyweo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/a6e2fhj4re3decsdzmlv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h0VNyweo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/a6e2fhj4re3decsdzmlv.jpg" alt="simpsons"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add 'Run on FloydHub' to your own projects
&lt;/h2&gt;

&lt;p&gt;It can be a real pain in the you-know-what organizing your own machine learning experiments, let alone trying to share your work with others. Many folks wiser than I (or is it me?) have acknowledged the reproducibility crisis in data science.&lt;/p&gt;

&lt;p&gt;The "Run on FloydHub" button is here to help. With this button, we're making it just a little bit easier to reproduce and share your data science projects. &lt;/p&gt;

&lt;p&gt;Now, you can simply add this button to your repos on GitHub and anyone will be able to spin up a Workspace on FloydHub along with your code, datasets, deep learning framework, and any other environment configs.&lt;/p&gt;

&lt;p&gt;Here's what you need to do:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a floyd.yml config file in your repo&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gpu&lt;/span&gt;
&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pytorch-1.4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add this snippet to your README&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"https://floydhub.com/run"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;img&lt;/span&gt; &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://static.floydhub.com/button/button.svg"&lt;/span&gt; &lt;span class="na"&gt;alt=&lt;/span&gt;&lt;span class="s"&gt;"Run"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;You're done!&lt;/strong&gt;&lt;br&gt;
Seriously. Try it out right now with our &lt;a href="https://github.com/floydhub/image-classification-template"&gt;Object Classification&lt;/a&gt; repo. Or even this &lt;a href="https://github.com/whatrocks/markov-commencement-speech"&gt;post's repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The real magic is when you also include the required datasets in your config file.&lt;/em&gt; For example, the &lt;code&gt;floyd.yml&lt;/code&gt; config file for the Sentiment Analysis project looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tensorflow-1.7&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;floydhub/datasets/imdb-preprocessed/1&lt;/span&gt;
    &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;imdb&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will spin up your code in a CPU-powered Workspace using TensorFlow 1.7 and attach the &lt;a href="https://www.floydhub.com/floydhub/datasets/imdb-preprocessed/1"&gt;IMDB Dataset&lt;/a&gt;. This makes it insanely easy for anyone to reproduce your experiments, right from your GitHub README.&lt;/p&gt;

&lt;p&gt;You can &lt;a href="https://docs.floydhub.com/guides/run_on_floydhub_button/"&gt;read more&lt;/a&gt; about the Run on FloydHub button in our docs. We're looking forward to seeing what you share with the world! Good luck, graduates!&lt;/p&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Teaching my robot with TensorFlow</title>
      <dc:creator>Charlie Harrington</dc:creator>
      <pubDate>Fri, 21 Sep 2018 18:06:14 +0000</pubDate>
      <link>https://forem.com/whatrocks/teaching-my-robot-with-tensorflow-3i1a</link>
      <guid>https://forem.com/whatrocks/teaching-my-robot-with-tensorflow-3i1a</guid>
      <description>&lt;p&gt;If you're like me, then you'd do pretty much anything to have your own R2-D2 or BB-8 robotic buddy. Just imagine the adorable adventures you'd have together!&lt;/p&gt;

&lt;p&gt;I'm delighted to report that the &lt;a href="https://www.anki.com/en-us/cozmo"&gt;Anki Cozmo&lt;/a&gt; is the droid you've been looking for. &lt;/p&gt;

&lt;p&gt;Cozmo is a big personality packed into an itty-bitty living space. You don't need to know how to code to play with Cozmo - &lt;em&gt;but if you do&lt;/em&gt; - then Cozmo has even more phenomenal cosmic power.&lt;/p&gt;

&lt;p&gt;In this post, I'm going to show you how you can teach your own Cozmo to recognize everyday objects using transfer learning with TensorFlow on FloydHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  The setup
&lt;/h2&gt;

&lt;p&gt;Install the &lt;a href="http://cozmosdk.anki.com/docs/"&gt;Cozmo Python SDK&lt;/a&gt;, create a new virtualenv, and clone the &lt;a href="https://www.github.com/whatrocks/cozmo-tensorflow"&gt;cozmo-tensorflow&lt;/a&gt; project to your local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;virtualenv ~/.env/cozmo &lt;span class="nt"&gt;-p&lt;/span&gt; python3
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.env/cozmo/bin/activate
git clone https://www.github.com/whatrocks/cozmo-tensorflow
&lt;span class="nb"&gt;cd &lt;/span&gt;cozmo-tensorflow
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;Next up - login to the FloydHub CLI (sign up for a &lt;a href="https://www.floydhub.com/plans"&gt;free account here&lt;/a&gt;). If you need to install the FloydHub CLI, just &lt;a href="https://docs.floydhub.com/guides/basics/install/"&gt;check out this guide&lt;/a&gt; in our &lt;a href="https://docs.floydhub.com/"&gt;documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;floyd login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  1. Use Cozmo to generate training data
&lt;/h2&gt;

&lt;p&gt;Getting enough training data for a deep learning project can be a pain. But thankfully we have a robot who loves to run around and take photos with his camera, so let's just ask Cozmo to take pictures of things we want our robot to learn. &lt;/p&gt;

&lt;p&gt;Let's start with a can of delicious overpriced seltzer. Place Cozmo directly in front of a can of seltzer. Make sure that your robot has enough space to rotate around the can while it is taking pictures. Be sure to enter the name of the object that Cozmo is photographing when you run the &lt;code&gt;cozmo-paparazzi&lt;/code&gt; script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 cozmo-paparazzi.py seltzer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MciKfK5w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hvn6reuu4pxc8xyx3rzn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MciKfK5w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hvn6reuu4pxc8xyx3rzn.gif" alt="CozmoPaparazzi"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repeat this step for as many objects (labels) as you want Cozmo to learn! You should now see all your image labels as subdirectories within the &lt;code&gt;/data&lt;/code&gt; folder of your local directory.&lt;/p&gt;
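
&lt;p&gt;After a couple of rounds, the local &lt;code&gt;/data&lt;/code&gt; folder might look roughly like this (the label and file names here are illustrative, not the script's exact output):&lt;/p&gt;

```text
data/
  seltzer/
    00001.jpeg
    00002.jpeg
    ...
  toothpaste/
    00001.jpeg
    ...
```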

&lt;h3&gt;
  
  
  Uploading dataset to FloydHub
&lt;/h3&gt;

&lt;p&gt;Next up - let's upload our images to &lt;a href="https://www.floydhub.com/whatrocks/datasets/cozmo-images"&gt;FloydHub&lt;/a&gt; as a &lt;a href="https://docs.floydhub.com/guides/create_and_upload_dataset/"&gt;FloydHub Dataset&lt;/a&gt;. This will allow us to mount these images during our upcoming  model training and model serving jobs on FloydHub. Datasets on FloydHub are an easy way for your training jobs to reference a version-controlled dataset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;data
floyd data init cozmo-images
floyd data upload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our case, I've named this image dataset &lt;code&gt;cozmo-images&lt;/code&gt;. I've made it a &lt;a href="https://www.floydhub.com/whatrocks/datasets/cozmo-images"&gt;public dataset&lt;/a&gt;, so feel free to use it in your own Cozmo projects!&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Training our model on FloydHub
&lt;/h2&gt;

&lt;p&gt;And now the fun begins. First, make sure you are in the project's root directory, then initialize a FloydHub project so that we can train our model on one of FloydHub's fully-configured TensorFlow cloud GPU machines.&lt;/p&gt;

&lt;p&gt;Side note - if that last sentence sounded like a handful, then just know that FloydHub takes care of configuring and optimizing everything on your cloud machine so that it's ready for your GPU-powered deep learning experiments. You can specify the exact deep learning framework you'd like to use - whether that's TensorFlow 1.4 or PyTorch 0.3 or &lt;a href="https://docs.floydhub.com/guides/environments/"&gt;more&lt;/a&gt; - and FloydHub will make sure your machine has everything you need to start training immediately.&lt;/p&gt;

&lt;p&gt;Okay, back to business, let's initialize our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;floyd init cozmo-tensorflow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we're ready to kick off a deep learning training job on FloydHub. &lt;/p&gt;

&lt;p&gt;A few things to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We'll be doing some simple transfer learning with the &lt;a href="https://github.com/tensorflow/models/tree/master/research/inception"&gt;Inception v3 model&lt;/a&gt; provided by Google. Instead of training a model from scratch, we can start with this pre-trained model, and then just swap out its final layer so that we can teach it to recognize the objects we want Cozmo to learn. Transfer learning is a very useful technique, and you can read more about it on &lt;a href="https://www.tensorflow.org/tutorials/image_retraining"&gt;TensorFlow's website&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;We're going to be mounting the images dataset that Cozmo created with the &lt;code&gt;--data&lt;/code&gt; flag at the &lt;code&gt;/data&lt;/code&gt; directory on our FloydHub machine.&lt;/li&gt;
&lt;li&gt;I'm enabling Tensorboard for this job with the &lt;code&gt;--tensorboard&lt;/code&gt; flag so that I can visually monitor my job's training process&lt;/li&gt;
&lt;li&gt;I've edited the &lt;code&gt;retrain.py&lt;/code&gt; script (&lt;a href="https://github.com/googlecodelabs/tensorflow-for-poets-2"&gt;initially provided by the TensorFlow team&lt;/a&gt;) to write its output to the &lt;code&gt;/output&lt;/code&gt; directory. This is super important when you're using FloydHub, because FloydHub jobs always store their outputs in the &lt;code&gt;/output&lt;/code&gt; directory. In our case, we'll be saving our retrained ImageNet model and its associated training labels to the job's &lt;code&gt;/output&lt;/code&gt; folder.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;floyd run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--gpu&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data&lt;/span&gt; whatrocks/datasets/cozmo-images:data &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--tensorboard&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s1"&gt;'python retrain.py --image_dir /data'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;That's it! There's no need to configure anything on AWS or install TensorFlow or deal with GPU drivers or anything like that. (If you're paying close attention, I didn't include the &lt;code&gt;--env&lt;/code&gt; flag in my job command - that's because FloydHub's default environment includes TensorFlow 1.1.0 and Keras 2.0.6, and that's all I need for my training 😎). &lt;/p&gt;

&lt;p&gt;Once your job is complete, you'll be able to see your newly retrained model in &lt;a href="https://www.floydhub.com/whatrocks/projects/cozmo-tensorflow/8/output"&gt;your job's output directory&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I recommend converting your job's output into a standalone FloydHub Dataset to make it easier to mount the retrained model in future jobs (which we'll do in the next step). You can do this by clicking the 'Create Dataset' button on the job's output page. Check out the Dataset called &lt;a href="https://www.floydhub.com/whatrocks/datasets/cozmo-imagenet"&gt;cozmo-imagenet&lt;/a&gt; to see my retrained model and labels.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Connecting Cozmo to our retrained model
&lt;/h2&gt;

&lt;p&gt;We can test our newly retrained model by running another job on FloydHub that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mounts our &lt;a href="https://www.floydhub.com/whatrocks/datasets/cozmo-imagenet"&gt;retrained model and labels&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sets up a public REST endpoint for model serving&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.floydhub.com/guides/run_a_job/#-mode-serve"&gt;Model-serving&lt;/a&gt; is an experimental feature on FloydHub - we'd love to hear your &lt;a href="https://www.twitter.com/floydhub_"&gt;feedback on Twitter&lt;/a&gt;! In order for this feature to work, you'll need to include a simple Flask app called &lt;code&gt;app.py&lt;/code&gt; in your project's code.&lt;/p&gt;

&lt;p&gt;For our current project, I've created a simple Flask app that will receive an image from Cozmo in a POST request, evaluate it using the model we trained in our last step, and then respond with the model's results. Cozmo can then use the results to determine whether or not it's looking at a specific object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;floyd run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data&lt;/span&gt; whatrocks/datasets/cozmo-imagenet:model &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--mode&lt;/span&gt; serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;Finally, let's run our &lt;code&gt;cozmo-detective.py&lt;/code&gt; script to ask Cozmo to move around the office to find a specific object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 cozmo-detective.py toothpaste
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;Every time Cozmo moves, the robot will send a black-and-white image of whatever it's seeing to the model endpoint on FloydHub - and FloydHub will run the model against this image, returning the following payload with "Cozmo's guesses" and how long it took to compute them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;answer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; 
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;plant&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.022327899932861328&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;seltzer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.9057837128639221&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;toothpaste&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.07188836485147476&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; 
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;seconds&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.947&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;If Cozmo is at least 80% confident that it is looking at the object in question, then the robot will run towards it victoriously! &lt;/p&gt;
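
&lt;p&gt;That decision boils down to a few lines of Python. Here's a sketch - the threshold and payload shape follow the example above, but the helper names are mine, not necessarily what &lt;code&gt;cozmo-detective.py&lt;/code&gt; actually uses:&lt;/p&gt;

```python
CONFIDENCE_THRESHOLD = 0.80

def best_guess(payload):
    """Return (label, score) for the model's most confident guess."""
    answers = payload["answer"]
    label = max(answers, key=answers.get)
    return label, answers[label]

def should_chase(payload, target):
    """True if the top guess is the target object at 80%+ confidence."""
    label, score = best_guess(payload)
    return label == target and score >= CONFIDENCE_THRESHOLD

payload = {
    "answer": {"plant": 0.02, "seltzer": 0.91, "toothpaste": 0.07},
    "seconds": 0.947,
}
print(should_chase(payload, "seltzer"))  # -> True
```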

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MmSQIk97--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vuxx3xomgncw95jthdte.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MmSQIk97--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vuxx3xomgncw95jthdte.gif" alt="finder"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once Cozmo's found all your missing objects, don't forget to shut down your serving job on FloydHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  A new hope
&lt;/h2&gt;



&lt;p&gt;&lt;em&gt;It's a magical world, Cozmo, ol' buddy... let's go exploring!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I'm eager to see what you and your Cozmo can find together, along with a little help from your friends at FloydHub. Share your discoveries with us on &lt;a href="https://twitter.com/floydhub_"&gt;Twitter&lt;/a&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;This project is an extension of &lt;a class="mentioned-user" href="https://dev.to/nheidloff"&gt;@nheidloff&lt;/a&gt;
's &lt;a href="https://github.com/nheidloff/visual-recognition-for-cozmo-with-tensorflow"&gt;Cozmo visual recognition project&lt;/a&gt; and the &lt;a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0"&gt;Google Code Labs TensorFlow for Poets project&lt;/a&gt;. I also wrote about this project on my &lt;a href="https://www.charlieharrington.com/teaching-my-robot-with-tensorflow"&gt;personal site&lt;/a&gt; - except with a lot more references to Short Circuit and The Legend of Zelda.&lt;/p&gt;

</description>
      <category>python</category>
      <category>tensorflow</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
    </item>
  </channel>
</rss>
