<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Surya Dantuluri</title>
    <description>The latest articles on Forem by Surya Dantuluri (@sdan).</description>
    <link>https://forem.com/sdan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F209722%2Fbee31563-9131-4ea0-98c0-f467ec5d4092.jpg</url>
      <title>Forem: Surya Dantuluri</title>
      <link>https://forem.com/sdan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sdan"/>
    <language>en</language>
    <item>
      <title>Building a scalable, highly available, and portable web server</title>
      <dc:creator>Surya Dantuluri</dc:creator>
      <pubDate>Mon, 07 Oct 2019 17:56:37 +0000</pubDate>
      <link>https://forem.com/sdan/building-a-scalable-highly-available-and-portable-web-server-5h1c</link>
      <guid>https://forem.com/sdan/building-a-scalable-highly-available-and-portable-web-server-5h1c</guid>
      <description>&lt;h2&gt;
  After over a year of development, my web server, SD2, is done.
&lt;/h2&gt;

&lt;p&gt;When I initially set out to build a website, I wanted it ranked #1 when you searched my name on Google. I remember seeing other students like Kevin Frans (MIT/OpenAI) and Gautam Mittal (Berkeley/Stripe) with highly ranked websites, and I took inspiration from that to do the same. I also wanted to automate a lot of mundane tasks to ensure that this web server is &lt;strong&gt;highly scalable&lt;/strong&gt; in the event I get a lot of traffic, &lt;strong&gt;highly available&lt;/strong&gt; so no user hits downtime while accessing the many parts of my server, and &lt;strong&gt;portable&lt;/strong&gt; in case I need to move to a different cloud service provider.&lt;/p&gt;

&lt;h2&gt;
  Switching between AWS, GCP, and DigitalOcean
&lt;/h2&gt;

&lt;p&gt;SD2 is the new web server I've been building for nearly a year now. I went from DigitalOcean to Google Cloud for a bit, then AWS for a longer bit, then finally back to GCP; it's been a long ride. DigitalOcean was too expensive, and my Google Cloud discounts expired (I later renewed them, which is why I went back).&lt;/p&gt;

&lt;p&gt;I initially started with DigitalOcean since they had relatively low prices and a hackathon I attended offered DigitalOcean prizes, so I decided to give it a try. After nearly a year, I decided DigitalOcean was a bit like training wheels and I needed more functionality, among other things, so I moved over to GCP.&lt;/p&gt;

&lt;p&gt;On GCP, I was able to configure a nice setup for my site, but the problem was reverse proxying my blog (which runs on Ghost) and other static sites. To avoid that hassle, I temporarily wove in and out of using plain GitHub Pages (although in hindsight, this was mostly my fault).&lt;/p&gt;

&lt;p&gt;After running some &lt;a href="https://blog.suryad.com/hyperparameter/"&gt;RL models&lt;/a&gt; and burning through hundreds of dollars an hour, I quickly ran out of credits and was all but forced to move over to AWS.&lt;/p&gt;

&lt;p&gt;After finding out about AWS's free tier, I decided to give EC2 a chance. Using it for a couple of days got me comfortable with it; although the launch wizards (as opposed to VPC setup on GCP) took some time to get used to, I was able to run basic NGINX instances off of it.&lt;/p&gt;

&lt;p&gt;However, AWS has a somewhat unusual CPU-credit system for burstable instances, documented here: &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html#cpu-credits"&gt;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html#cpu-credits&lt;/a&gt;. Essentially, after several days of running my entire stack, I was being throttled down to nearly 5% CPU utilization, slowing my sites to a standstill; it got so bad that all of them were unusable.&lt;/p&gt;

&lt;p&gt;This prompted me to get some more credits from GCP and continue there for the foreseeable future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: When developing this server, I used AWS.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  What I needed from a web server
&lt;/h2&gt;

&lt;p&gt;Before going further, I had to outline what I wanted to do:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;A NGINX server to reverse proxy websites I build&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;High availability of my Blog (running on Ghost)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Self hosted Analytics Engine&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Self hosted storage service (both uploading and downloading)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Self hosted email server (SMTP)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Self hosted URL shortener&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And all of this had to be built using Docker, to make it easily shippable to other servers in case the one I'm using goes down or something else happens (a lesson from moving from DO to GCP to AWS to GCP, and spinning servers down and up from instance to instance in between).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QqM2sekE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://diginomica.com/sites/default/files/styles/article_images_desktop_2x/public/images/2017-09/docker-container.jpg%3Fitok%3Dgh3M3vAa" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QqM2sekE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://diginomica.com/sites/default/files/styles/article_images_desktop_2x/public/images/2017-09/docker-container.jpg%3Fitok%3Dgh3M3vAa" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While developing this server, I was still on AWS. Here's what I did:&lt;/p&gt;

&lt;p&gt;I initially created an AWS AMI with Docker baked in (because I couldn't find one readily available) and started playing with my newfound Docker knowledge (from using it in the workplace).&lt;/p&gt;

&lt;p&gt;With this in place I started spinning up EC2 instances left and right, trying out Ghost images, setting up ECS, finding out what Fargate is, and trying to see how I can scale.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qa0rNGaU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.conceptdraw.com/How-To-Guide/picture/Design%2520Elements%2520-%2520AWS%2520-%2520Amazon%2520Web%2520Services%2520architecture%2520solution-2_0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qa0rNGaU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.conceptdraw.com/How-To-Guide/picture/Design%2520Elements%2520-%2520AWS%2520-%2520Amazon%2520Web%2520Services%2520architecture%2520solution-2_0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After several weeks I realized I didn't need ECS or Fargate or any of the "fancy" features AWS was offering, since I wasn't building a SaaS startup or anything, just a simple personal web server that fit my needs.&lt;/p&gt;

&lt;p&gt;So I continued playing around with Docker and EC2.&lt;/p&gt;

&lt;p&gt;But as I was setting up all these Docker containers, I was realizing there had to be some way of composing all of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PeZjiSJe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/906/1%2AQVFjsW8gyIXeCUJucmK4XA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PeZjiSJe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/906/1%2AQVFjsW8gyIXeCUJucmK4XA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's when I found Docker Compose. It made it &lt;strong&gt;a lot&lt;/strong&gt; easier to spin up new containers when I needed to. And scalability? It had that too, without even disrupting existing containers.&lt;/p&gt;

&lt;p&gt;Running a simple&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up --scale ghost=3 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L1bRS5LO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/scaleup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L1bRS5LO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/scaleup.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Seamlessly scales up.&lt;/p&gt;
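&lt;p&gt;For context, that scale command assumes the Compose file defines a ghost service with no fixed container name and no host port binding (otherwise the replicas would collide). A minimal sketch of such a service (the image tag and network name here are illustrative, not my exact config):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  ghost:
    image: ghost:alpine
    restart: always
    networks:
      - proxy   # shared network the reverse proxy attaches to
networks:
  proxy:
    external: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;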

&lt;p&gt;But how are we going to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Manage all these ports?&lt;/li&gt;
&lt;li&gt;Reverse proxy everything without much hassle?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Initially I heard that &lt;a href="https://github.com/jwilder/nginx-proxy"&gt;Nginx Proxy&lt;/a&gt; by Jason Wilder (Azure) was a great option.&lt;/p&gt;

&lt;p&gt;After doing a little digging, it definitely sounded like a nice option. But using it for a little while made me realize that wrangling its generated NGINX configuration files was a hassle, and it was hard to configure cleanly, especially with Docker Compose.&lt;/p&gt;

&lt;p&gt;That's when I found Traefik, a cloud native edge router.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aHLw_gIq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/containous/traefik/raw/master/docs/content/assets/img/traefik.logo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aHLw_gIq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/containous/traefik/raw/master/docs/content/assets/img/traefik.logo.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how they describe it from their README:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Traefik (pronounced traffic) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your existing infrastructure components (Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, ...) and configures itself automatically and dynamically. Pointing Traefik at your orchestrator should be the only configuration step you need.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So it was a batteries-included reverse proxy that fully supported Docker Compose, fitting my needs exactly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IRw_waML--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/containous/traefik/raw/master/docs/content/assets/img/traefik-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IRw_waML--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/containous/traefik/raw/master/docs/content/assets/img/traefik-architecture.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simply put: it takes whatever port a container publicly exposes and proxies it to whatever domain or subdomain is needed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was crucial to me, since managing either a bunch of services running on the machine itself or a bunch of Docker containers spun up from a Compose file was going to be a mess to handle.&lt;/p&gt;

&lt;p&gt;The better part: since I'm running some databases and other services with sensitive data, I could put them on an internal subnet so they aren't publicly facing, and Traefik helped with that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With Traefik and the configuration of my Docker Compose file, I was able to treat every Docker image like an app: easy to download, update, and uninstall without disturbing any other service&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I could install Netdata (which gives statistics on how your server is doing) within minutes, and even set up my own Overleaf/ShareLaTeX directly on my website, with just some small configuration changes in Traefik.&lt;/p&gt;
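&lt;p&gt;To make "small configuration changes" concrete: with Traefik 1.7's Docker provider, exposing a new service is just a matter of adding labels to it in the Compose file. A hypothetical Netdata entry (the hostname is a placeholder; 19999 is Netdata's default port) might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  netdata:
    image: netdata/netdata
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:stats.yourdomain.com"
      - "traefik.port=19999"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Traefik picks the container up from the Docker socket and starts routing it; no proxy restart needed.&lt;/p&gt;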

&lt;p&gt;Traditionally, I'd have to spend hours downloading, installing, and maintaining Redis, Mongo, MySQL, and other databases, on top of downloading, installing, and maintaining the services themselves. If anything went wrong, I'd have to completely start over. On top of that, the databases would be public-facing (publicly exposed MongoDB instances have leaked millions of usernames and passwords), and that's the last thing I want.&lt;/p&gt;

&lt;p&gt;On the other hand, with Traefik and Docker Compose, I'm easily able to run anything I want within minutes with Docker and route every service to its intended destination with Traefik.&lt;/p&gt;

&lt;p&gt;As I've said previously, given Docker's "portability" of shipping containers around, I could even set everything up on my old desktop and have it work seamlessly (using Cloudflare Argo).&lt;/p&gt;

&lt;p&gt;Traefik even has a GUI! Here's what &lt;a href="http://monitor.suryad.com"&gt;monitor.suryad.com&lt;/a&gt; looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zQZ2Yfay--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/monitor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zQZ2Yfay--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/monitor.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  &lt;strong&gt;So what am I running?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;With the power of Traefik and Docker Compose I'm running the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This Blog (subscribe if you haven't yet! &lt;a href="https://sdan.xyz/subscribe"&gt;sdan.xyz/subscribe&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://suryad.com/"&gt;suryad.com&lt;/a&gt;, &lt;a href="https://sd2.suryad.com/"&gt;sd2.suryad.com&lt;/a&gt; (static sites)&lt;/li&gt;
&lt;li&gt;ShareLaTeX (I run my own LaTeX server for fun)&lt;/li&gt;
&lt;li&gt;Mongo, Redis, Postgres, MySQL (although I'm probably going to drop MySQL for SQLite soon)&lt;/li&gt;
&lt;li&gt;URL Shortener (&lt;a href="https://sdan.xyz/"&gt;sdan.xyz&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;A commenting server (so you can comment on my posts; I don't like that Disqus makes you log in every time)&lt;/li&gt;
&lt;li&gt;Netdata, to check that everything is running, whether swap is being used, and whether the CPU is over-utilized&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So have I accomplished what I set out to do?&lt;/p&gt;

&lt;p&gt;Somewhat.&lt;/p&gt;

&lt;p&gt;I'm still working on an analytics engine. Both Fathom and Ackee (open-source analytics engines) gave me mixed results, but they pointed me toward self-hosting (I don't trust Google Analytics to report accurate numbers, since so many people run ad blockers).&lt;/p&gt;

&lt;p&gt;In regards to a "Google Drive" for myself, it's at &lt;a href="https://sdan.cc/"&gt;sdan.cc&lt;/a&gt; (where all the images you're seeing are hosted), which I serve from my Fastmail account (it lets me upload, download, and serve files with ease).&lt;/p&gt;

&lt;p&gt;In regards to email: I need a reliable email service, and although I trust myself, I trust Fastmail to do it for me with better features. I've come to love what they provide and have stuck with them so far.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At the end of the day I'm more than excited and happy with what I've built with the knowledge I've accrued from my internship at Renovo and from other various sources.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  So what can I as a reader use from your "plethora" of services?
&lt;/h3&gt;

&lt;p&gt;Not much, unfortunately. This whole setup was more of a feat to show how powerful, simple, and easy self-hosting can be. Although one day I'd like people to use my URL shortener or LaTeX server, at this point my low-spec GCP instance probably isn't up to serving multiple people.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;However,&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can build everything listed here on your own.&lt;/p&gt;

&lt;p&gt;Simply install &lt;a href="https://github.com/dantuluri/setup"&gt;https://github.com/dantuluri/setup&lt;/a&gt; (which coats your server with some butter so you can start cooking up the main part of the operation).&lt;/p&gt;

&lt;p&gt;Then download &lt;a href="https://github.com/dantuluri/sd2"&gt;https://github.com/dantuluri/sd2&lt;/a&gt;, run docker-compose on that file, and start Traefik with this line:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker run -d -v /var/run/docker.sock:/var/run/docker.sock -v $PWD/traefik.toml:/traefik.toml -p 80:80 -l traefik.frontend.rule=Host:subdomain.yourdomain.com -l traefik.port=8080 --network proxy --name traefik traefik:1.7.12-alpine --docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  Conclusion
&lt;/h1&gt;

&lt;p&gt;So suppose you have some free time and hop onto any one of my websites/services. How would you get there?&lt;/p&gt;

&lt;p&gt;Here's a somewhat simple and packed diagram explaining that:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---JHCLk0G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/sd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---JHCLk0G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/sd2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's also some fun diagrams generated by various scripts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gUdykOSw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/dockerdep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gUdykOSw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/dockerdep.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rbuE7286--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/dockertree.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rbuE7286--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sdan.cc/assets/images/dockertree.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>traefik</category>
      <category>nginx</category>
      <category>cloudflare</category>
    </item>
    <item>
      <title>DAgger Explained</title>
      <dc:creator>Surya Dantuluri</dc:creator>
      <pubDate>Mon, 01 Apr 2019 07:00:00 +0000</pubDate>
      <link>https://forem.com/sdan/dagger-explained-2np9</link>
      <guid>https://forem.com/sdan/dagger-explained-2np9</guid>
      <description>

&lt;h1&gt;
  Note: LaTeX does &lt;em&gt;not&lt;/em&gt; render properly on dev.to. You can find the LaTeX-rich version of this post here: &lt;a href="https://blog.suryad.com/dagger/"&gt;https://blog.suryad.com/dagger/&lt;/a&gt;
&lt;/h1&gt;

&lt;h3&gt;
  Note: All material in this article is adapted from Sergey Levine's CS294-112 2017/2018 class
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EzXV0t-j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/dagger-face-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EzXV0t-j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/dagger-face-1.png" alt="DAgger Explained" width="576" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dataset Aggregation, more commonly referred to as DAgger, is a relatively simple iterative algorithm that trains a deterministic policy using the distribution of states from both the original and the generated datasets. [1]&lt;/p&gt;

&lt;h2&gt;
  Here's a simple example:
&lt;/h2&gt;

&lt;p&gt;Let's say you're teaching someone to drive. In this instance, you are a human (&lt;a href="https://www.youtube.com/watch?v=pYslwSw8IMo"&gt;hopefully&lt;/a&gt;) and your friend's name is Dave. Despite being a fellow human, Dave's inability to drive makes him like a policy (\pi_{\theta}(a_t \vert o_t)), where (\pi_{\theta}) is the parameterized policy and (a_t) is the action sampled given the observation (o_t).&lt;/p&gt;

&lt;p&gt;But Dave is smart: beforehand, he trained for this exercise by watching some YouTube videos, which we can represent as (p_{data}(o_t)). Like the rest of us, Dave isn't perfect, and he makes mistakes. In this scenario, however, Dave's mistakes can quickly add up and result in him (and consequently you) ending up in a ditch or crashing into someone else. We can represent Dave's distribution over observations as (p_{\pi_{\theta}}(o_t)). As Dave drives, his mistakes build up, eventually leading him to diverge from (p_{data}(o_t)), the data Dave trained on (those YouTube videos).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---cshuO-u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://data.suryad.com/assets/blog/dagger_schulman.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---cshuO-u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://data.suryad.com/assets/blog/dagger_schulman.png" alt="DAgger Explained" width="880" height="478"&gt;&lt;/a&gt;From John Schulman at Berkeley Deep RL 2015. Link: &lt;a href="http://rll.berkeley.edu/deeprlcourse-fa15/docs/2015.10.5.dagger.pdf"&gt;http://rll.berkeley.edu/deeprlcourse-fa15/docs/2015.10.5.dagger.pdf&lt;/a&gt; [3]&lt;/p&gt;

&lt;p&gt;Think of Dave as the red trajectory in the picture. You know the blue trajectory is correct, and over time you try to get Dave to drive like the blue policy.&lt;/p&gt;

&lt;p&gt;So how can we help poor Dave? After watching those YouTube videos to get some initial sense of how to drive, Dave records himself driving. After each episode of driving, we take this data and add in what actions Dave should have taken, producing a new dataset (\mathcal{D_{\pi}}) which consists of not only the observations ({o_1,\ldots,o_M}) but also the actions corresponding to them: ({o_1,a_1,\ldots,o_M,a_M})&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v7U5rhAH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://data.suryad.com/assets/blog/dagger_trajectory.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v7U5rhAH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://data.suryad.com/assets/blog/dagger_trajectory.png" alt="DAgger Explained" width="589" height="317"&gt;&lt;/a&gt;Dave's trajectory is red while you know that the black trajectory is optimal. Dave creates a dataset from the red trajectory and you are tasked with filling in the actions for each observation Dave did. From CS294-112 8/28/17 by Sergey Levine. [4]&lt;/p&gt;

&lt;p&gt;Now we aggregate the old dataset, (\mathcal{D}) with this "new" dataset, (\mathcal{D_{\pi}}) like so: (\mathcal{D} \Leftarrow \mathcal{D} \cup\mathcal{D_{\pi}})&lt;/p&gt;

&lt;p&gt;Of course, you can aggregate the data however you want, but this is a simple way of doing it.&lt;/p&gt;




&lt;p&gt;In that example, we essentially ran DAgger on Dave! Over time, Dave's distributional mismatch will disappear, meaning Dave will become a driving legend.&lt;/p&gt;

&lt;p&gt;Let's translate the example above to an algorithm:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1iFBo_jU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://data.suryad.com/assets/blog/dagger_algorithm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1iFBo_jU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://data.suryad.com/assets/blog/dagger_algorithm.png" alt="DAgger Explained" width="841" height="375"&gt;&lt;/a&gt;Rendered by Surya Dantuluri, adapted by Ross, Gordon, Bagnell, Levine [1], [4]&lt;/p&gt;

&lt;p&gt;Let's go through this algorithm step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initialize the dataset (\mathcal{D}) with the initial expert dataset.
&lt;/li&gt;
&lt;li&gt;Initialize some policy (\pi_{\theta}(a_t \vert o_t)).
&lt;/li&gt;
&lt;li&gt;Loop for N steps, where N is however many times you want to iterate the algorithm (the more iterations, the closer (p_{\pi_{\theta}}(o_t)) will be to (p_{data}(o_t))).
&lt;/li&gt;
&lt;li&gt;Inside the for-loop, sample a trajectory from the policy (\pi_{\theta}(a_t \vert o_t)).
&lt;/li&gt;
&lt;li&gt;Collect the distribution of observations (p_{\pi_{\theta}}(o_t)) that the policy visits.
&lt;/li&gt;
&lt;li&gt;Once we have those observations, add in what actions the policy (\pi_{\theta}(a_t \vert o_t)) should have taken, using the expert.
&lt;/li&gt;
&lt;li&gt;Aggregate this newly labeled dataset with the existing dataset.
&lt;/li&gt;
&lt;li&gt;Train the classifier (\pi_{\theta}(a_t \vert o_t)) on the big dataset (\mathcal{D}).
&lt;/li&gt;
&lt;li&gt;Repeat the loop as long as you want; (\pi_{\theta}(a_t \vert o_t)) gets better over time, and asymptotically (p_{\pi_{\theta}}(o_t)) will match (p_{data}(o_t)).&lt;/li&gt;
&lt;/ol&gt;
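&lt;p&gt;The loop above can be sketched end to end in a few lines of Python. This is my own toy illustration, not code from the paper: a one-dimensional "driving" task where the expert always steers the state halfway back toward zero, the learner is a linear policy fit by least squares, and each iteration aggregates expert labels on the states the learner itself visited:&lt;/p&gt;

```python
import numpy as np

# Toy 1-D task: state s, dynamics s_next = s + a.
# The "expert driver" always steers halfway back toward 0.
def expert(obs):
    return -0.5 * obs

def rollout(w, s0=1.0, steps=20):
    """Run the learner's linear policy a = w*s and record visited states."""
    s, visited = s0, []
    for _ in range(steps):
        visited.append(s)
        s = s + w * s
    return np.array(visited)

# Steps 1-2: initial dataset D and initial policy.
D_obs = np.array([1.0])
D_act = expert(D_obs)
w = 0.0  # untrained policy: does nothing

# Steps 3-8: the DAgger loop.
for _ in range(10):
    obs = rollout(w)                       # sample a trajectory from pi_theta
    act = expert(obs)                      # expert labels the visited observations
    D_obs = np.concatenate([D_obs, obs])   # aggregate: D = D union D_pi
    D_act = np.concatenate([D_act, act])
    # "train the classifier" on D: least-squares fit of a = w*s
    w = float(D_obs @ D_act / (D_obs @ D_obs))

print("learned w:", w)  # matches the expert's -0.5
```

&lt;p&gt;The learner ends up matching the expert on its &lt;em&gt;own&lt;/em&gt; state distribution, which is exactly the mismatch DAgger targets.&lt;/p&gt;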




&lt;h2&gt;
  Implementation
&lt;/h2&gt;

&lt;p&gt;So how can we implement DAgger in practice?&lt;/p&gt;

&lt;p&gt;Well, the algorithm is simple enough, so we'll just translate the pseudo-code into Python; for now, we won't go over the setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This implementation does not require a human.&lt;/strong&gt; Since DAgger only needs something to label the visited observations, instead of cloning the behavior of a human we can run DAgger against a programmatic expert and eventually clone that expert's performance.&lt;/p&gt;

&lt;p&gt;This implementation is based off &lt;a href="https://github.com/jj-zhu/jadagger"&gt;https://github.com/jj-zhu/jadagger&lt;/a&gt;. [5]&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;with tf.train.MonitoredSession() as sess:
    sess.run(tf.global_variables_initializer())
    # record return and std for plotting
    save_mean = []
    save_std = []
    save_train_size = []
    # loop for dagger alg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here we have the initialization of the TensorFlow session we're starting. Note that I changed the usual tf.Session() to tf.train.MonitoredSession() because it comes with some benefits. Other than that, we initialize some arrays we'll use in the algorithm.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    for i_dagger in xrange(50):
        print 'DAgger iteration ', i_dagger
        # train a policy by fitting the MLP
        batch_size = 25
        for step in range(10000):
            batch_i = np.random.randint(0, obs_data.shape[0], size=batch_size)
            train_step.run(feed_dict={x: obs_data[batch_i,], yhot: act_data[batch_i,]})
            if (step % 1000 == 0):
                print 'optimization step ', step
                print 'obj value is ', loss_l2.eval(feed_dict={x: obs_data, yhot: act_data})
        print 'Optimization Finished!'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here we are setting our DAgger algorithm to iterate 50 times. Within the for-loop we start by training our policy (\pi_{\theta}(a_t \vert o_t)) to fit the MLP, which stands for Multilayer Perceptron. It's essentially a standard feedforward NN with an input, hidden, and output layer (very simple).&lt;/p&gt;

&lt;p&gt;To put this step in other words: if this is the 1st iteration, Dave from the earlier example has just gotten some idea of how to drive by watching those YouTube videos. In later iterations, Dave, (\pi_{\theta}(a_t \vert o_t)), is essentially training on the entire dataset (\mathcal{D}). This appears out of order (it's step 8 in our algorithm), but that doesn't matter, since all parts of the algorithm are incorporated in this implementation.&lt;/p&gt;



&lt;p&gt;Next, we use the trained MLP to perform rollouts:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        max_steps = env.spec.timestep_limit

        returns = []
        observations = []
        actions = []
        for i in range(num_rollouts):
            print('iter', i)
            obs = env.reset()
            done = False
            totalr = 0.
            steps = 0
            while not done:
                action = yhat.eval(feed_dict={x:obs[None, :]})
                observations.append(obs)
                actions.append(action)
                obs, r, done, _ = env.step(action)
                totalr += r
                steps += 1   
                if render:
                    env.render()
                if steps % 100 == 0: print("%i/%i" % (steps, max_steps))
                if steps &amp;gt;= max_steps:
                    break
            returns.append(totalr)
        print('mean return', np.mean(returns))
        print('std of return', np.std(returns))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we are rolling out our policy, Dave (in other words, (\pi_{\theta}(a_t \vert o_t))). What does a policy rollout mean?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s-0wuZBK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://data.suryad.com/assets/blog/dagger_rolly.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s-0wuZBK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://data.suryad.com/assets/blog/dagger_rolly.jpg" alt="DAgger Explained" width="400" height="267"&gt;&lt;/a&gt;&lt;a href="https://www.uihere.com/free-cliparts/search?q=Roly-poly&amp;amp;page=2"&gt;https://www.uihere.com/free-cliparts/search?q=Roly-poly&amp;amp;page=2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Does it have something to do with a roly poly rolling out? As you might guess, it doesn't. Rather, it's essentially our policy exploring trajectories, eventually building up the distribution (p_{\pi_{\theta}}(o_t)). This is steps 4 and 5 in our algorithm.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        # expert labeling
        act_new = []
        for i_label in xrange(len(observations)):
            act_new.append(policy_fn(observations[i_label][None, :]))
        # record training size
        train_size = obs_data.shape[0]
        # data aggregation
        obs_data = np.concatenate((obs_data, np.array(observations)), axis=0)
        act_data = np.concatenate((act_data, np.squeeze(np.array(act_new))), axis=0)
        # record mean return &amp;amp; std
        save_mean = np.append(save_mean, np.mean(returns))
        save_std = np.append(save_std, np.std(returns))
        save_train_size = np.append(save_train_size, train_size)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dagger_results = {'means': save_mean, 'stds': save_std, 'train_size': save_train_size,
                  'expert_mean':save_expert_mean, 'expert_std':save_expert_std}
print 'DAgger iterations finished!'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we have the last piece necessary for DAgger: we label the observations in (p_{data}(o_t)) with our expert policy. This is steps 4 and 5 in our algorithm.&lt;/p&gt;
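&lt;p&gt;Abstracting away the MuJoCo specifics, the whole loop can be sketched in a few lines of Python (a toy illustration with stand-in expert and policy, not the actual jadagger code):&lt;/p&gt;

```python
import random

# Toy stand-ins for the real pieces (assumptions, not the actual MuJoCo env
# or neural network policy from the post):
def expert_policy(obs):
    # Step 4: the expert labels an observation with the "right" action.
    return -obs

class ToyPolicy:
    """Our learner ("Dave"): acts in the env and trains on aggregated data."""
    def __init__(self):
        self.dataset = []                  # D, the aggregated (obs, action) pairs
    def act(self, obs):
        return random.uniform(-1, 1)       # untrained: just act randomly
    def train(self, pairs):
        self.dataset.extend(pairs)         # step 1: (re)fit on aggregated data

def rollout(policy, steps=5):
    # Steps 2-3: run the current policy and collect the observations it visits.
    obs, observations = 0.0, []
    for _ in range(steps):
        observations.append(obs)
        obs += policy.act(obs)
    return observations

random.seed(0)
policy = ToyPolicy()
for _ in range(3):                         # DAgger iterations
    observations = rollout(policy)
    labeled = [(o, expert_policy(o)) for o in observations]  # step 4: label
    policy.train(labeled)                  # step 5: aggregate and retrain

print(len(policy.dataset))                 # 15 aggregated training pairs
```

The key point the sketch captures: the dataset grows with observations drawn from the *learner's* own trajectories, which is what distinguishes DAgger from plain behavioral cloning.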

&lt;p&gt;Wasn't that fun? Hopefully you now have a better idea of what DAgger is. You can find the code for this by Jia-Jie Zhu on his &lt;a href="https://github.com/jj-zhu/jadagger"&gt;repo&lt;/a&gt;. If you have any problems, feel free to contact me at &lt;a href="https://blog.suryad.com/cdn-cgi/l/email-protection"&gt;[email protected]&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[1] Ross, Gordon &amp;amp; Bagnell (2010). A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning (the DAgger algorithm) [&lt;a href="https://arxiv.org/abs/1011.0686"&gt;link&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;[2] Deep DAgger Imitation Learning for Indoor Scene Navigation [&lt;a href="http://cs231n.stanford.edu/reports/2017/pdfs/614.pdf"&gt;PDF&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;[3] Schulman, J. (2015). DAGGER and Friends [&lt;a href="http://rll.berkeley.edu/deeprlcourse-fa15/docs/2015.10.5.dagger.pdf"&gt;PDF&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;[4] Levine, S. (2018). CS294-112 Fa18 8/24/18 [&lt;a href="https://www.youtube.com/watch?v=yPMkX_6-ESE&amp;amp;index=24&amp;amp;list=PLkFD6_40KJIxJMR-j5A1mkxK26gh_qg37"&gt;link&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;[5] Zhu, J.J. (2016). jadagger [&lt;a href="https://github.com/jj-zhu/jadagger"&gt;link&lt;/a&gt;]&lt;/p&gt;

</description>
      <category>dagger</category>
      <category>datasetaggregation</category>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building a Self Driving Car using Machine Learning</title>
      <dc:creator>Surya Dantuluri</dc:creator>
      <pubDate>Wed, 15 Aug 2018 07:00:00 +0000</pubDate>
      <link>https://forem.com/sdan/building-a-self-driving-car-using-machine-learning-90m</link>
      <guid>https://forem.com/sdan/building-a-self-driving-car-using-machine-learning-90m</guid>
      <description>&lt;h1&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AMSiqmDX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_k4i_wXZQby3xFUPY.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AMSiqmDX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_k4i_wXZQby3xFUPY.jpeg" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is a story about how a high schooler beat Georgia Tech's robotics team single-handedly, not a technical deep dive. For that, here is the paper corresponding to this project: &lt;a href="https://arxiv.org/abs/1807.08233"&gt;https://arxiv.org/abs/1807.08233&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;None of this would have happened if ECX had made high-quality products.&lt;/p&gt;

&lt;p&gt;Seriously.&lt;/p&gt;

&lt;p&gt;It was the summer of 2013 when my mom, sisters and I went to visit a family friend in Fremont, CA. I had just gotten an RC car off Amazon with money I made at a small Closet Sale (my version of a garage sale, since I lived in an apartment complex). I decided to bring my brand-new RC car to this family friend’s house since they had a road in front that led to a dead end (which meant practically no traffic). I drove it around the street for a good 15 minutes before handing it along to my sisters and the other family friends. They all played around with it for a while, and it was fun to watch this small car drive around at speeds close to 15 mph. Then my mom called all of us in for a snack. While everyone else left, I decided to play around with my car for a couple more minutes before going in. After driving it down the street and back at around 10–15 mph, the car didn’t come to a complete stop when I rotated the control knob to brake. Instead, it came to a slow stop, almost like it was in neutral, which I didn’t think was possible. I was right. The motor was moving, but the car wasn’t anymore.&lt;/p&gt;

&lt;p&gt;I spent the next hour relentlessly debugging the issue. I tried resetting the receiver, resetting the controller, even trying slowly turning the throttle knob to see if it would move, even slightly. The servo worked for sure, but the car wasn’t moving, even if the motor was running.&lt;/p&gt;

&lt;p&gt;It took me weeks to finally get in contact with a Horizon Hobby representative to help me debug the issue. Thirty minutes later, I found out that a part foreign to me at the time, the pinion gear, was broken.&lt;/p&gt;

&lt;p&gt;That’s what started a 5-year career of building and modifying RC cars every summer, starting the summer after 5th grade.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LY-C956g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_k4i_wXZQby3xFUPY--1-.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LY-C956g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_k4i_wXZQby3xFUPY--1-.jpeg" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;ETG is Rolling Around&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Robotics in High School&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Coming off a successful middle school career, I was pumped to continue my development at Monta Vista High School. I was President of the Science Olympiad club and the National Junior Honor Society at my middle school, two of the biggest organizations there. For various reasons, I moved near Monta Vista, one of the most reputable schools in Cupertino/Sunnyvale. I had heard about how hard it was, but never did I think it would be as hard as it turned out to be during Freshman and Sophomore year for me and many others. Anyway, as a pumped middle schooler, I joined the Robotics team and several other clubs right away, hoping for officer positions. Unfortunately, I didn’t get any, and surprisingly not even during Sophomore year (I’ll explain in another blog post). I also joined Cross Country because I had been in Cross Country in 8th grade. Being on the Robotics team and in Cross Country wasn’t the easiest, but it was manageable for me.&lt;/p&gt;

&lt;p&gt;Over time, I realized that the team culture our robotics team had was really toxic. This matched news I had heard about lawsuits filed against this team regarding its culture in previous years. As of writing this blog post, I can’t and don’t want to explain how bad it was for everyone on the robotics team, but it was bad enough that I had to leave.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FAehkp_D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_V9mssslJcGJlomaK-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FAehkp_D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_V9mssslJcGJlomaK-1.jpg" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although a lot of the friends I had made did stick with the team, what ultimately made me decide to leave was realizing that FIRST was more of a mechanical and hardware-based competition. I wasn’t really into the mechanical or hardware side of robotics, since most of the time I didn’t have access to those systems to learn more about them (I didn’t have access to a CNC machine or 3D printer). Sure, software has a role, but many FIRST teams (that I know of) seem to really doubt their Computer Vision skills (or lack thereof) or, more recently, their Machine Learning skills. That makes sense considering FIRST is a competition geared toward high school kids, who generally don’t have degrees or knowledge of complex/advanced CV or ML algorithms and how to implement them for fast-paced matches. Other than CV or ML, the only software work I’ve seen (at Monta Vista’s Robotics Team) is “fine-tuning control”. To this day I’m not really sure what Monta Vista’s Robotics team was doing when “fine-tuning control”, since remote-controlled movement shouldn’t be that hard to implement, especially when other teams open source their code on Github.&lt;/p&gt;

&lt;p&gt;I left the team in June of 2017. This was after we won 1st at the Arizona North Regional competition and 2nd at the International competition. I decided that it was time to leave and go on to do other software heavy robotics projects.&lt;/p&gt;

&lt;p&gt;A couple weeks after leaving, a nearby rival team, Valkyrie Robotics, asked me to head Computer Vision at their team. I accepted the role since most of the members were also Monta Vista students (there’s a huge backstory on Valkyrie’s existence and why it consists of only Monta Vista students). I spent a good amount of time there daily, but soon got caught in robotics drama. I left in July of 2017.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--btYIJq5X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_5ZVH1Koxa8sG8Uw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--btYIJq5X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_5ZVH1Koxa8sG8Uw9.png" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;&lt;em&gt;Picture from: &lt;/em&gt;&lt;a href="https://www.33rdsquare.com/2012/01/what-will-self-driving-cars-mean-for-us.html" rel="nofollow noopener noopener"&gt;&lt;em&gt;&lt;a href="https://www.33rdsquare.com/2012/01/what-will-self-driving-cars-mean-for-us.html"&gt;https://www.33rdsquare.com/2012/01/what-will-self-driving-cars-mean-for-us.html&lt;/a&gt;&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Moving into real Robotics&lt;/strong&gt;
&lt;/h1&gt;




&lt;p&gt;I don’t really think FIRST robotics is real, professional-grade robotics. From my 500 hours there, I’ve learned that FIRST robotics at MVRT and Valkyrie isn’t anything more than plugging some wires into an overpriced machine no more powerful than a $40 Raspberry Pi 3, called the roboRIO. There’s definitely some hardcore CAD that goes into the design every year, and some outreach programs you are required to tout to other teams to show that you care about your community, but that wasn’t enough to convince me to dedicate many hours of my day on a frequent basis to the team.&lt;/p&gt;

&lt;p&gt;That’s why I moved onto implementing autonomy onto the RC car I’ve been modifying since 2013.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Playing with ROS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I started using ROS in August of 2017. Having some experience in C++ and Python, I started building ROS nodes to communicate with one another. One of the first things I did was interface two Turtlebots in a simulation to mirror one another’s movements. Sure, I had to follow the tutorial step by step to get to this stage, but it showed me what ROS could really do.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Teaching ROS to FIRST Robotics Teams&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After leaving the two FIRST teams I was in, I decided to teach ROS to FIRST teams in the Bay Area. Around 10 teams from around the Bay showed up (which I thought was pretty impressive). I taught them the basics and how they could implement them in their robotics. I even showed them a small demo using OpenCV, a Raspberry Pi 3, and a servo motor. Whenever a line on the whiteboard moved to the left or right, the servo would move that way as well. It was an example of publishers and subscribers with OpenCV integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Teaching ROS to Everyone&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I made a video a while back detailing everything you need to know to get started with ROS. I haven’t had time to edit and upload the hour-long video yet, but I will soon.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Getting Started with IARRC&lt;/strong&gt;
&lt;/h1&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How I found out about IARRC&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The same way I found out about AVR: through Georgia Tech’s robotics team. I was interested in Georgia Tech, so I decided to look around and see what their robotics team was doing (I’d heard good reviews about the RoboJackets). I eventually found out about IARRC and was surprised to see that it allowed high schoolers. What was even better was that it allowed solo competitors to join. This was crucial, since I didn’t personally know many friends who were interested in robotics and had some Machine Learning/Computer Science knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Initial Approach&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I spent 9 months working on ROS.&lt;/p&gt;

&lt;p&gt;I had OpenCV integrated, with a mix of basic and advanced algorithms to take care of different parts of the competition. I knew this was going to be pretty computationally expensive, but not so much that an RPI3 and a Dragonboard 410c working in parallel would be exhausted of memory and CPU power.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zjo_MTOX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_GcSK3GsoBJcAR-BX.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zjo_MTOX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_GcSK3GsoBJcAR-BX.jpg" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;&lt;em&gt;Representation of what I had done. By: &lt;/em&gt;&lt;a href="http://www.yisystems.com/" rel="nofollow noopener noopener"&gt;&lt;em&gt;&lt;a href="http://www.yisystems.com"&gt;http://www.yisystems.com&lt;/a&gt;&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I couldn’t afford a $600 Nvidia TX2 board (even with the educational discount) and had no other option than to scrap the entire ROS codebase I had written tirelessly, day and night.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Machine Learning Approach&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For some reason, I learned some Linear Algebra in Pre-Calculus, so I extended my knowledge of Linear Algebra by taking a look at the Deep Learning Book by Ian Goodfellow and other various tutorials. I quickly grew interested in the field of Machine Learning because I thought Generative Adversarial Networks (GANs) were pretty cool back in 2017 and even got invited to the Tensorflow Developer Summit back in March.&lt;/p&gt;

&lt;p&gt;Another reason I was interested in Machine Learning was the various ways I saw it being used to help others. It was being used to diagnose diseases in humans, animals, and plants, as well as to advance science, as shown in the Particle Tracking Challenge by CERN.&lt;/p&gt;

&lt;p&gt;So I started working on implementing Machine Learning in my RPI3 and Arduino stack since it was the only option that didn’t require heavy computation power.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Working on IARRC&lt;/strong&gt;
&lt;/h1&gt;




&lt;h4&gt;
  
  
  TEAM
&lt;/h4&gt;

&lt;p&gt;Just myself. Just a high schooler going against top-tier University teams who could be fully funded by their Universities and could have sponsorships.&lt;/p&gt;

&lt;h4&gt;
  
  
  HARDWARE / MECHANICAL
&lt;/h4&gt;

&lt;p&gt;Remember when I said I left MVRT because I didn’t like the mechanical/hardware part? I still don’t 100% love it, but I learned to at least like it enough to build my robot. Hardware took me the longest, since designing parts and figuring out what components I needed took time and research. Stuff like which boards were compatible with the I2C protocol was foreign to me a couple months ago, but now I think I’ve gotten a good grasp of the various protocols I used (and made), enough to use them for brushed motor and servo motor control.&lt;/p&gt;

&lt;h4&gt;
  
  
  LEARNING CAD IN 3 DAYS
&lt;/h4&gt;

&lt;p&gt;Sometime at the end of June, I realized that my noob Tinkercad skills weren’t going to cut it. I made a couple of rough models in Tinkercad, but there were several limitations to what you can do in CAD software that’s purely online. After doing some research, I found that Fusion360 was the best option. It had several tutorials for noobs who had used Tinkercad in the past and wanted more functionality, like me.&lt;/p&gt;

&lt;p&gt;It took me a grueling 3 complete days and nights to learn Fusion360 from top to bottom.&lt;/p&gt;

&lt;p&gt;I’m not a pro yet, but I’ve learned most of the functionality there is to Fusion360 (I’m not familiar with all the keyboard shortcuts though). I used my knowledge to create a cost-efficient CAD model for my car. I created multiple flat layers to cut down costs while maintaining functionality for the robot.&lt;/p&gt;

&lt;p&gt;After a couple more days of designing and building, I finally built a model that incorporated all the components I needed for the robot.&lt;/p&gt;

&lt;h4&gt;
  
  
  3D PRINTING IN THE BAY AREA
&lt;/h4&gt;

&lt;p&gt;3D printing in the Bay Area is hard. Really hard. At least for me. I called over 30 printing services in the Bay Area and found no price lower than $3,000 for my model. Even spaces like HackerDojo or other makerspaces weren’t available or didn’t answer my calls or emails.&lt;/p&gt;

&lt;p&gt;With the death of TechShop, the only 3D printing service I knew of was the Sunnyvale Library. But with the dimensions of my model, there was no way I could get it to be printed. Even if my model was in the dimensions of their printer, the earliest date I found on the library’s website to when they were available to print was in November. That’s over a 5-month early booking!&lt;/p&gt;

&lt;p&gt;So I went with literally my last option, 3D Hubs. It lets people with 3D printers from all over the world print your item at a low cost. The only reason I was hesitant to buy from 3D Hubs was that every time I looked for a printer, I got prices in the range of $300-$500. Although this was a tenth of the prices I got from shops around the Bay, it was still out of my budget.&lt;/p&gt;

&lt;p&gt;Long story short, I got a 3D printing service order from Missouri. They gave me a reasonable price under $100 which I thought was okay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BDed6r1V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_xAUc4HikFtvZIwfN.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BDed6r1V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_xAUc4HikFtvZIwfN.jpg" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;CAD Model of Upper Base for ETG&lt;/p&gt;

&lt;h4&gt;
  
  
  PUTTING IT TOGETHER
&lt;/h4&gt;

&lt;p&gt;I have a lot of experience in woodworking. I’ve since moved on to computer science, but the skills stuck. I used them to build a structure that put all the CAD modules together.&lt;/p&gt;

&lt;h4&gt;
  
  
  SOFTWARE INTEGRATION
&lt;/h4&gt;

&lt;p&gt;Software integration is the middle layer between Hardware/Mechanical and Software. It was also one of the biggest challenges I thought I would have. I was right.&lt;/p&gt;

&lt;p&gt;There were times I went on Youtube and saw really cool RC cars driving around in circles or driving autonomously. Every time, however, I was really puzzled about how they integrated their Python code into their cars.&lt;/p&gt;

&lt;p&gt;This was still the case when trying to implement the Machine Learning Python code (previously ROS/OpenCV) into the car.&lt;/p&gt;

&lt;p&gt;I eventually taught myself the I2C protocol and made my own protocol between the Arduino and RPI3 using the Pyserial library (my own protocol gave me faster and more reliable data transfer between the boards).&lt;/p&gt;
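&lt;p&gt;To give a flavor of what a homemade board-to-board protocol looks like, here is a hedged sketch (not my exact byte format; the frame layout, values, and device path are made up for illustration):&lt;/p&gt;

```python
import struct

# Hypothetical frame layout for the Pi-to-Arduino link: a start byte,
# steering (0-180), throttle (0-180), and an XOR checksum so the Arduino
# can reject corrupted frames.
START = 0xFF

def encode_command(steering, throttle):
    """Pack a steering/throttle pair into a 4-byte frame."""
    return struct.pack("BBBB", START, steering, throttle, steering ^ throttle)

def decode_command(frame):
    """Unpack a frame and verify its checksum; returns (steering, throttle)."""
    start, steering, throttle, checksum = struct.unpack("BBBB", frame)
    if start != START or checksum != (steering ^ throttle):
        raise ValueError("corrupted frame")
    return steering, throttle

# On the Pi, the frame would go out over USB serial with Pyserial, e.g.:
#   import serial
#   port = serial.Serial("/dev/ttyACM0", 115200)   # device path is a guess
#   port.write(encode_command(90, 100))
```

A fixed-size binary frame like this is faster to parse on the Arduino side than text commands, which is roughly why rolling your own protocol can beat a naive one.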

&lt;p&gt;PWM control wasn’t that hard after I had found the PCA9685. I used the Adafruit library to finally control the throttle and servo.&lt;/p&gt;
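&lt;p&gt;The math behind that control is simple: the PCA9685 splits each PWM period into 4096 ticks, so driving a servo is just converting a pulse width into tick counts. A minimal sketch, assuming a 60 Hz refresh rate (the library calls shown in comments follow the legacy Adafruit_PCA9685 API):&lt;/p&gt;

```python
# The PCA9685 has 12-bit resolution: 4096 ticks per PWM period.
FREQ_HZ = 60  # typical refresh rate for hobby servos and ESCs (assumption)

def pulse_ticks(pulse_ms, freq_hz=FREQ_HZ):
    """Convert a pulse width in milliseconds to PCA9685 ticks (out of 4096)."""
    period_ms = 1000.0 / freq_hz       # about 16.67 ms per period at 60 Hz
    return int(4096 * pulse_ms / period_ms)

# 1.5 ms is the usual servo center; roughly 1.0 ms and 2.0 ms are the extremes.
# With the Adafruit_PCA9685 library, the hardware calls would look like:
#   import Adafruit_PCA9685
#   pwm = Adafruit_PCA9685.PCA9685()
#   pwm.set_pwm_freq(FREQ_HZ)
#   pwm.set_pwm(0, 0, pulse_ticks(1.5))  # channel 0: center the steering servo
```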

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d-svbAje--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_zRclaKU1Xn8qP3tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d-svbAje--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_zRclaKU1Xn8qP3tf.png" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;Network built for ETG&lt;/p&gt;

&lt;h4&gt;
  
  
  SOFTWARE
&lt;/h4&gt;

&lt;p&gt;Anything I explain here would be nothing more than copy and paste from what I wrote in my paper for IARRC this year, so here’s a brief summary instead. I’ve also attached a link to my paper at the end of this section and the beginning of this blog post.&lt;/p&gt;

&lt;h4&gt;
  
  
  BRIEF SUMMARY
&lt;/h4&gt;

&lt;p&gt;I used Tensorflow to build a DCNN for Traffic Light Detection. I trained it on around 5,000 images that I pulled off Google Images. I had a 100-image test set and a 1,000-image validation set.&lt;/p&gt;

&lt;p&gt;I used Tensorflow to build a DCNN to predict Steering Wheel values. I trained it on around 50,000 images I collected during the pre-competition day they gave us. I had a 5,000-image validation set and a 100-image test set.&lt;/p&gt;

&lt;p&gt;I used Tensorflow (I solely used Tensorflow, although I made a model in Caffe that had several problems running on the RPI3) to build an RNN/LSTM model to predict Throttle values. I explain this in my paper, but essentially it prevents rapid changes or stutters between drive loops. In other words, it smoothly increases and decreases speed instead of rapidly accelerating and decelerating at random instances. During testing, the DCNN model actually broke the camera module a couple of times (as well as the ESC, for some reason).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yRWMRAtw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_jnDL5kb8Ym6aYLXG.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yRWMRAtw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_jnDL5kb8Ym6aYLXG.jpg" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;&lt;em&gt;Salient Object Visualization. Video here: &lt;/em&gt;&lt;a href="https://www.youtube.com/watch?v=lFjsN7KcKIE" rel="nofollow noopener noopener"&gt;&lt;em&gt;&lt;a href="https://www.youtube.com/watch?v=lFjsN7KcKIE"&gt;https://www.youtube.com/watch?v=lFjsN7KcKIE&lt;/a&gt;&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  I EXPLAIN A LOT FURTHER OF WHAT I DID IN MY PAPER.
&lt;/h4&gt;

&lt;h4&gt;
  
  
  GRAB A COPY AT THE FOLLOWING LOCATIONS:
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Publication&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arXiv&lt;/td&gt;
&lt;td&gt;&lt;a href="https://arxiv.org/abs/1807.08233"&gt;https://arxiv.org/abs/1807.08233&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Surya Dantuluri Paper Archive&lt;/td&gt;
&lt;td&gt;&lt;a href="https://sdan.cc/archive/1807.08233.pdf"&gt;https://sdan.cc/archive/1807.08233.pdf&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ResearchGate&lt;/td&gt;
&lt;td&gt;&lt;a href="https://goo.gl/GoEj9s"&gt;https://goo.gl/GoEj9s&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Traveling&lt;/strong&gt;
&lt;/h1&gt;




&lt;h2&gt;
  
  
  Airborne
&lt;/h2&gt;

&lt;p&gt;I wasn’t done with the robot when it was time to travel. I had started having memory issues on the RPI3 when running the main Python script for more than a minute. This panicked me throughout the flight, and even prompted me to continue coding for the extent of the overnight flight.&lt;/p&gt;

&lt;p&gt;I eventually fixed the errors, but in a way that was not stable at all. These final days before the competition reminded me of times before a hackathon’s deadline.&lt;/p&gt;

&lt;p&gt;I had no sleep for the last week before the competition.&lt;/p&gt;

&lt;h2&gt;
  
  
  Landing in Canada
&lt;/h2&gt;

&lt;p&gt;The humidity difference hit me hard. The last time I experienced humidity levels that high was in India and Singapore. People also seemed like they were trying to escape into an AC-equipped space as fast as they could. I’ve heard the temperature I experienced was an anomaly compared to typical temperatures in Canada, which makes more sense.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Raceday&lt;/strong&gt;
&lt;/h1&gt;




&lt;h2&gt;
  
  
  Thursday
&lt;/h2&gt;

&lt;p&gt;I was in Waterloo by Thursday. My mom and I went around the huge University of Waterloo campus. It was as big as a city! There were new buildings everywhere and a big student population still roaming around during the summer.&lt;/p&gt;

&lt;p&gt;I even decided to look at the parking lot where I would be spending the next two days racing my ETG robot.&lt;/p&gt;

&lt;p&gt;Unfamiliar with Canada, the University of Waterloo, and its students, I treaded through Engineering Building 5 with caution. I found a team who looked surprisingly young testing their bot in the building, but didn’t believe that they were doing it for IARRC. I didn’t know at the time that you could freely stay in the building and work on your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Friday
&lt;/h2&gt;

&lt;p&gt;That team I saw on Thursday became friends with me by Friday. They had a somewhat basic OpenCV + Arduino robot, similar to many University-level robots. I have to say they had some high-quality people on the team who knew everything about their bot and seemed genuinely passionate about it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qI9wQD98--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_2OipxHbwykMPpKkr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qI9wQD98--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.suryad.com/content/images/2019/08/0_2OipxHbwykMPpKkr.jpg" alt="Building a Self Driving Car using Machine Learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Initially, my robot was doing well roaming around. I was happy that at least the basic manual control worked over SSH. Then I started having the worst problem that I imagined.&lt;/p&gt;

&lt;p&gt;The ESC broke.&lt;/p&gt;

&lt;p&gt;Or at least I thought it did. I heard Ottabotics, in their words, “quit” after their ESC broke, so I thought this was the end of the road for me. I hoped it wasn’t, since I had come from over 4,200 kilometers away, and I hoped a simple $50 piece of hardware hadn’t completely ruined my robot.&lt;/p&gt;

&lt;p&gt;Perseverance was one of the things a lot of the judges and organizers told me that they saw in me. I thought the same thing as well. I came upon two huge hurdles during testing days and somehow managed to debug and fix the issue in record time. Regarding the ESC, it took a lot of debugging and electrical engineering knowledge to finally figure out that an external power supply was not needed to power the power-heavy servo motor.&lt;/p&gt;

&lt;p&gt;Debugging the ESC took me 3 hours of tireless work. Then I started collecting data. I knew that collecting track data was the only good use of my time and battery power instead of taking the ETG on joy-rides. I collected around 40,000 good images, or so I thought.&lt;/p&gt;

&lt;h2&gt;
  
  
  Saturday
&lt;/h2&gt;

&lt;p&gt;Given how chill the University and High School teams were (in comparison to any competition in general in the Bay Area for High Schoolers) I was also pretty relaxed, but still ready to go.&lt;/p&gt;

&lt;p&gt;I had just trained all my models on a cluster of Nvidia GPUs for an hour. I had gotten good loss scores, so I thought my car was ready for the track.&lt;/p&gt;

&lt;p&gt;When I initially tried it out, I got an error about some dimensionality issues between the model structure I was running on the car and the one on the GPU cluster. I didn’t think it was that big of an issue, because I probably hadn’t updated my RPI3 software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Github
&lt;/h2&gt;

&lt;p&gt;Github started my coding career. However, my lack of organization during this process (as shown in my contribution history over the past two months) caused me to train a model that was on a different branch. It took me an hour of debugging to find this issue on the ground in the parking lot. For some reason, the middle of the parking lot was the best place where I could connect to the internet in Building 5 and have a connection to the RPI3 over SSH.&lt;/p&gt;

&lt;p&gt;After figuring out the issue, I started training my model in the rain. For some odd reason it had started raining, even though it was pretty hot and sunny outside. I trained another model just to see if I could get a lower loss score (lower, but not so much that it was overfitting).&lt;/p&gt;

&lt;p&gt;It was already unconventional to be one of the only high schoolers there, but training my model in the parking lot replaced essentially everything all the other teams had done with OpenCV and ROS over the past year.&lt;/p&gt;

&lt;p&gt;That’s something I still can’t wrap my head around to this day.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clockwise not Counterclockwise
&lt;/h2&gt;

&lt;p&gt;So I trained all the data I was talking about for the car to go around the circuit and drag races in a counterclockwise direction. I learned soon that the race would go in a clockwise fashion.&lt;/p&gt;

&lt;p&gt;I spent the next 3 hours pre-processing all the data I had. I whipped out some rusty Java skills I had from APCS and some Pillow documentation in order to flip angle values in the 40,000+ metadata files and flip the 40,000+ images horizontally so that I could train the model to go clockwise.&lt;/p&gt;
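&lt;p&gt;The idea of that pre-processing boils down to this little sketch (a toy illustration, not my exact script; the filename in the comment is made up):&lt;/p&gt;

```python
def flip_sample(image_rows, steering_angle):
    """Mirror an image (given as rows of pixel values) and negate its
    steering angle, so counterclockwise data teaches clockwise driving."""
    flipped = [list(reversed(row)) for row in image_rows]
    return flipped, -steering_angle

# A tiny 2x3 "image":
image = [[1, 2, 3],
         [4, 5, 6]]
flipped, angle = flip_sample(image, 0.3)
# flipped == [[3, 2, 1], [6, 5, 4]] and angle == -0.3

# With Pillow, mirroring a real image file is a single call:
#   from PIL import Image, ImageOps
#   ImageOps.mirror(Image.open("frame.jpg")).save("frame_flipped.jpg")
```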

&lt;p&gt;By the time I was done training this model, 2nd circuit races had finished and I couldn’t log a much better score. I tested it before they removed the track and it did quite well, possibly allowing me to place top 5.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ending
&lt;/h2&gt;

&lt;p&gt;I did my matches pretty late. I didn’t even think I should’ve done the 2nd circuit race at the time because I was sure I was far behind everyone else. Only now do I regret that decision.&lt;/p&gt;

&lt;p&gt;Everyone cleaned up their materials and waited for the results of the match.&lt;/p&gt;

&lt;p&gt;The results were pretty obvious, with Poland getting 1st, VAUL getting 2nd, and a team from Thailand getting 3rd.&lt;/p&gt;

&lt;p&gt;Final scores came out a week later. I came in 9th place against university-level teams, outscoring Georgia Tech and a couple of Canadian university teams.&lt;/p&gt;

&lt;p&gt;Outscoring Georgia Tech was something I never expected.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Conclusions and My Limitations&lt;/strong&gt;
&lt;/h1&gt;




&lt;p&gt;This year-long journey was a long, strenuous, emotional, and above all fun adventure. I had many dreams as a middle schooler, and this was one that, over time, I stopped believing I could achieve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outscoring Georgia Tech on my own
&lt;/h2&gt;

&lt;p&gt;This is something I thought would be impossible.&lt;/p&gt;

&lt;p&gt;I lived in an apartment, which meant I didn’t have a garage. Not even power tools.&lt;/p&gt;

&lt;p&gt;I never had a mentor or coach, and I didn’t have the money to hire the kind of coaches nearly every friend I know has for their research projects.&lt;/p&gt;

&lt;p&gt;So I created a 100% original and organic research paper out of my grit and passion for Computer Science and Robotics.&lt;/p&gt;

&lt;p&gt;I taught myself everything I explained in this blog post and paper.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;So to say that I outscored Georgia Tech on my own is a testament to my work and passion.&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Ending the Journey this Year
&lt;/h2&gt;

&lt;p&gt;I’ve written over 1.9 million lines of code in the year and a half since I learned to code. Some might think I’m crazy, but I find coding addicting. Starting in September, I’ve decided to take a break for the rest of the year (unless I go to PennApps) so I can catch up with my life outside of CS. Hopefully this will give me time to think of new ideas and new approaches to the problems I’ve encountered in my code right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaways from IARRC 2018
&lt;/h2&gt;

&lt;p&gt;I haven’t run post-project tests on the IARRC repo I made yet. These tests give me various statistics about the repo. I’m guessing I wrote upwards of 10,000 lines of code, including code that I didn’t use on the vehicle during the competition.&lt;/p&gt;

&lt;p&gt;As I explained in my paper, I’ve found that machine learning methods can actually rival traditional, computationally expensive computer vision, path planning, and localization algorithms. With this proof of concept, I’m planning to let the ETG 2 learn on its own (hint: definitely not RL).&lt;/p&gt;

&lt;p&gt;The Canadians were really nice and understanding, and the IARRC competition ran without a flaw. The international teams didn’t seem to be competing against one another so much as having fun driving little robots around, and teams collaborated and helped each other out during the competition. That is what research and advancement in fields like self-driving cars should look like: having fun, competing, and collaborating.&lt;/p&gt;

&lt;p&gt;I’m definitely coming back to meet these extremely diverse, passionate, and compassionate teams next year to have some fun.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Acknowledgements (time sequential)&lt;/strong&gt;
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;Caleb Kirksey

&lt;ul&gt;
&lt;li&gt;He gave me some initial help and encouragement&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;kangalow from JetsonHacks

&lt;ul&gt;
&lt;li&gt;He gave me some advice on what ESC to choose&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Mohamed Alzaki

&lt;ul&gt;
&lt;li&gt;He gave me some advice on how he implemented Ultrasonic Sensors into his robot&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Tawn Kramer

&lt;ul&gt;
&lt;li&gt;He gave me advice on how to integrate Hardware with Software in his videos
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
  </channel>
</rss>
