<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Paul DeCarlo</title>
    <description>The latest articles on Forem by Paul DeCarlo (@toolboc).</description>
    <link>https://forem.com/toolboc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F145146%2Fedc472f6-0902-4922-85d8-e0bc69a81bd8.jpeg</url>
      <title>Forem: Paul DeCarlo</title>
      <link>https://forem.com/toolboc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/toolboc"/>
    <language>en</language>
    <item>
      <title>How GPU-Powered Coding Agents Can Assist in Development of GPU-Accelerated Software</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Sun, 01 Mar 2026 03:36:32 +0000</pubDate>
      <link>https://forem.com/toolboc/how-gpu-powered-coding-agents-can-assist-in-development-of-gpu-accelerated-software-4fhk</link>
      <guid>https://forem.com/toolboc/how-gpu-powered-coding-agents-can-assist-in-development-of-gpu-accelerated-software-4fhk</guid>
      <description>&lt;h1&gt;
  
  
  How GPU-Powered Coding Agents Can Assist in Development of GPU-Accelerated Software
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hhxqsh7b8ttm9wacefd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hhxqsh7b8ttm9wacefd.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dream: Transcribe Your Entire Media Library on a Device That Fits in Your Hand
&lt;/h2&gt;

&lt;p&gt;Imagine owning a massive Plex media library with hundreds of foreign-language films and TV shows. You want subtitles for everything, but manually sourcing them is a nightmare — mismatched timings, missing translations, incomplete coverage. Tools like &lt;a href="https://www.bazarr.media/" rel="noopener noreferrer"&gt;Bazarr&lt;/a&gt; exist specifically to automate subtitle management for Plex and Sonarr/Radarr libraries, and they ship with built-in integration for &lt;a href="https://github.com/ahmetoner/whisper-asr-webservice" rel="noopener noreferrer"&gt;whisper-asr-webservice&lt;/a&gt; — a self-hosted REST API that wraps OpenAI's Whisper speech recognition model. Point Bazarr at a whisper-asr-webservice endpoint, and it will automatically transcribe and generate subtitles for every piece of media in your library, in any language Whisper supports.&lt;/p&gt;

&lt;p&gt;There's just one problem: running Whisper fast enough to be practical requires a GPU, and the existing Docker images only support x86_64 with NVIDIA desktop or server GPUs. If you want a quiet, power-efficient, always-on transcription appliance — something you can tuck behind your NAS and forget about — the NVIDIA Jetson platform is the obvious choice. An Orin Nano draws under 15 watts, fits in the palm of your hand, and packs a 1024-core Ampere GPU with hardware support for the same CUDA operations that Whisper needs. A single portable Docker container running on a Jetson could silently chew through your entire library in the background, generating subtitles on demand whenever new media arrives.&lt;/p&gt;

&lt;p&gt;The question was: could we actually build that container? The answer turned out to be a story about how GPU-powered AI coding agents can come full circle — using GPU-accelerated tools to build GPU-accelerated software for GPU-accelerated hardware.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Historical Pain of Porting to aarch64
&lt;/h2&gt;

&lt;p&gt;Anyone who has tried to compile PyTorch, CTranslate2, or onnxruntime for ARM hardware knows the pain. The Python AI/ML ecosystem was born on x86_64 Linux and macOS, and its package infrastructure carries deep assumptions about that lineage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PyTorch&lt;/strong&gt; is the foundation of nearly every modern speech recognition system. On x86, you &lt;code&gt;pip install torch&lt;/code&gt; and get a CUDA-enabled wheel in seconds. On aarch64, that same command gives you a CPU-only build — or nothing at all. For years, getting a CUDA-enabled PyTorch on Jetson meant manually compiling from source against NVIDIA's JetPack SDK, a process that could take hours on the device itself and was fragile across JetPack versions. NVIDIA eventually began publishing pre-built wheels through the &lt;a href="https://pypi.jetson-ai-lab.io" rel="noopener noreferrer"&gt;Jetson AI Lab pip index&lt;/a&gt;, but using them correctly requires understanding a subtle and underdocumented packaging conflict: pip's wheel compatibility sorting prefers &lt;code&gt;manylinux_2_28&lt;/code&gt; tags over &lt;code&gt;linux_aarch64&lt;/code&gt;, which means if both PyPI and the Jetson index are available as pip sources, pip will happily install the CPU-only PyPI wheel instead of the CUDA-enabled Jetson wheel. You must use &lt;code&gt;--index-url&lt;/code&gt; (making Jetson the &lt;em&gt;primary&lt;/em&gt; source), not &lt;code&gt;--extra-index-url&lt;/code&gt; (which makes it secondary).&lt;/p&gt;
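&lt;p&gt;Because pip fails silently here (you still get &lt;em&gt;a&lt;/em&gt; torch wheel, just the wrong one), it is worth failing fast during the image build. A minimal sketch, assuming only that Jetson CUDA wheels carry a local version tag like &lt;code&gt;+cu126&lt;/code&gt; while the CPU-only PyPI wheel does not; the helper name is my own:&lt;/p&gt;

```python
# Hypothetical guard against pip silently installing a CPU-only wheel.
# Jetson CUDA wheels typically carry a local version segment such as
# "+cu126"; the CPU-only PyPI aarch64 wheel has no local segment.
def is_cuda_build(version):
    """Return True if a PEP 440 version string carries a CUDA local tag."""
    parts = version.split("+", 1)
    return len(parts) == 2 and parts[1].startswith("cu")

# In a Dockerfile RUN step you might fail the build immediately:
# python3 -c "import torch, sys; sys.exit(0 if '+cu' in torch.__version__ else 1)"
```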

&lt;p&gt;&lt;strong&gt;CTranslate2&lt;/strong&gt;, the inference backend that faster-whisper uses to run Whisper models efficiently, is another casualty. PyPI publishes aarch64 wheels, but they are CPU-only. There is no CUDA-enabled aarch64 wheel. Getting GPU acceleration on Jetson means compiling CTranslate2 from source with &lt;code&gt;-DWITH_CUDA=ON&lt;/code&gt;, linking against the JetPack CUDA toolkit, and targeting the correct CUDA compute capability for your specific Jetson hardware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Poetry&lt;/strong&gt;, the dependency manager used by whisper-asr-webservice, adds another layer of complexity. Poetry's resolver has no concept of "this package must come from this alternate index for this platform." When you run &lt;code&gt;poetry install&lt;/code&gt;, it merges all dependency specifications and resolves them against PyPI. On Jetson, this means Poetry will cheerfully overwrite your carefully pre-installed CUDA-enabled PyTorch with a CPU-only wheel from PyPI, because the version constraint matches and Poetry doesn't know the difference. The project's &lt;code&gt;poetry-core&lt;/code&gt; PEP 517 metadata generation also merges &lt;code&gt;[tool.poetry.dependencies]&lt;/code&gt; source mappings with &lt;code&gt;[project.optional-dependencies]&lt;/code&gt;, producing version constraints like &lt;code&gt;torch==2.7.1+cu126&lt;/code&gt; that don't match the actual Jetson wheel version at all.&lt;/p&gt;
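&lt;p&gt;One defensive pattern (my own sketch, in the spirit of the constraints approach rather than a reproduction of the actual Dockerfile) is to pin the CUDA wheels that are already installed into a pip constraints file before any broader dependency resolution runs, so nothing can downgrade them:&lt;/p&gt;

```python
# Sketch: freeze already-installed CUDA wheels into a pip constraints
# file so later installs cannot replace them with CPU-only builds.
# The package list is an assumption; adapt it to what you pre-installed.
from importlib import metadata

PROTECTED = ["torch", "torchaudio", "ctranslate2"]

def cuda_constraints(packages=PROTECTED):
    lines = []
    for name in packages:
        try:
            lines.append(f"{name}=={metadata.version(name)}")
        except metadata.PackageNotFoundError:
            pass  # not installed in this stage; nothing to pin
    return "\n".join(lines)

# Write the result to constraints.txt, then install the rest with:
#   pip install -c constraints.txt -r requirements.txt
```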

&lt;p&gt;These are not exotic edge cases. They are the &lt;em&gt;default experience&lt;/em&gt; of trying to port GPU-accelerated Python software to aarch64. And they are exactly the kind of deeply contextual, multi-layered problems that AI coding agents excel at navigating.&lt;/p&gt;

&lt;h2&gt;
  
  
  CUDA Architecture on L4T: The Edge Cases That x86 Takes for Granted
&lt;/h2&gt;

&lt;p&gt;NVIDIA's Linux for Tegra (L4T) is the OS layer that underpins JetPack on Jetson devices. While x86 CUDA development benefits from a relatively uniform environment — install the CUDA toolkit, install the driver, compile for &lt;code&gt;sm_70&lt;/code&gt; through &lt;code&gt;sm_90&lt;/code&gt; and let the JIT handle the rest — Jetson development requires precise awareness of the hardware-software matrix:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Jetson Generation&lt;/th&gt;
&lt;th&gt;Compute Capability&lt;/th&gt;
&lt;th&gt;L4T Branch&lt;/th&gt;
&lt;th&gt;JetPack&lt;/th&gt;
&lt;th&gt;CUDA&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Nano / TX2&lt;/td&gt;
&lt;td&gt;sm_53 / sm_62&lt;/td&gt;
&lt;td&gt;R32.x&lt;/td&gt;
&lt;td&gt;4.x&lt;/td&gt;
&lt;td&gt;10.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xavier NX / AGX&lt;/td&gt;
&lt;td&gt;sm_72&lt;/td&gt;
&lt;td&gt;R35.x&lt;/td&gt;
&lt;td&gt;5.x&lt;/td&gt;
&lt;td&gt;11.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Orin Nano / NX / AGX&lt;/td&gt;
&lt;td&gt;sm_87&lt;/td&gt;
&lt;td&gt;R36.x&lt;/td&gt;
&lt;td&gt;6.x&lt;/td&gt;
&lt;td&gt;12.6&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;On x86, compiling CTranslate2 with &lt;code&gt;-DCUDA_ARCH_LIST="7.0;7.5;8.0;8.6;8.9;9.0"&lt;/code&gt; covers virtually every GPU from 2017 to 2024. On Jetson, you compile for exactly one architecture — &lt;code&gt;8.7&lt;/code&gt; for Orin — and the base image must match your JetPack version precisely because the CUDA toolkit, cuDNN, and TensorRT are all provided by the L4T base image rather than installed separately.&lt;/p&gt;
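&lt;p&gt;That single-architecture rule is simple enough to encode. A hypothetical helper that maps the table above onto the CMake flag CTranslate2 expects (the capability values come from the table; the function itself is illustrative):&lt;/p&gt;

```python
# Compute capabilities per Jetson generation, from the table above.
JETSON_ARCH = {
    "nano": "5.3",
    "tx2": "6.2",
    "xavier": "7.2",
    "orin": "8.7",
}

def ctranslate2_arch_flag(generation):
    """Return the single-architecture CMake flag for a Jetson target."""
    return "-DCUDA_ARCH_LIST=" + JETSON_ARCH[generation]
```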

&lt;p&gt;There's also the cuBLAS conflict problem: the JetPack base image ships a system cuBLAS in &lt;code&gt;/usr/local/cuda/lib64&lt;/code&gt;. When you install &lt;code&gt;nvidia-cudss-cu12&lt;/code&gt; from pip (required because the Jetson PyTorch wheel links against &lt;code&gt;libcudss.so.0&lt;/code&gt;), it pulls in &lt;code&gt;nvidia-cublas-cu12&lt;/code&gt; as a transitive dependency. Loading two different versions of cuBLAS at runtime causes &lt;code&gt;CUBLAS_STATUS_ALLOC_FAILED&lt;/code&gt; — a cryptic error that only manifests when the model actually tries to run a matrix multiplication on the GPU. The fix is to uninstall the pip cuBLAS immediately after installing cudss, and ensure &lt;code&gt;LD_LIBRARY_PATH&lt;/code&gt; does &lt;em&gt;not&lt;/em&gt; include the pip cublas lib directory.&lt;/p&gt;
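&lt;p&gt;The &lt;code&gt;LD_LIBRARY_PATH&lt;/code&gt; half of that fix can be sketched as a small filter. The &lt;code&gt;nvidia/cublas&lt;/code&gt; path fragment is where pip normally places the library; treat it as an assumption:&lt;/p&gt;

```python
# Sketch: drop pip-installed cuBLAS directories from LD_LIBRARY_PATH so
# the dynamic loader only ever resolves the JetPack system cuBLAS in
# /usr/local/cuda/lib64.
def scrub_pip_cublas(ld_library_path):
    kept = []
    for entry in ld_library_path.split(":"):
        if entry and "nvidia/cublas" not in entry:
            kept.append(entry)
    return ":".join(kept)
```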

&lt;p&gt;These are the kinds of platform-specific gotchas that would take a human developer hours of Stack Overflow browsing and GitHub issue trawling to diagnose. An AI coding agent with knowledge of the Jetson ecosystem can identify and resolve them in the flow of a single conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup: VS Code, Claude Opus 4.6, and Source Code
&lt;/h2&gt;

&lt;p&gt;The ingredients for this solution were deliberately minimal. We used &lt;strong&gt;VS Code&lt;/strong&gt; as the development environment, outfitted with &lt;strong&gt;GitHub Copilot powered by Claude Opus 4.6&lt;/strong&gt; as the AI coding agent, with the &lt;strong&gt;whisper-asr-webservice source code&lt;/strong&gt; cloned locally on the Jetson device itself. That's it — an editor, a model, and a codebase. No specialized Jetson development tools, no cross-compilation toolchains, no reference implementations to copy from.&lt;/p&gt;

&lt;p&gt;What made this combination potent was the intersection of three capabilities: Claude Opus 4.6's deep knowledge of CUDA toolchains, Python packaging, and Docker multi-stage builds; VS Code's integrated terminal giving the agent direct access to build and test on the target hardware; and the source code providing the agent full visibility into the project's dependency structure, build system, and runtime architecture. The agent could read &lt;code&gt;pyproject.toml&lt;/code&gt; to understand Poetry's dependency graph, inspect the existing x86 Dockerfiles for patterns, examine the application code to understand which libraries each ASR engine imports, and then synthesize all of that into a Jetson-specific build — all within a single conversational session.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Prompt That Started It All
&lt;/h2&gt;

&lt;p&gt;The session began with a single natural-language prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Bro, I need some help, is there a way you might be able to figure out how to build this project in a Docker container with support for GPU acceleration on the NVIDIA Jetson hardware we are currently running on.  Specifically, wthe openai-whisper, whisperx, and faster-whisper dependencies are going to need to be built from source to include acceleration on this device.  Poetry is going to annoy you because the trick will be solving the additional dependencies without breaking the full project.  The resulting solution should be a single docker file that builds specifically on Jetson.  This is going to be tough, can you try?"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's it. No architecture document. No step-by-step instructions. No prior Dockerfile to copy from. The agent needed to understand the project structure, identify which components required platform-specific builds, design a multi-stage Dockerfile strategy, and navigate every one of the compatibility landmines described above.&lt;/p&gt;

&lt;p&gt;The resulting &lt;code&gt;Dockerfile.jetson&lt;/code&gt; is nearly 400 lines of carefully sequenced build steps with extensive documentation explaining &lt;em&gt;why&lt;/em&gt; each decision was made — not just what it does. The three-stage build strategy emerged organically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Stage 1&lt;/strong&gt;: Compile CTranslate2 from source with CUDA support, targeting &lt;code&gt;sm_87&lt;/code&gt; for Orin&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stage 2&lt;/strong&gt;: Extract Swagger UI static assets from the x86 swagger-ui image (only static JS/CSS, no binaries)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stage 3&lt;/strong&gt;: Assemble the runtime — pre-install CUDA packages from the Jetson AI Lab index, install remaining Python dependencies with constraints protecting CUDA packages, apply compatibility shims&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Beyond Code: Testing Containers from Inside VS Code
&lt;/h2&gt;

&lt;p&gt;One of the most powerful aspects of working with an AI coding agent in VS Code is that the agent is not limited to writing code. It has access to the terminal, which means it can build Docker images, run containers, generate test data, hit HTTP endpoints, inspect logs, and tear down environments — all within the same conversation.&lt;/p&gt;

&lt;p&gt;This is a fundamentally different paradigm from traditional code generation. The agent doesn't hand you a Dockerfile and say "try building this." It builds the image itself, watches for errors, diagnoses failures, applies fixes, and rebuilds — iteratively, in real time, on the actual target hardware.&lt;/p&gt;

&lt;p&gt;During our session, the agent executed commands like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the image on the Jetson&lt;/span&gt;
docker build &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile.jetson &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-t&lt;/span&gt; whisper-asr-webservice-jetson:jp6.1-cu12.6-py3.10 &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# Start a container with a specific ASR engine&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; fw-test &lt;span class="nt"&gt;--runtime&lt;/span&gt; nvidia &lt;span class="nt"&gt;-p&lt;/span&gt; 9000:9000 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ASR_ENGINE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;faster_whisper &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ASR_MODEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tiny &lt;span class="se"&gt;\&lt;/span&gt;
  whisper-asr-webservice-jetson:latest

&lt;span class="c"&gt;# Test the endpoint with a real audio file&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="s2"&gt;"http://localhost:9000/asr?task=transcribe&amp;amp;language=en&amp;amp;output=json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"accept: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-F&lt;/span&gt; &lt;span class="s2"&gt;"audio_file=@/tmp/test_speech.wav"&lt;/span&gt;

&lt;span class="c"&gt;# Inspect CUDA availability inside the container&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; nvidia whisper-asr-webservice-jetson:latest &lt;span class="se"&gt;\&lt;/span&gt;
  python3 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a build failed — and across 15+ iterations, many did — the agent read the error output, identified the root cause, modified the Dockerfile, and rebuilt. When a runtime crash occurred, it inspected the Python traceback, traced the issue to a specific library version incompatibility, and created a monkey-patch shim. The entire feedback loop happened inside the VS Code terminal, with the agent operating as both developer and QA engineer simultaneously.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Testing Strategy: Three Engines, Generated Speech, Real Validation
&lt;/h2&gt;

&lt;p&gt;Whisper-asr-webservice supports three different ASR backends, each with different runtime dependencies, model loading paths, and GPU code paths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;faster_whisper&lt;/strong&gt; — Uses CTranslate2 for optimized inference, requires a CUDA-compiled CTranslate2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;openai_whisper&lt;/strong&gt; — The original OpenAI implementation, uses PyTorch directly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;whisperx&lt;/strong&gt; — Extends Whisper with word-level timestamps and speaker diarization via pyannote.audio, requires torchaudio and the HuggingFace pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All three needed to work. A container that only runs one engine is only one-third of a solution.&lt;/p&gt;

&lt;p&gt;The testing strategy evolved through an important self-correction. The initial approach was to generate a simple test audio file — a sine wave tone — and POST it to the ASR endpoint. This produced a "successful" HTTP 200 response, but the transcription result was empty or garbage because there was no actual speech in the audio. The test was passing but not actually validating anything meaningful.&lt;/p&gt;

&lt;p&gt;The agent recognized this limitation and pivoted: instead of a synthetic tone, it used &lt;code&gt;espeak-ng&lt;/code&gt; (a text-to-speech engine available on the system) to generate a WAV file containing actual spoken English:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;espeak-ng &lt;span class="nt"&gt;-w&lt;/span&gt; /tmp/test_speech.wav &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"The quick brown fox jumps over the lazy dog"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This produced a test file with clear, recognizable speech. When the ASR engines transcribed it, the response contained actual words that could be verified — not just an HTTP status code, but semantic validation that the speech recognition pipeline was functioning end to end, from audio input through GPU-accelerated model inference to text output.&lt;/p&gt;
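&lt;p&gt;That shift from status-code checks to semantic checks fits in a few lines. A hypothetical validator (the threshold and normalization are my own choices, not taken from the session):&lt;/p&gt;

```python
# Sketch: accept a transcription only if most of the expected spoken
# words actually appear in it, rather than trusting an HTTP 200.
def transcript_matches(expected, transcript, min_ratio=0.8):
    want = set(expected.lower().split())
    got = set(transcript.lower().replace(".", "").replace(",", "").split())
    return len(want.intersection(got)) / len(want) >= min_ratio
```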

&lt;p&gt;Each engine was tested individually by spinning up a fresh container with the appropriate &lt;code&gt;ASR_ENGINE&lt;/code&gt; environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test faster_whisper&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; fw-test &lt;span class="nt"&gt;--runtime&lt;/span&gt; nvidia &lt;span class="nt"&gt;-p&lt;/span&gt; 9000:9000 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ASR_ENGINE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;faster_whisper &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ASR_MODEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tiny &lt;span class="se"&gt;\&lt;/span&gt;
  whisper-asr-webservice-jetson:latest
&lt;span class="c"&gt;# Wait for model download + startup, then curl, then teardown&lt;/span&gt;

&lt;span class="c"&gt;# Test openai_whisper&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; ow-test &lt;span class="nt"&gt;--runtime&lt;/span&gt; nvidia &lt;span class="nt"&gt;-p&lt;/span&gt; 9000:9000 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ASR_ENGINE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;openai_whisper &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ASR_MODEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tiny &lt;span class="se"&gt;\&lt;/span&gt;
  whisper-asr-webservice-jetson:latest

&lt;span class="c"&gt;# Test whisperx&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; wx-test &lt;span class="nt"&gt;--runtime&lt;/span&gt; nvidia &lt;span class="nt"&gt;-p&lt;/span&gt; 9000:9000 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ASR_ENGINE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;whisperx &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ASR_MODEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tiny &lt;span class="se"&gt;\&lt;/span&gt;
  whisper-asr-webservice-jetson:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All three returned successful transcriptions with recognizable text. The whisperx engine additionally returned word-level timestamps, confirming that the torchaudio compatibility shim was working correctly and pyannote's audio processing pipeline was intact.&lt;/p&gt;

&lt;p&gt;This test cycle was repeated three times across the session — after the initial build, after the torch.load compatibility fix, and after the huggingface_hub API fix — ensuring that each patch didn't break previously working functionality. The agent managed all of this autonomously: spinning up containers, waiting for startup, sending requests, validating responses, tearing down containers, and reporting results.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compatibility Shims: When Libraries Disagree
&lt;/h2&gt;

&lt;p&gt;Three runtime compatibility issues surfaced during testing, each requiring a different kind of fix. Rather than forking upstream libraries or pinning to ancient versions, the agent created a unified compatibility shim — a single Python file loaded at interpreter startup via a &lt;code&gt;.pth&lt;/code&gt; file in &lt;code&gt;site-packages&lt;/code&gt;. This approach is surgical: it patches only what's broken, at the earliest possible moment, without modifying any installed package.&lt;/p&gt;
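&lt;p&gt;The mechanism is worth seeing in miniature: Python's &lt;code&gt;site&lt;/code&gt; machinery executes &lt;code&gt;import&lt;/code&gt; lines found in &lt;code&gt;.pth&lt;/code&gt; files when it scans &lt;code&gt;site-packages&lt;/code&gt;, which is what lets a shim run before any application code. A self-contained sketch with illustrative file names (not the ones from the PR):&lt;/p&gt;

```python
import os
import site
import sys
import tempfile

d = tempfile.mkdtemp()

# The shim module: a real one would monkey-patch torchaudio et al. here.
with open(os.path.join(d, "jetson_compat.py"), "w") as f:
    f.write("APPLIED = True\n")

# The .pth hook: lines beginning with "import" are executed by site.
with open(os.path.join(d, "compat.pth"), "w") as f:
    f.write("import jetson_compat\n")

site.addsitedir(d)  # what the interpreter does for site-packages at startup
print(sys.modules["jetson_compat"].APPLIED)  # prints True
```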

&lt;p&gt;&lt;strong&gt;1. torchaudio API removal&lt;/strong&gt;: The Jetson AI Lab torchaudio builds strip out the legacy backend API — &lt;code&gt;AudioMetaData&lt;/code&gt;, &lt;code&gt;info()&lt;/code&gt;, and &lt;code&gt;list_audio_backends()&lt;/code&gt; — because the Jetson builds use a different audio backend architecture. But pyannote.audio 3.x still calls these functions. The shim implements them using the &lt;code&gt;soundfile&lt;/code&gt; library, which is available and functional on Jetson.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. torch.load weights_only default&lt;/strong&gt;: PyTorch 2.6+ changed &lt;code&gt;torch.load()&lt;/code&gt; to default to &lt;code&gt;weights_only=True&lt;/code&gt; for security. But pyannote's VAD (Voice Activity Detection) model checkpoints contain &lt;code&gt;omegaconf.ListConfig&lt;/code&gt; objects that aren't in the allowlist. The tricky part: &lt;code&gt;lightning_fabric&lt;/code&gt; passes &lt;code&gt;weights_only=None&lt;/code&gt; explicitly, which PyTorch interprets as &lt;code&gt;True&lt;/code&gt;. A simple &lt;code&gt;setdefault&lt;/code&gt; doesn't work — you have to check &lt;code&gt;if kwargs.get("weights_only") is None&lt;/code&gt; and override it. The agent discovered this subtlety by reading the actual traceback and tracing through the call chain.&lt;/p&gt;
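&lt;p&gt;The wrapper pattern can be demonstrated without PyTorch by patching a stand-in function. Everything here except the &lt;code&gt;is None&lt;/code&gt; check is illustrative scaffolding:&lt;/p&gt;

```python
import functools

def _fake_torch_load(*args, **kwargs):
    """Stand-in for torch.load that just reports what it received."""
    return kwargs.get("weights_only")

def patch_load(original):
    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        # setdefault() would be a no-op here, because callers such as
        # lightning_fabric pass weights_only=None explicitly.
        if kwargs.get("weights_only") is None:
            kwargs["weights_only"] = False  # allow full unpickling
        return original(*args, **kwargs)
    return wrapper

load = patch_load(_fake_torch_load)
print(load(weights_only=None))  # prints False, not None
```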

&lt;p&gt;&lt;strong&gt;3. huggingface_hub API deprecation&lt;/strong&gt;: &lt;code&gt;huggingface_hub&lt;/code&gt; 1.5.0 removed the deprecated &lt;code&gt;use_auth_token&lt;/code&gt; parameter entirely, but &lt;code&gt;pyannote.audio&lt;/code&gt; 3.4.0 and &lt;code&gt;whisperx&lt;/code&gt; still pass &lt;code&gt;use_auth_token=&lt;/code&gt; instead of &lt;code&gt;token=&lt;/code&gt;. The fix required patching not just &lt;code&gt;huggingface_hub.hf_hub_download&lt;/code&gt; in the top-level namespace, but also in submodules like &lt;code&gt;huggingface_hub.file_download&lt;/code&gt; — because pyannote does &lt;code&gt;from huggingface_hub import hf_hub_download&lt;/code&gt; at module level, which copies the function reference before any top-level patch can take effect. The shim pre-imports and patches the submodules so that when pyannote's &lt;code&gt;from&lt;/code&gt; import runs, it picks up the already-patched version.&lt;/p&gt;
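&lt;p&gt;The binding-copy problem reproduces cleanly with throwaway modules, which also shows why patching both the package and its submodule works. All names below are illustrative stand-ins for &lt;code&gt;huggingface_hub&lt;/code&gt; and its callers:&lt;/p&gt;

```python
import sys
import types

# A fake package whose submodule owns the "real" function, which has
# dropped the deprecated parameter entirely.
def hf_hub_download(**kwargs):
    if "use_auth_token" in kwargs:
        raise TypeError("use_auth_token was removed")
    return kwargs.get("token")

sub = types.ModuleType("fakehub.file_download")
sub.hf_hub_download = hf_hub_download
pkg = types.ModuleType("fakehub")
pkg.file_download = sub
pkg.hf_hub_download = hf_hub_download
sys.modules["fakehub"] = pkg
sys.modules["fakehub.file_download"] = sub

# The shim: translate the old parameter, and patch the function on BOTH
# the package and the submodule BEFORE any downstream 'from' import
# copies the unpatched reference.
def _compat(**kwargs):
    if "use_auth_token" in kwargs:
        kwargs["token"] = kwargs.pop("use_auth_token")
    return hf_hub_download(**kwargs)

pkg.hf_hub_download = _compat
sub.hf_hub_download = _compat

from fakehub import hf_hub_download as downstream  # what pyannote does
print(downstream(use_auth_token="secret"))  # prints secret
```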

&lt;p&gt;Each of these fixes emerged from the agent observing a runtime failure, diagnosing the root cause by inspecting library source code inside the running container, and implementing the minimal patch needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Working Code to Pull Request — By Prompting
&lt;/h2&gt;

&lt;p&gt;With the container built, tested, and verified across all three engines, the next step was contributing back upstream. The project had open issues requesting exactly this capability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Issue #359&lt;/strong&gt;: "Add Arm support for GPU container"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issue #54&lt;/strong&gt;: "Possible to run on Jetson Nano?"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issue #133&lt;/strong&gt;: "Is it possible to get this on Jetson using the GPU?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Forking the repository, creating a feature branch, committing the changes, and opening a pull request was accomplished entirely through prompts:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Fork the repo and make a pull request."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent forked &lt;code&gt;ahmetoner/whisper-asr-webservice&lt;/code&gt; to &lt;code&gt;toolboc/whisper-asr-webservice&lt;/code&gt;, created a &lt;code&gt;feat/jetson-gpu-support&lt;/code&gt; branch, committed the Dockerfile and compose file, pushed to the fork, and opened &lt;a href="https://github.com/ahmetoner/whisper-asr-webservice/pull/364" rel="noopener noreferrer"&gt;PR #364&lt;/a&gt; with a detailed description including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A summary of the target platform and what's included&lt;/li&gt;
&lt;li&gt;A table of key technical decisions with rationale for each&lt;/li&gt;
&lt;li&gt;Verification results from actual hardware testing&lt;/li&gt;
&lt;li&gt;Build and run instructions&lt;/li&gt;
&lt;li&gt;A link to the pre-built Docker Hub image&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When additional fixes were made (cuBLAS conflict, torch.load, huggingface_hub), each was committed with a descriptive message and pushed to update the PR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;937e2a8 feat: add NVIDIA Jetson GPU support (Dockerfile + compose)
290ffcd fix: remove conflicting pip cuBLAS to fix CUBLAS_STATUS_ALLOC_FAILED
d7096fa fix: patch torch.load for whisperx/pyannote VAD compatibility
19ef291 chore: add container_name to compose file
a6e731b fix: patch huggingface_hub use_auth_token -&amp;gt; token for HF_TOKEN support
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Later, when asked to reference the relevant upstream issues, the agent searched the issue tracker, identified the three related issues, and updated the PR description with &lt;code&gt;Closes #359&lt;/code&gt; and &lt;code&gt;Relates to #54, #133&lt;/code&gt;. The resulting PR is more thorough than most manually created pull requests — every technical decision is documented, every compatibility workaround is explained, and the testing methodology is clear.&lt;/p&gt;

&lt;p&gt;The image was also pushed to Docker Hub as &lt;code&gt;toolboc/whisper-asr-webservice-jetson:jp6.1-cu12.6-py3.10&lt;/code&gt;, making it immediately available to anyone with a Jetson device — no build required.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Circle: GPUs Building Software for GPUs
&lt;/h2&gt;

&lt;p&gt;There's a satisfying symmetry in this story. The AI coding agent that designed, built, debugged, and tested this container is itself powered by GPU-accelerated inference. The end product — a Docker container running Whisper on Jetson's GPU — is GPU-accelerated software. And the problems we solved — CUDA compute capabilities, cuBLAS library conflicts, GPU-specific wheel selection — are fundamentally GPU problems.&lt;/p&gt;

&lt;p&gt;This is what "GPU-assisted development comes full circle" looks like in practice:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A GPU-powered agent&lt;/strong&gt; (the LLM) understands the nuances of CUDA architecture, library ABI compatibility, and platform-specific packaging&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It produces GPU-accelerated software&lt;/strong&gt; (the Jetson Whisper container) that exploits the target hardware's full capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It validates the result on actual GPU hardware&lt;/strong&gt; by running containers, executing CUDA operations, and verifying inference output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It contributes the solution back&lt;/strong&gt; to the open-source community through a well-documented pull request&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The agent didn't just write a Dockerfile. It navigated a maze of platform-specific incompatibilities that have historically been the domain of specialized embedded engineers with deep knowledge of the NVIDIA toolchain. It did this while simultaneously managing Docker builds, generating test data, running HTTP integration tests, managing git workflows, and producing documentation — tasks that span the full spectrum from systems engineering to technical writing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Unlocks
&lt;/h2&gt;

&lt;p&gt;The Jetson platform isn't just for Whisper. The same challenges we solved here — CUDA compilation, pip index conflicts, Poetry resolver workarounds, torchaudio compatibility — apply to virtually every PyTorch-based project that someone wants to run on edge hardware. The pattern is repeatable:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Identify which dependencies need platform-specific builds&lt;/li&gt;
&lt;li&gt;Source or compile CUDA-enabled versions for the target architecture&lt;/li&gt;
&lt;li&gt;Constrain the package manager to prevent overwriting with CPU-only alternatives&lt;/li&gt;
&lt;li&gt;Shim any API incompatibilities between library versions&lt;/li&gt;
&lt;li&gt;Test on actual hardware with meaningful validation data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An AI coding agent that understands this pattern can port other projects to Jetson — or to any constrained platform — with dramatically less effort than manual development. The developer's role shifts from "figure out why &lt;code&gt;pip install torch&lt;/code&gt; gives me a CPU-only wheel on aarch64" to "build this for Jetson" and validating the result.&lt;/p&gt;

&lt;p&gt;For the Plex and home media server community specifically, this means a standalone appliance that generates subtitles automatically for any content in any language. Drop a Jetson Orin Nano next to your NAS, run &lt;code&gt;docker compose -f docker-compose.jetson.yml up&lt;/code&gt;, point Bazarr at &lt;code&gt;http://jetson:9000&lt;/code&gt;, and every new movie or episode that arrives gets transcribed and subtitled without human intervention. All on a device that draws less power than a light bulb.&lt;/p&gt;

&lt;p&gt;That's the kind of practical, real-world automation that becomes possible when AI-assisted development makes it trivial to port sophisticated GPU-accelerated software to the hardware that can actually run it where it's needed.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post documents work performed on an NVIDIA Jetson Orin running JetPack 6.2.2 (L4T R36.5.0, CUDA 12.6) using VS Code with GitHub Copilot powered by Claude Opus 4.6. The resulting pull request is &lt;a href="https://github.com/ahmetoner/whisper-asr-webservice/pull/364" rel="noopener noreferrer"&gt;#364&lt;/a&gt; on the whisper-asr-webservice repository. A pre-built container image is available on Docker Hub at &lt;code&gt;toolboc/whisper-asr-webservice-jetson:jp6.1-cu12.6-py3.10&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>jetson</category>
      <category>nvidia</category>
      <category>docker</category>
      <category>whisper</category>
    </item>
    <item>
      <title>#JulyOT - Building the Intelligent Edge with Jetson and Azure ft. the NVIDIA Embedded team</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Mon, 13 Jul 2020 20:06:41 +0000</pubDate>
      <link>https://forem.com/azure/julyot-building-the-intelligent-edge-with-jetson-and-azure-ft-the-nvidia-embedded-team-12ni</link>
      <guid>https://forem.com/azure/julyot-building-the-intelligent-edge-with-jetson-and-azure-ft-the-nvidia-embedded-team-12ni</guid>
      <description>&lt;p&gt;Erik St. Martin and Paul DeCarlo from the Microsoft Cloud Advocacy team meet with team members from NVIDA to discuss “Building the Intelligent Edge with Jetson and Azure”.  We will be discussing a number of topics related to this theme and will supplement with questions from attendees on the stream.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NVIDIA team members include:&lt;/strong&gt;&lt;br&gt;
 Chintan Shah, Product Manager&lt;br&gt;
 Tenika Versey, Marketing Lead for AI&lt;br&gt;
 Ryan Huff, Business Development for AI | Edge | IoT&lt;br&gt;
 Jaime Flores, AI Developer Relations Manager&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Intelligent Video Analytics Github Repository:  &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure"&gt;https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;DeepStream Developer Guide:
&lt;a href="http://aka.ms/deepstreamdevguide"&gt;http://aka.ms/deepstreamdevguide&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Jetson Community Projects: &lt;a href="https://developer.nvidia.com/embedded/community/jetson-projects"&gt;https://developer.nvidia.com/embedded/community/jetson-projects&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Jetson Community Resources: &lt;a href="https://developer.nvidia.com/embedded/community/resources"&gt;https://developer.nvidia.com/embedded/community/resources&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Jetson on Azure: 
&lt;a href="http://aka.ms/jetson-on-azure"&gt;http://aka.ms/jetson-on-azure&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>nvidia</category>
      <category>iot</category>
    </item>
    <item>
      <title>#JulyOT - Visualizing Object Detection Data in Near Real-Time with PowerBI</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Fri, 10 Jul 2020 14:24:11 +0000</pubDate>
      <link>https://forem.com/azure/julyot-visualizing-object-detection-data-in-near-real-time-with-powerbi-1ecp</link>
      <guid>https://forem.com/azure/julyot-visualizing-object-detection-data-in-near-real-time-with-powerbi-1ecp</guid>
      <description>&lt;p&gt;Erik and Paul forward object detection results from an Azure Stream Analytics Job into a PowerBI Dataset.  They then develop a custom report for visualizing data in a live PowerBI Dashboard.&lt;/p&gt;

&lt;p&gt;Part 5 of a 5 part series created for #JulyOT - more details @ &lt;a href="http://julyot.com"&gt;http://julyot.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full steps to reproduce this project can be found on github @ &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure"&gt;https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information on the services employed, check out:&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/stream-analytics/?WT.mc_id=julyot-devto-cxa"&gt;Stream Analytics Documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/power-bi/?WT.mc_id=julyot-devto-cxa"&gt;PowerBI Documentation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>nvidia</category>
      <category>iot</category>
    </item>
    <item>
      <title>#JulyOT - Consuming and Modeling Object Detection Data with Azure Time Series Insights</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Thu, 09 Jul 2020 17:04:17 +0000</pubDate>
      <link>https://forem.com/azure/consuming-and-modeling-object-detection-data-with-azure-time-series-insights-lm</link>
      <guid>https://forem.com/azure/consuming-and-modeling-object-detection-data-with-azure-time-series-insights-lm</guid>
      <description>&lt;p&gt;Erik and Paul forward object detections to Time Series Insights from  an Azure Stream Analytics on the Edge Job.  We showcase the whole process of setting up a TSI event source to modeling and exporting of the data within Time Series Insights.&lt;/p&gt;

&lt;p&gt;Part 4 of a 5 part series created for #JulyOT - more details @ &lt;a href="http://julyot.com"&gt;http://julyot.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full steps to reproduce this project can be found on github @ &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure"&gt;https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information on the services employed, check out:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge?WT.mc_id=julyot-devto-cxa"&gt;Stream Analytics Documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/time-series-insights/?WT.mc_id=julyot-devto-cxa"&gt;Time Series Insights Documentation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>nvidia</category>
      <category>iot</category>
    </item>
    <item>
      <title>#JulyOT - Develop and deploy Custom Object Detection Models with IoT Edge DeepStream SDK Module</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Wed, 08 Jul 2020 14:20:00 +0000</pubDate>
      <link>https://forem.com/azure/julyot-develop-and-deploy-custom-object-detection-models-with-iot-edge-deepsteam-sdk-module-450</link>
      <guid>https://forem.com/azure/julyot-develop-and-deploy-custom-object-detection-models-with-iot-edge-deepsteam-sdk-module-450</guid>
      <description>&lt;p&gt;Erik and Paul demonstrate how to leverage a model developed with CustomVisionAI for use with the IoT Edge DeepSteam SDK Module and then explore how to employ a Custom Yolo Parser for use with YoloV3 and YoloV3 Tiny.&lt;/p&gt;

&lt;p&gt;Part 3 of a 5 part series created for #JulyOT - more details @ &lt;a href="http://julyot.com"&gt;http://julyot.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full steps to reproduce this project can be found on github @ &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure"&gt;https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information on the services employed, check out:&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/?WT.mc_id=julyot-devto-cxa"&gt;Cognitive Services Documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/?WT.mc_id=julyot-devto-cxa"&gt;Custom Vision Service Docs:&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/azure/introduction-to-the-azure-iot-edge-camera-tagging-module-di8"&gt;CameraTaggingModule&lt;/a&gt;&lt;br&gt;
&lt;a href="https://pjreddie.com/darknet/yolo"&gt;Yolo Object Detection&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>nvidia</category>
      <category>iot</category>
    </item>
    <item>
      <title>#JulyOT - Configure and Deploy "Intelligent Video Analytics" to IoT Edge Runtime on NVIDIA Jetson</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Tue, 07 Jul 2020 15:31:43 +0000</pubDate>
      <link>https://forem.com/azure/julyot-configure-and-deploy-intelligent-video-analytics-to-iot-edge-runtime-on-nvidia-jetson-1nc6</link>
      <guid>https://forem.com/azure/julyot-configure-and-deploy-intelligent-video-analytics-to-iot-edge-runtime-on-nvidia-jetson-1nc6</guid>
      <description>&lt;p&gt;Erik and Paul create an Azure IoT Hub and backing storage account in Microsoft Azure, then instrument a Jetson Nano with IoT Edge and configure it with a deployment that uses these services to accommodate an intelligent video analytics solution.&lt;/p&gt;

&lt;p&gt;Part 2 of a 5 part series created for #JulyOT - more details @ &lt;a href="http://julyot.com"&gt;http://julyot.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full steps to reproduce this project can be found on github @ &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure"&gt;https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information on the services employed, check out:&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/?WT.mc_id=julyot-devto-cxa"&gt;IoT Edge Documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/iot-hub/?WT.mc_id=julyot-devto-cxa"&gt;IoT Hub Documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge?WT.mc_id=julyot-devto-cxa"&gt;Stream Analytics Documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-store-data-blob?WT.mc_id=julyot-devto-cxa"&gt;IoT Blob Storage Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Camera used in demo: Foscam FI9821P, though any RTSP based camera should suffice.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>nvidia</category>
      <category>iot</category>
    </item>
    <item>
      <title>#JulyOT - Getting Started with NVIDIA Jetson Nano: Object Detection</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Mon, 06 Jul 2020 19:05:15 +0000</pubDate>
      <link>https://forem.com/azure/julyot-getting-started-with-nvidia-jetson-nano-object-detection-3moe</link>
      <guid>https://forem.com/azure/julyot-getting-started-with-nvidia-jetson-nano-object-detection-3moe</guid>
      <description>&lt;p&gt;Erik and Paul configure a Jetson Nano device for use with DeepStream SDK using samples provided from NVIDIA.&lt;/p&gt;

&lt;p&gt;Part 1 of a 5 part series created for #JulyOT - more details @ &lt;a href="http://julyot.com"&gt;http://julyot.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full steps to reproduce this project can be found on github @ &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure"&gt;https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information on the services employed, check out:&lt;br&gt;
&lt;a href="http://aka.ms/deepstreamdevguide"&gt;http://aka.ms/deepstreamdevguide&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/leandromoreira/digital_video_introduction"&gt;https://github.com/leandromoreira/digital_video_introduction&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/leandromoreira/ffmpeg-libav-tutorial"&gt;https://github.com/leandromoreira/ffmpeg-libav-tutorial&lt;/a&gt;&lt;br&gt;
&lt;a href="http://dranger.com/ffmpeg"&gt;http://dranger.com/ffmpeg&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>nvidia</category>
      <category>iot</category>
    </item>
    <item>
      <title>#JulyOT - Artificial Intelligence at the Edge</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Thu, 02 Jul 2020 16:18:55 +0000</pubDate>
      <link>https://forem.com/azure/julyot-artificial-intelligence-at-the-edge-4ogn</link>
      <guid>https://forem.com/azure/julyot-artificial-intelligence-at-the-edge-4ogn</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eGHOMj3Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1gi4b90vwykgie8t2uk9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eGHOMj3Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1gi4b90vwykgie8t2uk9.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;#JulyOT is here!  We are calling on all makers, students, tinkerers, hardware hackers, and professional IoT developers to bring their creativity to an IoT-focused project in the month of July.  We have summarized the goals of #JulyOT and how you can get involved in this &lt;a href="https://dev.to/azure/julyot-a-month-dedicated-to-learning-and-building-iot-projects-44c0"&gt;previous post on dev.to&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;Today's #JulyOT post covers some of the updates that you can expect for the week of July 6 - 10.  Our theme for this week is "Artificial Intelligence at the Edge" and we have partnered with our friends at NVIDIA to bring you &lt;a href="https://www.youtube.com/playlist?list=PLzgEG9tLG-1QLc-DPPABoW1YWFMPNQl4t"&gt;over 8 hours of content&lt;/a&gt; that will teach almost EVERYTHING that you need to know to develop AIOT (Artificial Intelligence of Things) solutions.  This will culminate in a livestream event with developers from NVIDIA on July 10 at 1 PM CST on the &lt;a href="https://www.twitch.tv/microsoftdeveloper"&gt;Microsoft Developer Twitch Channel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Throughout the week, we will roll-out individual posts across social media, but you can also get ahead of the curve by heading to the official #JulyOT content repository at &lt;a href="http://julyot.com"&gt;http://julyot.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, what's in the repository this week and what topics will be covered on the livestream?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/azure/julyot-intelligent-home-security-with-nvidia-jetson-440m"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UZLeZ_W9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zt5wlngg66r7jdh69grx.PNG" alt="Intelligent Video Analytics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The majority of content will focus on a &lt;a href="https://www.youtube.com/playlist?list=PLzgEG9tLG-1QLc-DPPABoW1YWFMPNQl4t"&gt;5 part video series&lt;/a&gt; that was developed to accompany an open source project titled &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure"&gt;"Intelligent Video Analytics with NVIDIA Jetson and Microsoft Azure"&lt;/a&gt;.  In a nutshell, this project will teach you how to develop a custom end-to-end Intelligent Video Analytics pipeline with multiple video sources, and includes steps on how to train your own object detection model to detect whatever you fancy! The modules are listed below and will give you a quick overview on what you can expect to learn in each section. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/blob/master/docs/Module%201%20-%20Introduction%20to%20NVIDIA%20DeepStream.md"&gt;Module 1 - Introduction to NVIDIA DeepStream&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/blob/master/docs/Module%202%20-%20Configure%20and%20Deploy%20Intelligent%20Video%20Analytics%20to%20IoT%20Edge%20Runtime%20on%20NVIDIA%20Jetson.md"&gt;Module 2 - Configure and Deploy "Intelligent Video Analytics" to IoT Edge Runtime on NVIDIA Jetson&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/blob/master/docs/Module%203%20-%20Develop%20and%20deploy%20Custom%20Object%20Detection%20Models%20with%20IoT%20Edge%20DeepStream%20SDK%20Module.md"&gt;Module 3 - Develop and deploy Custom Object Detection Models with IoT Edge DeepStream SDK Module&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/blob/master/docs/Module%204%20-%20Filtering%20Telemetry%20with%20Azure%20Stream%20Analytics%20at%20the%20Edge%20and%20Modeling%20with%20Azure%20Time%20Series%20Insights.md"&gt;Module 4 - Filtering Telemetry with Azure Stream Analytics at the Edge and Modeling with Azure Time Series Insights&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/blob/master/docs/Module%205%20-%20Visualizing%20Object%20Detection%20Data%20in%20Near%20Real-Time%20with%20PowerBI.md"&gt;Module 5 - Visualizing Object Detection Data in Near Real-Time with PowerBI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z2kMXitm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s074iogwwt8i6lptzbco.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z2kMXitm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s074iogwwt8i6lptzbco.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each of these modules is accompanied by a livestream that was recorded with &lt;a href="https://www.twitch.tv/erikdotdev"&gt;@ErikDotDev on Twitch&lt;/a&gt;.  This is interesting because Erik was brand new to the NVIDIA embedded platform and to applied Artificial Intelligence.  We have catalogued his journey in these recordings and hope they can serve to inspire others to follow a similar path!  That's right: in ~8 hours of video content, we feel that you too can become an expert in building custom object detection pipelines for use in a wide variety of scenarios.  &lt;/p&gt;

&lt;p&gt;Now, what better way to cap off this journey than by putting your questions live to the NVIDIA developers who created the platform and tools that make all of this possible!  On July 10 at 1 PM CST, we will be joined by a group of developers who will be on hand to answer your questions about anything related to NVIDIA embedded devices, the DeepStream SDK, and the various AI workloads supported by their hardware.  Be sure to mark your calendar for this event and tune in at the appropriate time at the &lt;a href="https://www.twitch.tv/microsoftdeveloper"&gt;Microsoft Developer Twitch Channel&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;If you have any burning questions for the NVIDIA team, please leave them in the comments below.  We will review them and just might ask them on the air!  Here are a couple example questions that we plan to cover on the stream:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;During the development of our custom model, the topic of acceleration hardware came up, and we identified GPUs, FPGAs, and ASICs as mechanisms for inference acceleration.&lt;br&gt;&lt;br&gt;
These all work on different principles, but we want to hear it from NVIDIA: how does a GPU actually accelerate AI workloads?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We mostly looked at accelerating computer vision workloads; what other types of AI inferencing can be done with NVIDIA hardware?  We understand that audio features can also benefit; do you have any examples of this?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As always, thank you for reading and we hope to see you there on July 10 for the livestream event with NVIDIA developers on the &lt;a href="https://www.twitch.tv/microsoftdeveloper"&gt;Microsoft Developer Twitch Channel&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>nvidia</category>
      <category>embedded</category>
    </item>
    <item>
      <title>#JulyOT - Intelligent Home Security with NVIDIA Jetson</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Wed, 01 Jul 2020 16:51:07 +0000</pubDate>
      <link>https://forem.com/azure/julyot-intelligent-home-security-with-nvidia-jetson-440m</link>
      <guid>https://forem.com/azure/julyot-intelligent-home-security-with-nvidia-jetson-440m</guid>
      <description>&lt;p&gt;Paul DeCarlo demonstrates an Intelligent Home Security System built using the Jetson Xavier NX with Azure IoT Edge and the DeepStream SDK module to run a custom object detection model built with Custom Vision.AI.  Object detection telemetry is pushed into Azure Time Series Insights and PowerBI to allow for analyzing these detections over time and in near real-time.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Github repo to reproduce this project: &lt;br&gt;
&lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure"&gt;https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This project was created as part of #JulyOT, more details are available at: &lt;a href="http://aka.ms/julyiot"&gt;http://aka.ms/julyiot&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information about the services used in this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jetson Xavier NX:  &lt;a href="https://nvda.ws/3bqcNEx"&gt;https://nvda.ws/3bqcNEx&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Azure IoT Edge: &lt;a href="http://aka.ms/iotedgestart"&gt;http://aka.ms/iotedgestart&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;NVIDIA DeepStream SDK Module: &lt;a href="http://aka.ms/deepstreamsdkmodule"&gt;http://aka.ms/deepstreamsdkmodule&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Azure Time Series Insights: &lt;a href="https://aka.ms/timeseriesinsightsstart"&gt;https://aka.ms/timeseriesinsightsstart&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Power BI: &lt;a href="https://aka.ms/powerbistart"&gt;https://aka.ms/powerbistart&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>nvidia</category>
      <category>iot</category>
    </item>
    <item>
      <title>#JulyOT - A month dedicated to learning and building IoT Projects</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Wed, 01 Jul 2020 14:28:48 +0000</pubDate>
      <link>https://forem.com/azure/julyot-a-month-dedicated-to-learning-and-building-iot-projects-44c0</link>
      <guid>https://forem.com/azure/julyot-a-month-dedicated-to-learning-and-building-iot-projects-44c0</guid>
      <description>&lt;p&gt;Attention all makers, students, tinkerers, hardware hackers, and professional IoT developers!  The month of #JulyOT is here!  To celebrate, we have curated a &lt;a href="http://aka.ms/julyot"&gt;collection of  content&lt;/a&gt; - blog posts, hands-on-labs, and videos designed to demonstrate and teach developers how to build projects with Azure Internet of Things (IoT) services.   This content ranges from video demonstrations of real-world solutions, a &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/"&gt;5-part series on developing an Intelligent Video Analytics service&lt;/a&gt;, an &lt;a href="https://github.com/jimbobbennett/smart-garden-ornaments"&gt;IoT bird-box project for families&lt;/a&gt;,  &lt;a href="https://www.youtube.com/watch?v=ayIrNB8gh68"&gt;Raspberry Pi air quality monitor backed by Azure IoT Central&lt;/a&gt;, and a &lt;a href="https://docs.microsoft.com/en-us/learn/certifications/exams/az-220"&gt;self-guided training series designed to help study for the Azure 220 IoT Developer certification&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;All of this content will be made available on this &lt;a href="http://aka.ms/julyot"&gt;IoT Tech Community page&lt;/a&gt;.  It is highly recommended that you bookmark &lt;a href="http://aka.ms/julyot"&gt;this page&lt;/a&gt; as we will update it with new content each week during the month of #JulyOT!  In this post, we would like to give a rundown on what you can expect from #JulyOT and how you can contribute to the fun!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is #JulyOT?
&lt;/h2&gt;

&lt;p&gt;JulyOT was born out of a desire to celebrate all things IoT in an accessible way, enabling developers around the world to build innovative projects with connected devices.  Do you have an idea that can be brought to life by connecting a device to the internet?  Are you interested in exploring &lt;a href="https://microsoft.github.io/ai-at-edge/"&gt;Artificial Intelligence at the Edge&lt;/a&gt; or building something innovative with sensors and hardware?  We want to hear about it!  We will be monitoring the #JulyOT hashtag to actively promote and encourage projects built by the community throughout the month of July.  All you need to do is have an idea for an IoT project, then tweet it out and keep us updated on your progress by including the #JulyOT hashtag.  We’re open to all projects that involve a thing and the internet, so don’t be afraid to share; we want to inspire everyone to build something cool in July!&lt;/p&gt;

&lt;p&gt;Here are a few examples:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/jimbobbennett/status/1262914463335735296"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ijjS2uwd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i6pz2c6smodhav24oh4n.jpg" alt="Alt Text" width="740" height="486"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/pjdecarlo/status/1272639259384647683"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MMPEGTzx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b2cpz7qmh6u25zkvn927.jpg" alt="Alt Text" width="733" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  JulyOT Themes
&lt;/h2&gt;

&lt;p&gt;During each week of #JulyOT, we will have a focused theme that will combine related pieces. Each of these themes will be accompanied by a project built by the &lt;a href="https://developer.microsoft.com/en-us/advocates/"&gt;IoT Cloud Advocacy team at Microsoft&lt;/a&gt; that you can build yourself or remix into something brand new!  You don’t have to follow a theme to be a part of #JulyOT, but it may be of interest if you have an idea that falls into one of these topics.&lt;/p&gt;

&lt;h4&gt;
  
  
  July 1 – 3 : #JulyOT Content Kickoff
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HVgdmCA3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qjxk0ji24ja51vdfw5cq.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HVgdmCA3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qjxk0ji24ja51vdfw5cq.PNG" alt="Alt Text" width="638" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first week will begin with primer content intended to introduce the concept of IoT to interested contributors.  We will also “tease” some of the projects that will be released later in the month.  This is the time for contributors to begin thinking about what they wish to create, and a great way to learn about real-world applications in the realm of IoT.  We highly suggest authoring your tweets during the Content Kickoff period, and we’ll be in touch throughout the month to check in on your project’s progress!  &lt;/p&gt;

&lt;h4&gt;
  
  
  July 6 – 10 :  Artificial Intelligence at the Edge
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZMUPAD_S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/aqjfip4hg6mumij6wzrx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZMUPAD_S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/aqjfip4hg6mumij6wzrx.jpg" alt="Alt Text" width="880" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second week will focus on &lt;a href="https://microsoft.github.io/ai-at-edge/"&gt;AI @ Edge solutions&lt;/a&gt; that make use of techniques like computer vision to produce intelligent solutions that run locally on embedded hardware.  &lt;a href="https://twitter.com/pjdecarlo"&gt;Paul DeCarlo&lt;/a&gt; has provided a themed project that consists of a &lt;a href="https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/blob/master/docs/Module%205%20-%20Visualizing%20Object%20Detection%20Data%20in%20Near%20Real-Time%20with%20PowerBI.md"&gt;5-part series on developing an Intelligent Video Analytics service&lt;/a&gt;, accompanied by &lt;a href="https://www.youtube.com/watch?v=yZz-4uOx_Js&amp;amp;list=PLzgEG9tLG-1QLc-DPPABoW1YWFMPNQl4t"&gt;recorded livestreams&lt;/a&gt; to guide you through the development process.  This content will show you how to build an end-to-end architecture for custom object detection using NVIDIA hardware with Microsoft Azure.  The week will culminate with a &lt;a href="https://www.twitch.tv/microsoftdeveloper"&gt;livestream&lt;/a&gt; on July 10 that will feature developers from NVIDIA.  Come ready to ask questions as we will be answering them live on the air! &lt;/p&gt;

&lt;h4&gt;
  
  
  July 13 – 17 :  Maker Community and Academic
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tH5wPHyB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x7n20d4guii65yz0tr64.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tH5wPHyB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x7n20d4guii65yz0tr64.jpg" alt="Alt Text" width="539" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The third week is all about empowering makers and students to build IoT projects, particularly with a focus on teaching others.  We want to extend the subject of IoT to students and possibly family members with projects that are fun and easy to get started with.  &lt;a href="https://twitter.com/jimbobbennett"&gt;Jim Bennett&lt;/a&gt; has created a relatively inexpensive project that teaches you how to make &lt;a href="https://github.com/jimbobbennett/smart-garden-ornaments"&gt;smart garden ornaments&lt;/a&gt;.  This is a fun week-long project for those who want to take their first steps into the world of the Internet of Things (IoT) using devices that are popular with kids and tools that make programming accessible to young developers. You'll use a Raspberry Pi along with some &lt;a href="https://microbit.org/"&gt;BBC micro:bits&lt;/a&gt; and &lt;a href="https://makecode.microbit.org/"&gt;Microsoft MakeCode&lt;/a&gt;, and any garden ornaments you have to hand to build a smart neighborhood, gathering data such as temperature and soil moisture levels and displaying it in the cloud using &lt;a href="https://azure.microsoft.com/services/iot-central/?WT.mc_id=julyot-devto-jabenn"&gt;Azure IoT Central&lt;/a&gt;. &lt;/p&gt;

&lt;h4&gt;
  
  
  July 20 – 24 :  Microcontrollers and Embedded Hardware
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xo0Ya8z9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8wup8987ctav9ww19xse.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xo0Ya8z9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8wup8987ctav9ww19xse.jpg" alt="Alt Text" width="740" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fourth week will cover a variety of projects built around microcontrollers (for example &lt;a href="https://azure.microsoft.com/en-us/services/azure-sphere/?WT.mc_id=julyot-devto-pdecarlo"&gt;Azure Sphere&lt;/a&gt;) and embedded hardware like the Raspberry Pi.  This also includes a focus on real-time operating systems that run on these devices.  &lt;a href="https://twitter.com/dglover"&gt;Dave Glover&lt;/a&gt; has produced content that covers how to build a &lt;a href="https://www.youtube.com/watch?v=ayIrNB8gh68"&gt;Raspberry Pi air quality monitor&lt;/a&gt;, &lt;a href="https://github.com/gloveboxes/Azure-Sphere-with-Azure-RTOS-integration"&gt;Azure Sphere RTOS integration&lt;/a&gt;, and a full &lt;a href="https://github.com/gloveboxes/Azure-Sphere-Learning-Path"&gt;learning path focused on Azure Sphere development&lt;/a&gt;.  Learn not only how to build intelligent devices and apply custom sensors, but also how to connect your devices to cloud services for command + control and reporting! &lt;/p&gt;

&lt;h4&gt;
  
  
  July 27 – 31 :  Online learning and Certification
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pdvHf75_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d3z5evsq1wbdoegd66by.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pdvHf75_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d3z5evsq1wbdoegd66by.jpg" alt="Alt Text" width="740" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final week of #JulyOT will focus on free learning resources like &lt;a href="https://docs.microsoft.com/en-us/learn/certifications/exams/az-220?WT.mc_id=julyot-devto-pdecarlo"&gt;Microsoft Learn&lt;/a&gt; and other materials to help prepare for the &lt;a href="https://docs.microsoft.com/en-us/learn/certifications/exams/az-220"&gt;AZ-220 Azure IoT Developer certification&lt;/a&gt;.  &lt;a href="https://twitter.com/ThomasMaurer"&gt;Thomas Maurer&lt;/a&gt; has written an extensive &lt;a href="https://www.thomasmaurer.ch/2020/04/az-220-study-guide-microsoft-azure-iot-developer/"&gt;study guide&lt;/a&gt; that we have augmented with live videos from &lt;a href="https://aka.ms/OCPStudyGroup-IoT"&gt;partner training resources&lt;/a&gt; provided by &lt;a href="https://www.linkedin.com/in/utahitpro/"&gt;Diana Phillips&lt;/a&gt;.  We hope that many of you will take what you have learned throughout #JulyOT and leverage the experience to achieve an official designation as a certified IoT Developer!  &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It is our hope that we can inspire many of you to pursue a personal project in the IoT space during #JulyOT.  Throughout the month, we will take a deeper look at the themed content in addition to projects that are created by you!  There is never a bad time to jump into the fun of creating your own project, but we hope that you can build with us during this special time of year!  If you feel inspired, let us know with a Tweet and don’t forget the #JulyOT hashtag.  We can’t wait to see what you will build! &lt;/p&gt;

</description>
      <category>iot</category>
      <category>maker</category>
      <category>microcontroller</category>
    </item>
    <item>
      <title>Introduction to the Azure IoT Edge Camera Tagging Module</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Mon, 27 Apr 2020 13:45:32 +0000</pubDate>
      <link>https://forem.com/azure/introduction-to-the-azure-iot-edge-camera-tagging-module-di8</link>
      <guid>https://forem.com/azure/introduction-to-the-azure-iot-edge-camera-tagging-module-di8</guid>
      <description>&lt;p&gt;In this post we will introduce the &lt;a href="https://github.com/microsoft/vision-ai-developer-kit/tree/master/samples/official/camera-tagging" rel="noopener noreferrer"&gt;Azure IoT Edge Camera Tagging Module&lt;/a&gt;.  This module will deploy a service onto compatible IoT devices to capture images from live RTSP video streams.  These images can then be uploaded from the module into &lt;a href="https://azure.microsoft.com/en-us/services/cognitive-services/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Microsoft Cognitive Services&lt;/a&gt; for use in training object detection and image classification models, or into &lt;a href="https://azure.microsoft.com/services/storage/blobs/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Azure Blob Storage&lt;/a&gt; for archival purposes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;The combination of Artificial Intelligence and Internet of Things yields a paradigm commonly referred to as the "Artificial Intelligence of Things" or AIoT.  Development in this field focuses on augmenting common sensors like microphones and cameras, taking their traditional output and using it as algorithmic input processed by AI models.  This can enable abilities that include transforming spoken words into contextual requests that can be operated on by a computer program, or determining the presence and location of objects in a video feed.&lt;/p&gt;

&lt;p&gt;The development of models for use in AIoT applications often involves gathering large quantities of training data to use as inputs that an AI model can learn from.  As you might expect, the quality and relevance of the data involved in the training of these models can have a large impact on their accuracy in production.  There are a number of ways to approach this, for example using vast training sets that consist of millions or even billions of samples to produce a generic detector for a variety of unique inputs.  Another method might involve using contextually relevant data, for example taking samples at the site of deployment and using those to create a tightly tuned model that can handle a specific environment.  &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/microsoft/vision-ai-developer-kit/tree/master/samples/official/camera-tagging" rel="noopener noreferrer"&gt;Azure IoT Edge Camera Tagging Module&lt;/a&gt; can assist in both of these strategies by allowing you to capture sample data for vision based AI models at scale.  It can also allow you to capture data remotely from the site of deployment.  This enables solution builders to produce varied and precise AI models using data gathered from a module running on any &lt;a href="https://azure.microsoft.com/en-us/services/iot-edge/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;IoT Edge&lt;/a&gt; capable device.&lt;/p&gt;

&lt;p&gt;The content will assume some prior knowledge of Azure IoT Edge and it is suggested that you are familiar with &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-linux?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;installing the IoT Edge Runtime on Linux devices&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/tutorial-develop-for-linux?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;how to create a deployment for a Linux-Based IoT Edge Device&lt;/a&gt;.  If you would like to learn more about Azure IoT Edge and are looking for a good place to start, I highly recommend &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;the official IoT Edge Documentation&lt;/a&gt; and the &lt;a href="https://docs.microsoft.com/en-us/learn/browse/?term=iot%20edge&amp;amp;WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;interactive courses on IoT Edge at Microsoft Learn&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;

&lt;p&gt;Before we begin, the following is assumed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have an active &lt;a href="https://azure.microsoft.com/en-us/free/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Azure Subscription&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You have followed &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-linux?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;steps to install the Azure IoT Edge Runtime to your Linux- based ARM32, ARM64, or AMD64 target device&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You have followed the &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-register-device?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;steps to register your IoT Edge Device to an Azure IoT Hub instance&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You are familiar with the &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-deploy-modules-vscode" rel="noopener noreferrer"&gt;steps to deploy modules to an IoT Edge Device from Visual Studio Code&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You have installed the &lt;a href="https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Azure IoT Edge Extension for Visual Studio Code&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://github.com/microsoft/vision-ai-developer-kit/tree/master/samples/official/camera-tagging" rel="noopener noreferrer"&gt;Azure IoT Edge Camera Tagging Module&lt;/a&gt; is under active development on GitHub.  A quick perusal of the official README mentions that the module supports ARM32, ARM64, and AMD64 Linux platforms.  The project is distributed in a fashion that encourages building the module from source and publishing it to your own private docker registry.  This is the preferred approach if you plan to use the module in production, but it does require some additional steps and overhead.  &lt;/p&gt;

&lt;p&gt;To make test-driving a bit easier, I have published the Camera Tagging module for all three supported architectures in a &lt;a href="https://hub.docker.com/r/toolboc/camerataggingmodule/tags" rel="noopener noreferrer"&gt;public DockerHub repo&lt;/a&gt; using a &lt;a href="https://dev.to/toolboc/publish-multi-arch-docker-images-to-a-single-repo-with-docker-manifests-329n"&gt;Docker manifest&lt;/a&gt; to allow you to easily install the module, regardless of the target platform, by referencing the following image tag: &lt;code&gt;toolboc/camerataggingmodule:latest&lt;/code&gt;.  This multi-platform image will be referenced in our &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-deploy-modules-vscode" rel="noopener noreferrer"&gt;deployment.template.json&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create a new IoT Edge Solution and update the deployment.template.json
&lt;/h1&gt;

&lt;p&gt;Start by opening Visual Studio Code and use the shortcut (CTRL+SHIFT+P) and search for the &lt;strong&gt;Azure IoT Edge: New IoT Edge Solution&lt;/strong&gt; task:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5t9uv4m1olouc3z4575p.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5t9uv4m1olouc3z4575p.PNG" alt="New IoT Edge Solution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose an appropriate directory to create your project under, next you will be asked to give your solution a name.  It is suggested to name it something like &lt;strong&gt;CameraTaggingModuleExample&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;After your solution is named, you will be prompted to &lt;strong&gt;Select Module Template&lt;/strong&gt;.  Select the &lt;strong&gt;Empty Solution&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faa9v4n97j2vvkz1spx5r.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faa9v4n97j2vvkz1spx5r.PNG" alt="Empty Solution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will produce a base project with the minimal scaffolding to produce an IoT Edge deployment.&lt;/p&gt;

&lt;p&gt;Copy the content below and replace the current contents of the generated deployment.template.json:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "$schema-template": "1.0.0",
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "runtime": {
          "type": "docker",
          "settings": {
            "minDockerVersion": "v1.25",
            "loggingOptions": "",
            "registryCredentials": {
              "containerRegistry": {
                "username": "$CONTAINER_REGISTRY_USERNAME",
                "password": "$CONTAINER_REGISTRY_PASSWORD",
                "address": "$CONTAINER_REGISTRY_NAME"
              }
            }
          }
        },
        "systemModules": {
          "edgeAgent": {
            "type": "docker",
            "settings": {
              "image": "mcr.microsoft.com/azureiotedge-agent:1.0.9",
              "createOptions": {}
            }
          },
          "edgeHub": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "mcr.microsoft.com/azureiotedge-hub:1.0.9",
              "createOptions": {
                "HostConfig": {
                  "PortBindings": {
                    "5671/tcp": [
                      {
                        "HostPort": "5671"
                      }
                    ],
                    "8883/tcp": [
                      {
                        "HostPort": "8883"
                      }
                    ],
                    "443/tcp": [
                      {
                        "HostPort": "443"
                      }
                    ]
                  }
                }
              }
            }
          }
        },
        "modules": {
          "CameraTaggingModule": {
            "version": "1.0.3",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "toolboc/camerataggingmodule:latest",
              "createOptions": {
                "ExposedPorts": {
                  "3000/tcp": {},
                  "3002/tcp": {},
                  "3003/tcp": {}
                },
                "HostConfig": {
                  "PortBindings": {
                    "3000/tcp": [
                      {
                        "HostPort": "3000"
                      }
                    ],
                    "3002/tcp": [
                      {
                        "HostPort": "3002"
                      }
                    ],
                    "3003/tcp": [
                      {
                        "HostPort": "3003"
                      }
                    ]
                  }
                }
              }
            },
            "env": {
              "RTSP_IP": {
                "value": "wowzaec2demo.streamlock.net"
              },
              "RTSP_PORT": {
                "value": "554"
              },
              "RTSP_PATH": {
                "value": "vod/mp4:BigBuckBunny_115k.mov"
              },
              "REACT_APP_SERVER_PORT": {
                "value": "3003"
              },
              "REACT_APP_WEB_SOCKET_PORT": {
                "value": "3002"
              },
              "REACT_APP_LOCAL_STORAGE_MODULE_NAME": {
                "value": "azureblobstorageoniotedge"
              },
              "REACT_APP_LOCAL_STORAGE_PORT": {
                "value": "11002"
              },
              "REACT_APP_LOCAL_STORAGE_ACCOUNT_NAME": {
                "value": "$LOCAL_STORAGE_ACCOUNT_NAME"
              },
              "REACT_APP_LOCAL_STORAGE_ACCOUNT_KEY": {
                "value": "$LOCAL_STORAGE_ACCOUNT_KEY"
              }
            }
          },
          "azureblobstorageoniotedge": {
            "version": "1.2",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "mcr.microsoft.com/azure-blob-storage:latest",
              "createOptions": {
                "Env":[
                  "LOCAL_STORAGE_ACCOUNT_NAME=$LOCAL_STORAGE_ACCOUNT_NAME",
                  "LOCAL_STORAGE_ACCOUNT_KEY=$LOCAL_STORAGE_ACCOUNT_KEY"
                 ],
                 "HostConfig":{
                   "Binds": ["/data/containerdata:/blobroot"],
                   "PortBindings":{
                     "11002/tcp": [{"HostPort":"11002"}]
                   }
                 }
              }
            }
          }
        }
      }
    },
    "$edgeHub": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "routes": {
          "azureblobstorageoniotedgeToIoTHub": "FROM /messages/modules/azureblobstorageoniotedge/outputs/* INTO $upstream"
        },
        "storeAndForwardConfiguration": {
          "timeToLiveSecs": 7200
        }
      }
    },
    "azureblobstorageoniotedge":{
      "properties.desired": {
        "deviceAutoDeleteProperties": {
          "deleteOn": false,
          "retainWhileUploading": true
        },
        "deviceToCloudUploadProperties": {
          "uploadOn": true,
          "uploadOrder": "OldestFirst",
          "cloudStorageConnectionString": "$CLOUD_STORAGE_CONNECTION_STRING",
          "storageContainersForUpload": {
            "$LOCAL_STORAGE_ACCOUNT_NAME": {
              "target": "$DESTINATION_STORAGE_NAME"
            }
          },
          "deleteAfterUpload": true
        }
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will use this deployment.template.json in upcoming steps to create a deployment manifest using tooling in Visual Studio Code.  Notice that the multi-platform image (&lt;em&gt;toolboc/camerataggingmodule:latest&lt;/em&gt;) is referenced in the deployment.template.json specification.  &lt;/p&gt;

&lt;p&gt;We also configure a default RTSP stream using the &lt;a href="https://www.wowza.com/html/mobile.html" rel="noopener noreferrer"&gt;Big Buck Bunny RTSP stream from Wowza&lt;/a&gt;.  This is the most reliable publicly accessible RTSP stream on the entire internet, and is provided as an example (trust me, reliable public RTSP streams are very difficult to find).  &lt;/p&gt;

&lt;p&gt;The deployment also includes an azureblobstorageoniotedge module, which allows us to save captured images locally and replicate them to the cloud using the &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-deploy-blob?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;IoT Edge Blob Storage Module&lt;/a&gt;.  This module is useful in private networks where RTSP streams may be available, but outside internet access is not.  With this module configured, we could deploy an IoT Edge device into the environment to capture images, then retrieve it and publish the images to the cloud when outbound network access is restored to the device.  This will be covered in more detail in the next steps. &lt;/p&gt;

&lt;h1&gt;
  
  
  Configure the Blob Storage Module
&lt;/h1&gt;

&lt;p&gt;In this step, we will configure the &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-deploy-blob?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;IoT Edge Blob Storage Module&lt;/a&gt;, which can be used in conjunction with the CameraTaggingModule to store image captures locally and replicate them to the cloud.  Technically, this module is optional and the CameraTaggingModule can upload images directly to the cloud or CustomVision.AI without it, but it provides a more robust solution, allowing the end user to capture and store images without the need for outbound internet access.  &lt;/p&gt;

&lt;p&gt;Open the project created in the previous step and create a file named .env and supply it with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONTAINER_REGISTRY_NAME=
LOCAL_STORAGE_ACCOUNT_KEY=
LOCAL_STORAGE_ACCOUNT_NAME=camerataggingmodulelocal
DESTINATION_STORAGE_NAME=camerataggingmodulecloud
CLOUD_STORAGE_CONNECTION_STRING=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file will store key/value pairs that are used to replace values in deployment.template.json to produce a working deployment manifest. You will notice these entries in the deployment.template.json are prefixed with the '$' symbol.  This marks them as tokens for replacement during the generation of the deployment manifest.&lt;/p&gt;
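&lt;p&gt;The idea behind this token replacement can be sketched in a few lines of Python using &lt;code&gt;string.Template&lt;/code&gt;, which happens to use the same '$' prefix syntax.  This is only an illustration of the mechanism; in practice the Visual Studio Code tooling performs the substitution for you:&lt;/p&gt;

```python
from string import Template

# Values as they would be loaded from the .env file
env = {
    "LOCAL_STORAGE_ACCOUNT_NAME": "camerataggingmodulelocal",
    "DESTINATION_STORAGE_NAME": "camerataggingmodulecloud",
}

# A fragment of deployment.template.json containing a '$' token
template = '"target": "$DESTINATION_STORAGE_NAME"'

# Substitute the token to produce the manifest fragment
resolved = Template(template).substitute(env)
print(resolved)  # "target": "camerataggingmodulecloud"
```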

&lt;p&gt;For now, we will skip the &lt;code&gt;CONTAINER_REGISTRY_NAME&lt;/code&gt;, as it is only needed if you are pulling container images from a private registry.  Since the modules in our deployment are all publicly available, it is not needed at this time.&lt;/p&gt;

&lt;p&gt;Produce a value for &lt;code&gt;LOCAL_STORAGE_ACCOUNT_KEY&lt;/code&gt; by visiting &lt;a href="https://generate.plus/en/base64" rel="noopener noreferrer"&gt;GeneratePlus&lt;/a&gt;.  This will generate a random base64 encoded string that will be used to configure a secure connection to the local blob storage instance.  You will want to supply the entire result, which should end with two equal signs (&lt;em&gt;==&lt;/em&gt;).&lt;/p&gt;
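&lt;p&gt;If you prefer not to rely on a website, you can generate an equivalent key locally.  A minimal sketch in Python: encoding 16 random bytes produces a 24-character base64 string that ends with the expected two equal signs:&lt;/p&gt;

```python
import base64
import os

# 16 random bytes base64-encode to 24 characters ending in '==',
# matching the format expected for LOCAL_STORAGE_ACCOUNT_KEY.
key = base64.b64encode(os.urandom(16)).decode("ascii")
print(f"LOCAL_STORAGE_ACCOUNT_KEY={key}")
```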

&lt;p&gt;&lt;code&gt;LOCAL_STORAGE_ACCOUNT_NAME&lt;/code&gt; is best left as-is, but you are welcome to rename it, provided that it follows the naming format: only lowercase letters and numbers, between 3 and 24 characters long.&lt;/p&gt;
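&lt;p&gt;If you do choose your own name, the naming constraint is easy to check programmatically.  A small sketch where the regular expression encodes the rule above (lowercase letters and digits only, 3 to 24 characters):&lt;/p&gt;

```python
import re

# Lowercase letters and digits only, 3-24 characters total
NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_name(name: str) -> bool:
    """Return True if name satisfies the storage naming format."""
    return bool(NAME_RE.match(name))

print(is_valid_storage_name("camerataggingmodulelocal"))  # True
print(is_valid_storage_name("My-Storage"))                # False
```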

&lt;p&gt;&lt;code&gt;DESTINATION_STORAGE_NAME&lt;/code&gt; is supplied from an assumed-to-exist blob storage container in the Azure Cloud.  You can create this container by performing the following steps:&lt;/p&gt;

&lt;p&gt;Navigate to the Azure Marketplace and search for 'blob', then select &lt;strong&gt;Storage Account - blob, file, table, queue&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsztvtcvg4w0lxgrhxvkc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsztvtcvg4w0lxgrhxvkc.PNG" alt="Azure Marketplace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create the Storage Account using settings similar to below (note: the &lt;strong&gt;Storage account name&lt;/strong&gt; must be globally unique)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdrcgy23hxjrhj38vamgz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdrcgy23hxjrhj38vamgz.PNG" alt="Create Storage Account"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Review + Create&lt;/strong&gt; =&amp;gt; &lt;strong&gt;Create&lt;/strong&gt; to deploy the new Storage Account Resource.&lt;/p&gt;

&lt;p&gt;Navigate to your newly deployed Storage Account and select &lt;strong&gt;Containers&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw8976n6208bmoc8ubpz8.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw8976n6208bmoc8ubpz8.PNG" alt="Storage Overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new storage container named "camerataggingmodulecloud" as shown below (the name is important as it matches the value in the .env):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhscgl6xe3va4wc6328np.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhscgl6xe3va4wc6328np.PNG" alt="New Container"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CLOUD_STORAGE_CONNECTION_STRING&lt;/code&gt; can be obtained by visiting your newly created Storage Account and selecting &lt;strong&gt;Settings&lt;/strong&gt; =&amp;gt; &lt;strong&gt;Access Keys&lt;/strong&gt;.  Copy the entire contents of the &lt;strong&gt;Connection string&lt;/strong&gt; and supply this as the value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foyt0ngxlkqdx14439qux.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foyt0ngxlkqdx14439qux.PNG" alt="Obtain Connection String"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your completed .env file should look similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONTAINER_REGISTRY_NAME=
LOCAL_STORAGE_ACCOUNT_KEY=9LkgJa1ApIsISmuUHwonxg==
LOCAL_STORAGE_ACCOUNT_NAME=camerataggingmodulelocal
DESTINATION_STORAGE_NAME=camerataggingmodulecloud
CLOUD_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=camerataggingmodulestore;AccountKey=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000==;EndpointSuffix=core.windows.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One final and very important step: the Azure IoT Edge Blob Storage module runs as a special user with UID 11000.  This is alluded to in &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-store-data-blob#granting-directory-access-to-container-user-on-linux?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt; and requires some configuration on the host in order for the local blob storage to be able to write to the host directory.  I found that I had to perform this step when deploying to an AMD64 device, but it did not appear to be required for AARCH64 devices; your experience may vary.  &lt;/p&gt;

&lt;p&gt;We will simply create the expected directory that is specified as a bind mount in our deployment.template.json for &lt;em&gt;azureblobstorageoniotedge&lt;/em&gt; and assign it owner and group permissions for the user that the IoT Edge Blob Storage module runs as.  This will allow the module to write to this directory on the host filesystem.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the bind-mount target expected by the deployment template
sudo mkdir -p /data/containerdata
# The blob storage module runs as UID/GID 11000; grant it ownership and access
sudo chown -R 11000:11000 /data/containerdata
sudo chmod -R 700 /data/containerdata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
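&lt;p&gt;If the module later fails to write, it can help to confirm the directory ended up with the expected owner and mode.  A sketch of such a check in Python (&lt;code&gt;os.stat&lt;/code&gt; reports the numeric UID and permission bits; the &lt;code&gt;check_blob_dir&lt;/code&gt; helper name is my own):&lt;/p&gt;

```python
import os
import stat

def check_blob_dir(path: str, uid: int = 11000, mode: int = 0o700) -> bool:
    """Return True if path is owned by uid and its permission bits equal mode."""
    st = os.stat(path)
    return st.st_uid == uid and stat.S_IMODE(st.st_mode) == mode

# Example (run on the IoT Edge device after the commands above):
# print(check_blob_dir("/data/containerdata"))
```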



&lt;p&gt;We are now ready to generate the deployment manifest.&lt;/p&gt;

&lt;h1&gt;
  
  
  Generate and apply the deployment to the IoT Edge device
&lt;/h1&gt;

&lt;p&gt;In this section, we will use the .env file to produce a deployment manifest and then apply it to our IoT Edge device.  This deployment will work as-is across ARM64, ARM32, and AMD64 devices as it will reference platform agnostic image tags for all of the included modules (edgeAgent, edgeHub, CameraTaggingModule, azureblobstorageoniotedge).&lt;/p&gt;

&lt;p&gt;Within Visual Studio Code, right-click the deployment.template.json file and select &lt;strong&gt;Generate IoT Edge Deployment Manifest&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F184zp8obo6o4xpwbby3a.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F184zp8obo6o4xpwbby3a.PNG" alt="Generate Manifest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will create a new folder named &lt;em&gt;config&lt;/em&gt;, and within it there should be a deployment manifest created and named after the selected architecture.  Remember, while the name may imply that this is a platform-specific deployment, this particular deployment is platform agnostic and will work on ARM64, ARM32, or AMD64 platforms.  &lt;/p&gt;

&lt;p&gt;Right-Click this file and select &lt;strong&gt;Create Deployment for Single Device&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftzdvgejy5wvezqz7ctdu.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftzdvgejy5wvezqz7ctdu.PNG" alt="Apply Deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will be prompted to select a device that has been registered to your IoT Hub.  Choose the appropriate device and the deployment will be applied.  In a few moments, your device will begin running the supplied modules (edgeAgent, edgeHub, CameraTaggingModule, azureblobstorageoniotedge).&lt;/p&gt;

&lt;h1&gt;
  
  
  Using the Deployed CameraTaggingModule
&lt;/h1&gt;

&lt;p&gt;In this section we will explore features of the CameraTaggingModule and demonstrate how you can use it to capture images for use in training object detection and image classification models at &lt;a href="http://customvision.ai/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;CustomVision.AI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The CameraTaggingModule is configured to expose a web service on port 3000 of the IoT Edge device.  You can interact with it by visiting &lt;code&gt;http://&amp;lt;devicehostname&amp;gt;:3000&lt;/code&gt; or &lt;code&gt;http://&amp;lt;deviceipaddress&amp;gt;:3000&lt;/code&gt; in a compatible web browser.  The latest versions of &lt;a href="https://www.microsoft.com/en-us/edge?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Microsoft Edge&lt;/a&gt;, Firefox, and Chrome all worked for me.  &lt;/p&gt;

&lt;p&gt;Upon loading, the interface should present you with a playback of the Big Buck Bunny RTSP stream:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fme2jmtp981bz791wnd62.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fme2jmtp981bz791wnd62.PNG" alt="Camera Tagging Module"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can change the current RTSP stream by selecting &lt;strong&gt;Change Camera&lt;/strong&gt;, supplying a &lt;strong&gt;Camera Name&lt;/strong&gt; and associated &lt;strong&gt;RTSP Address&lt;/strong&gt;, and selecting &lt;strong&gt;Add Camera&lt;/strong&gt;.  Next, choose the newly added camera from the drop-down and select &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4ntujv88xqev93ng27um.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4ntujv88xqev93ng27um.PNG" alt="Change Camera"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is an example of changing the RTSP stream to one of my home security cameras:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0x93qjw57101tly8pjqn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0x93qjw57101tly8pjqn.PNG" alt="Home Camera"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can begin capturing images by selecting &lt;strong&gt;Capture&lt;/strong&gt;; you can then supply one or more tags and name the image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6dsw4tz76xybqy4xs5b7.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6dsw4tz76xybqy4xs5b7.PNG" alt="Capture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have collected a variety of images, they can be reviewed in the &lt;strong&gt;Images&lt;/strong&gt; panel:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foni24yf8j839rjyv27w9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foni24yf8j839rjyv27w9.PNG" alt="Images"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Uploading Tagged Images to CustomVision.AI
&lt;/h1&gt;

&lt;p&gt;You will need to have an existing project set up at &lt;a href="http://customvision.ai" rel="noopener noreferrer"&gt;CustomVision.AI&lt;/a&gt;.  You may refer to the following quickstarts to create an image classification or object detection model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Quickstart: How to build a classifier with Custom Vision&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/get-started-build-detector?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Quickstart: How to build an object detector with Custom Vision&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Within the CameraTaggingModule portal, select &lt;strong&gt;Upload Settings&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0r4h1gb47a1zx0rjnmms.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0r4h1gb47a1zx0rjnmms.PNG" alt="Upload Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, select &lt;strong&gt;Custom Vision&lt;/strong&gt; and you will be presented with the following configuration screen:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh7qpjuidoor409gaczd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh7qpjuidoor409gaczd8.png" alt="Custom Vision Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To obtain these values, navigate to your CustomVision.AI project and select the "Gear" icon.  This will present you with a screen that looks like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F91r5pxpgq75nknntcby6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F91r5pxpgq75nknntcby6.png" alt="Keys"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once these values are entered, you will be presented with a dropdown of your existing projects.  Choose the project you wish to upload your images to and select &lt;strong&gt;Push to Custom Vision&lt;/strong&gt;.  After a short while you should see &lt;strong&gt;Success&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Head back to your CustomVision.AI project to see your newly uploaded and tagged training images:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftfspwmyoajhdji31gl74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftfspwmyoajhdji31gl74.png" alt="Custom Vision Uploaded"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can see just how easy it can be to use the CameraTaggingModule to gather images from site deployments.  Next, we will show how to leverage the Blob Storage features to capture images locally and replicate them to the cloud for long-term storage.&lt;/p&gt;

&lt;h1&gt;
  
  
  Using the Blob Storage features of the CameraTaggingModule
&lt;/h1&gt;

&lt;p&gt;The CameraTaggingModule allows for storing image captures locally, where they can then be replicated to Azure Blob Storage, and also allows for direct upload to Azure Blob Storage (bypassing local storage).  As mentioned, this is highly useful in environments that restrict transmission of data to the outside world: captures can be held locally and archived to the cloud whenever connectivity permits.&lt;/p&gt;

&lt;p&gt;Within the CameraTaggingModule portal, select &lt;strong&gt;Upload Settings&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0r4h1gb47a1zx0rjnmms.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0r4h1gb47a1zx0rjnmms.PNG" alt="Upload Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, select &lt;strong&gt;Blob Storage&lt;/strong&gt; and you will be presented with the following options:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyx2p3pgpqvlbdhe64w3o.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyx2p3pgpqvlbdhe64w3o.PNG" alt="Blob Storage Uploads"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To upload directly to blob storage, select the &lt;strong&gt;Push to Blob Storage&lt;/strong&gt; option.  You will be presented with a screen asking to supply a &lt;strong&gt;Storage Connection String&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpffeoqraegweu2uw8mm8.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpffeoqraegweu2uw8mm8.PNG" alt="Connection String Prompt"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can obtain the string just as we did in previous steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkjdjytbgqxch0uidkdkv.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkjdjytbgqxch0uidkdkv.PNG" alt="ConnectionString"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once supplied, you should see a drop-down under &lt;strong&gt;Container Name&lt;/strong&gt;.  Choose the storage container that was created earlier then &lt;strong&gt;Push to Blob Store&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqdxz8q1ytukklx1wnxek.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqdxz8q1ytukklx1wnxek.PNG" alt="Push to Blob Store"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can verify that your images have been uploaded to the cloud by visiting your Storage container in the Azure Portal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7pknxyew1dcq3yfw7ut2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7pknxyew1dcq3yfw7ut2.PNG" alt="Verify Upload"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Connecting to the local Blob Storage Instance with Azure Storage Explorer
&lt;/h1&gt;

&lt;p&gt;We will now show how you can connect to the local Blob Storage instance using the Azure Storage Explorer.  This makes it easy to view the state of our local blob storage and perform operations on its data.&lt;/p&gt;

&lt;p&gt;First, let's store some image data into the local blob storage.  &lt;/p&gt;

&lt;p&gt;Within the CameraTaggingModule portal, select &lt;strong&gt;Upload Settings&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0r4h1gb47a1zx0rjnmms.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0r4h1gb47a1zx0rjnmms.PNG" alt="Upload Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, select &lt;strong&gt;Blob Storage&lt;/strong&gt; and you will be presented with the following options:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyx2p3pgpqvlbdhe64w3o.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyx2p3pgpqvlbdhe64w3o.PNG" alt="Blob Storage Uploads"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To upload directly to the local blob storage, select the &lt;strong&gt;Push to Local Storage&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;Next, we will install the &lt;a href="https://azure.microsoft.com/en-us/features/storage-explorer/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Azure Storage Explorer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On my Ubuntu 18.04 machine, I accomplished this with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo snap install storage-explorer
snap connect storage-explorer:password-manager-service :password-manager-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, you can launch the Storage Explorer with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storage-explorer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select the &lt;strong&gt;Open Connect Dialog&lt;/strong&gt; icon that looks like an electrical plug and select &lt;strong&gt;Use a Connection String&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftyonri878hzi9a5s3s8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftyonri878hzi9a5s3s8c.png" alt="Use a Connection String"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter &lt;code&gt;camerataggingmodulelocal&lt;/code&gt; for the &lt;strong&gt;Display name&lt;/strong&gt; then supply a &lt;strong&gt;Connection string&lt;/strong&gt; in the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DefaultEndpointsProtocol=http;BlobEndpoint=http://localhost:11002/camerataggingmodulelocal;AccountName=camerataggingmodulelocal;AccountKey=9LkgJa1ApIsISmuUHwonxg==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you used a different value for &lt;code&gt;LOCAL_STORAGE_ACCOUNT_KEY&lt;/code&gt; in your .env file, be sure to replace it with the appropriate value as shown:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fboswm86305lukd7ca8zv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fboswm86305lukd7ca8zv.png" alt="Configure Connection String"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you are configured appropriately, select &lt;strong&gt;Connect&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fntt13lpicty54mkjtmh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fntt13lpicty54mkjtmh6.png" alt="Connect"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, you can repeat this process and add the &lt;strong&gt;Connection String&lt;/strong&gt; used for &lt;code&gt;CLOUD_STORAGE_CONNECTION_STRING&lt;/code&gt; in your .env file to have access to the Azure Cloud Storage Instance we created earlier.&lt;/p&gt;

&lt;p&gt;Now you can explore the contents of the local and cloud blob storage containers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl5tyqttbxsr9ngumsbq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl5tyqttbxsr9ngumsbq0.png" alt="Storage Explorer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may notice that the local container is empty.  If the IoT Edge device has network connectivity to Microsoft Azure, it will delete uploads after they are transferred, based on the default desired properties configuration in deployment.template.json:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    "azureblobstorageoniotedge":{
      "properties.desired": {
        "deviceAutoDeleteProperties": {
          "deleteOn": false,
          "retainWhileUploading": true
        },
        "deviceToCloudUploadProperties": {
          "uploadOn": true,
          "uploadOrder": "OldestFirst",
          "cloudStorageConnectionString": "$CLOUD_STORAGE_CONNECTION_STRING",
          "storageContainersForUpload": {
            "$LOCAL_STORAGE_ACCOUNT_NAME": {
              "target": "$DESTINATION_STORAGE_NAME"
            }
          },
          "deleteAfterUpload": true
        }
      }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For details on how to modify these settings further, refer to &lt;a href="https://github.com/microsoft/vision-ai-developer-kit/tree/master/samples/official/camera-tagging" rel="noopener noreferrer"&gt;the official README in the camera-tagging GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Automating the CameraTaggingModule with DirectMethods
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-direct-methods" rel="noopener noreferrer"&gt;Direct Methods&lt;/a&gt; are a feature of &lt;a href="https://docs.microsoft.com/en-us/azure/iot-hub/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Azure IoT Hubs&lt;/a&gt; that can provide remote access to local methods defined in module code in a secure manner.  The CameraTaggingModule exposes methods that include &lt;a href="https://github.com/microsoft/vision-ai-developer-kit/tree/master/samples/official/camera-tagging#direct-methods" rel="noopener noreferrer"&gt;image capture, uploading captured images to CustomVision.AI, pushing captured images to local blob storage, and deleting all captured images&lt;/a&gt;.  By combining these elements together, we can automate an image capture process across any number of available RTSP feeds.&lt;/p&gt;

&lt;p&gt;Here is an example bash script &lt;code&gt;capture.sh&lt;/code&gt; that can trigger the execution of the &lt;code&gt;capture&lt;/code&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

#Configuration
iothubName=iothubname
deviceId=deviceId
#Generated with 'az iot hub generate-sas-token -n &amp;lt;iothubName&amp;gt; -du 31536000'
SharedAccessSignature='SharedAccessSignature sr=iothubname.azure-devices.net&amp;amp;sig=x&amp;amp;se=x&amp;amp;skn=iothubowner'

usage(){
  echo "***Camera Tagging Module Capture Script***"
  echo "Usage: ./capture.sh &amp;lt;rtsp_ip&amp;gt; &amp;lt;rtsp_port&amp;gt; &amp;lt;rtsp_path&amp;gt;"
}

capture(){
curl -X POST \
  https://$iothubName.azure-devices.net/twins/$deviceId/modules/CameraTaggingModule/methods?api-version=2018-06-30 \
  -H "Authorization: $SharedAccessSignature" \
  -H 'Content-Type: application/json' \
  -d "{
    \"methodName\": \"capture\",
    \"responseTimeoutInSeconds\": 200,
    \"payload\": {
        \"RTSP_IP\":\"$rtsp_ip\",
        \"RTSP_PORT\":\"$rtsp_port\",
        \"RTSP_PATH\":\"$rtsp_path\",
        \"TAGS\":[\"automatedCaptures\"]
    }
  }"
}


# Arguments
rtsp_ip=$1
rtsp_port=$2
rtsp_path=$3

# Check Arguments
[ "$#" -ne 3 ] &amp;amp;&amp;amp; { usage &amp;amp;&amp;amp; exit 1; } || capture
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is an example bash script &lt;code&gt;push.sh&lt;/code&gt; that can trigger pushing to the local blob storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

#Configuration
iothubName=iothubName
deviceId=deviceId
#Generated with 'az iot hub generate-sas-token -n &amp;lt;iothubName&amp;gt; -du 31536000'
SharedAccessSignature='SharedAccessSignature sr=iothubname.azure-devices.net&amp;amp;sig=x&amp;amp;se=x&amp;amp;skn=iothubowner'

curl -X POST \
  https://$iothubName.azure-devices.net/twins/$deviceId/modules/CameraTaggingModule/methods?api-version=2018-06-30 \
  -H "Authorization: $SharedAccessSignature" \
  -H 'Content-Type: application/json' \
  -d '{
    "methodName": "push",
    "responseTimeoutInSeconds": 200,
    "payload": {
        "MODULE_NAME":"azureblobstorageoniotedge",
        "STORAGE_PORT":"11002",
        "ACCOUNT_NAME":"camerataggingmodulelocal",
        "ACCOUNT_KEY":"jukoPNlrFwXR/eELSxryaw==",
        "DELETE":"true"
    }
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By combining these together, you can create a bash script &lt;code&gt;automateCapture.sh&lt;/code&gt; to automatically gather images in a loop and store them to the local blob storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

while true; do
        ./capture.sh 'username:password@rtsp_ip' rtsp_port rtsp_path
        sleep 15
        ./capture.sh 'username:password@rtsp_ip' rtsp_port rtsp_path
        sleep 15
        ./capture.sh 'username:password@rtsp_ip' rtsp_port rtsp_path
        sleep 15
        ./capture.sh 'username:password@rtsp_ip' rtsp_port rtsp_path
        sleep 15
        ./push.sh
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
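
&lt;p&gt;Before running, make the scripts executable (assuming all three live in the current directory):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x capture.sh push.sh automateCapture.sh
./automateCapture.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;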



&lt;p&gt;Now, if you've followed along up until this point, you might realize that we've just laid out the process for a system that can capture training samples automatically.  This means we can gather samples from various times of day in our natural environment without any manual effort! &lt;/p&gt;

&lt;p&gt;What's even more amazing is that the system can work even if outbound network access is cut off to the device!  That's right!  The azureblobstorageoniotedge module will keep a copy locally until network connectivity is restored, at which point it will upload the data into an Azure Cloud Storage container.  If the device is configured and networked to a &lt;a href="https://docs.microsoft.com/bs-latn-ba/azure/iot-edge/how-to-create-transparent-gateway?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;Transparent IoT Edge Gateway&lt;/a&gt;, the device can even continue to receive Direct Method invocations from the cloud!  If you are interested in learning more about this concept, there is an &lt;a href="https://docs.microsoft.com/en-us/learn/modules/set-up-iot-edge-gateway/?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;interactive module at Microsoft Learn that walks through the process of configuring an IoT Edge Gateway&lt;/a&gt;.  This is a prime example of Azure IoT Edge's powerful capabilities for orchestrating workloads at the edge: they can continue to work even when the device they run on is disconnected from the internet!&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;We have demonstrated a variety of features present in the &lt;a href="https://github.com/microsoft/vision-ai-developer-kit/tree/master/samples/official/camera-tagging" rel="noopener noreferrer"&gt;Azure IoT Edge Camera Tagging module&lt;/a&gt;.  With this guide, you should be able to successfully deploy and capture images for use in training custom image classification and object detection models using &lt;a href="http://customvision.ai?WT.mc_id=devto-cameratagging-pdecarlo" rel="noopener noreferrer"&gt;CustomVision.AI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As you have seen, this is an extremely powerful tool that allows you to gather training samples from the actual site of deployment, even while a solution is in production!  Hopefully this guide has made it easy to understand how all of the pieces work together and gives you the freedom to deploy it for your own scenarios.&lt;/p&gt;

&lt;p&gt;I am personally excited about the prospect of generating samples using my home security cameras to detect my dog, cat, vehicles and family members.  Armed with this knowledge, it becomes possible to create a highly accurate system that can be queried to produce insights like "How long has the car been in the driveway?",  "Where is the last place the cat was seen?", and "How many cars drove past the driveway last week?".  &lt;/p&gt;

&lt;p&gt;As the paradigm of AIoT continues to flourish, it is my expectation that we will begin to see consumer systems that are capable of tailoring computer vision workloads to specific environments.  The &lt;a href="https://github.com/microsoft/vision-ai-developer-kit/tree/master/samples/official/camera-tagging" rel="noopener noreferrer"&gt;Azure IoT Edge Camera Tagging module&lt;/a&gt; is an excellent tool that makes this process that much closer to a reality.  If you are interested in developing computer vision solutions at the Edge, I highly recommend it!&lt;/p&gt;

&lt;p&gt;Until next time,&lt;/p&gt;

&lt;p&gt;Happy Hacking! &lt;/p&gt;

&lt;p&gt;-Paul&lt;/p&gt;

</description>
      <category>computervision</category>
      <category>ai</category>
      <category>iot</category>
      <category>aiot</category>
    </item>
    <item>
      <title>Building jetson-containers for Nvidia devices on Windows 10 with VS Code and WSL v2</title>
      <dc:creator>Paul DeCarlo</dc:creator>
      <pubDate>Wed, 31 Jul 2019 20:30:59 +0000</pubDate>
      <link>https://forem.com/azure/building-jetson-containers-for-nvidia-devices-on-windows-10-with-vs-code-and-wsl-v2-1ao</link>
      <guid>https://forem.com/azure/building-jetson-containers-for-nvidia-devices-on-windows-10-with-vs-code-and-wsl-v2-1ao</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpbs.twimg.com%2Fmedia%2FEAqvVhfX4AEThem%3Fformat%3Djpg%26name%3Dlarge" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpbs.twimg.com%2Fmedia%2FEAqvVhfX4AEThem%3Fformat%3Djpg%26name%3Dlarge"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, we are going to walk through building &lt;a href="https://github.com/idavis/jetson-containers" rel="noopener noreferrer"&gt;Ian Davis's &lt;code&gt;jetson-containers&lt;/code&gt; project&lt;/a&gt; on Windows 10 using &lt;a href="https://code.visualstudio.com/Download?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;Visual Studio Code&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/en-us/windows/wsl/wsl2-index?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;version 2 of the Windows Subsystem for Linux&lt;/a&gt;. In a nutshell, the &lt;code&gt;jetson-containers&lt;/code&gt; project allows you to build CUDA compatible images for running GPU accelerated applications as containers.  The project supports all current Nvidia Jetson devices (&lt;a href="https://amzn.to/2WFE5zF" rel="noopener noreferrer"&gt;Nano&lt;/a&gt;, &lt;a href="https://amzn.to/3330jju" rel="noopener noreferrer"&gt;TX2&lt;/a&gt;, &lt;a href="https://amzn.to/2XMaSIL" rel="noopener noreferrer"&gt;Xavier&lt;/a&gt; etc.) and also supports third party carrier boards like the &lt;a href="http://connecttech.com/product/orbitty-carrier-for-nvidia-jetson-tx2-tx1/" rel="noopener noreferrer"&gt;Orbitty Carrier for Nvidia Jetson TX2/TX2i/TX1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Containerizing Nvidia CUDA drivers and the necessary runtime dependencies is exciting because it can be coupled with a container orchestrator solution like &lt;a href="https://docs.microsoft.com/en-us/azure/iot-edge/about-iot-edge?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;Azure IoT Edge&lt;/a&gt; to create cloud configurable IoT deployments which take advantage of GPU acceleration capabilities on Nvidia Jetson devices. This opens up a realm of possibilities for computer vision, ML processing, and other AI workloads that can now be deployed into edge environments and updated remotely using a cloud-defined configuration.  In my opinion, this will open the world up to a new paradigm of AI solutions that will be deployed into  areas never before considered and backed by first class software lifecycle support using &lt;a href="https://azure.microsoft.com/en-us/overview/iot?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;IoT services available in Microsoft Azure&lt;/a&gt;.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Installing the Windows Subsystem for Linux on Windows 10
&lt;/h2&gt;

&lt;p&gt;With Windows 10, Microsoft has released a new feature named the &lt;a href="https://docs.microsoft.com/en-us/windows/wsl/about?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;Windows Subsystem for Linux (WSL)&lt;/a&gt;. This feature allows you to run a bash shell directly on Windows in an Ubuntu-based environment. Within this environment you can cross-compile images for AARCH64 without the need for a separate Linux VM or server. Note that while WSL can be installed with other Linux variants, such as OpenSUSE, the following instructions have only been tested with Ubuntu.&lt;/p&gt;

&lt;p&gt;This feature is not supported in versions of Windows prior to Windows 10. In addition, it is available only for 64-bit versions of&lt;br&gt;
Windows.&lt;/p&gt;

&lt;p&gt;Full instructions to install WSL are available at &lt;a href="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;the official Microsoft Docs page for WSL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To install WSL on Windows 10 with Fall Creators Update installed (version &amp;gt;= 16215.0) do the following:&lt;/p&gt;

&lt;p&gt;1). Enable the Windows Subsystem for Linux feature&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Windows Features dialog (&lt;code&gt;OptionalFeatures.exe&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Enable 'Windows Subsystem for Linux'&lt;/li&gt;
&lt;li&gt;Click 'OK' and restart if necessary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2). Install Ubuntu&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Microsoft Store and search for "Ubuntu 18.04" or use &lt;a href="https://www.microsoft.com/store/productId/9N9TNGVNDL3Q" rel="noopener noreferrer"&gt;this link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click Install&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3). Complete Installation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open a cmd prompt and type "Ubuntu1804"&lt;/li&gt;
&lt;li&gt;Create a new UNIX user account (this is a separate account from your Windows account)&lt;/li&gt;
&lt;/ul&gt;
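
&lt;p&gt;If you prefer the command line, steps 1 and 2 can also be performed from an elevated PowerShell prompt.  This is a sketch of the equivalent commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Equivalent to enabling 'Windows Subsystem for Linux' in the Windows Features dialog
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

# Download and install the Ubuntu 18.04 distro without the Microsoft Store UI
Invoke-WebRequest -Uri https://aka.ms/wsl-ubuntu-1804 -OutFile Ubuntu.appx -UseBasicParsing
Add-AppxPackage .\Ubuntu.appx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;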

&lt;h1&gt;
  
  
  Step 2: Upgrade Windows Subsystem for Linux to v2
&lt;/h1&gt;

&lt;p&gt;WSL 2 is a new version of the architecture that powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows.  It is built on an entirely new architecture backed by a real Linux kernel, with the primary goals of increasing file system performance and adding full system call compatibility.  This changes how Linux binaries interact with Windows and your computer’s hardware, but still provides the same user experience as in WSL 1 (the current widely available version).  Individual Linux distros can be run either as a WSL 1 distro or as a WSL 2 distro, can be upgraded or downgraded at any time, and WSL 1 and WSL 2 distros can run side by side.&lt;/p&gt;

&lt;p&gt;Support for WSL v2 requires installation of WSL v1 as instructed in the previous section.&lt;/p&gt;

&lt;p&gt;Full instructions for upgrading to WSL v2 are available at &lt;a href="https://docs.microsoft.com/en-us/windows/wsl/wsl2-install?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;the official Microsoft Docs page for WSL v2&lt;/a&gt;.&lt;br&gt;
To install WSL v2 on Windows 10 (version &amp;gt;= 18917.0) do the following:&lt;/p&gt;

&lt;p&gt;1). Enable the Virtual Machine Platform feature&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Windows Features dialog (&lt;code&gt;OptionalFeatures.exe&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Enable 'Virtual Machine Platform'&lt;/li&gt;
&lt;li&gt;Click 'OK' and restart if necessary&lt;/li&gt;
&lt;/ul&gt;
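
&lt;p&gt;As with the WSL feature in Step 1, this can also be done from an elevated PowerShell prompt; a sketch using the documented cmdlet (run as Administrator and restart if prompted):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;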

&lt;p&gt;2). Set Ubuntu1804 to be backed by WSL 2&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Powershell and run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    wsl --set-version Ubuntu-18.04 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3). Verify Ubuntu-18.04 is using WSL v2&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Powershell and run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    wsl --list --verbose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify that the output looks like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      NAME            STATE           VERSION
    * Ubuntu-18.04    Running         2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After confirming that you are running Ubuntu-18.04 using WSL v2, you are ready to begin following the "Cross-Compilation" steps below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Setup Cross-Compilation Environment on Ubuntu using Windows Subsystem for Linux v2
&lt;/h2&gt;

&lt;p&gt;We will use the Ubuntu 18.04 WSL v2 environment to cross-compile AARCH64-compatible jetson-containers images capable of running on Nvidia Jetson hardware.&lt;/p&gt;

&lt;p&gt;1). Install the Nvidia SdkManager into the WSL v2 Environment&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;From your Windows host OS, obtain the .deb installer for the Nvidia SdkManager from &lt;a href="https://developer.nvidia.com/nvidia-sdk-manager" rel="noopener noreferrer"&gt;developer.nvidia.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: This requires a valid registered account at developer.nvidia.com&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Place the downloaded .deb installer into c:\sdkmanager&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open a WSL v2 compatible instance of Ubuntu-18.04 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the sdkmanager by running the following on the bash prompt of the WSL v2 instance:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    sudo apt update
    sudo apt install -y  libcanberra-gtk-module libgconf-2-4 libgtk-3-0 libxss1 libnss3 xvfb  
    sudo dpkg -i /mnt/c/sdkmanager/sdkmanager_*.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2). Install Cross-Compilation tools for AARCH64 support&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the dependencies needed for cross-compilation by running the following on the bash prompt of the WSL v2 instance:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    sudo apt install -y build-essential qemu-user-static binfmt-support

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3). Install Docker&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install a Linux-native instance of Docker by running the following on the bash prompt of the WSL v2 instance:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    sudo apt install -y curl
    curl -fsSL https://get.docker.com | bash
    sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Building Jetson Containers using Windows Subsystem for Linux and Visual Studio Code
&lt;/h2&gt;

&lt;p&gt;After you have set up the cross-compilation environment on Ubuntu using Windows Subsystem for Linux v2, you are ready to begin building jetson-containers.&lt;/p&gt;

&lt;p&gt;1). Configure Visual Studio Code&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install &lt;a href="https://code.visualstudio.com/Download?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;Visual Studio Code&lt;/a&gt; onto the Windows host OS&lt;/li&gt;
&lt;li&gt;Install and enable the &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl" rel="noopener noreferrer"&gt;Remote WSL extension&lt;/a&gt; from the Visual Studio Marketplace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fkhtrm6whfey6rajqnqz0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fkhtrm6whfey6rajqnqz0.PNG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2). Start docker and binfmt-support services&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These services must be manually started each time a new instance of WSL v2 is started on the host machine.  To start them, execute the following on the bash prompt of the WSL v2 instance:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  sudo service docker start
  sudo service binfmt-support start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: You may want to add these lines to your &lt;code&gt;~/.bashrc&lt;/code&gt; file if you wish to start these services automatically when starting a new WSL v2 instance.&lt;/p&gt;
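
&lt;p&gt;For example, the following &lt;code&gt;~/.bashrc&lt;/code&gt; snippet (an illustrative sketch) checks whether each service is already running before starting it, so that opening additional shells does not produce errors:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # Start the services required for cross-compilation if they are not already running
  service docker status &amp;gt; /dev/null 2&amp;gt;&amp;amp;1 || sudo service docker start
  service binfmt-support status &amp;gt; /dev/null 2&amp;gt;&amp;amp;1 || sudo service binfmt-support start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;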

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F4gulm8a7am7jgcslvsf7.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F4gulm8a7am7jgcslvsf7.PNG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3). Clone jetson-containers project into WSL v2 environment and Open in VS Code&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone the jetson-containers project and open in VS Code by running the following on the bash prompt of the WSL v2 instance:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  git clone https://github.com/idavis/jetson-containers.git
  cd jetson-containers
  code .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4). Run build tasks in VS Code to create jetson-containers images&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the previous step, a new Visual Studio Code instance should have opened.  If not, open a new instance of VS Code on the host OS.  Copy the included .envtemp to .env and supply appropriate values according to the included README.md.  Next, while inside the VS Code instance, press "CTRL+SHIFT+B" to bring up a list of available build tasks and select one to begin building the associated jetson-containers image(s)&lt;/li&gt;
&lt;/ul&gt;
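
&lt;p&gt;The copy step above can be sketched as follows (run from the root of the jetson-containers checkout inside the WSL v2 instance; substitute your preferred editor for &lt;code&gt;nano&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  cp .envtemp .env
  # Edit .env and supply the values described in README.md (e.g. NV_USER)
  nano .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;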

&lt;p&gt;Note: You must build one of the &amp;lt;jetpack-dependencies&amp;gt; dependency images before building subsequent containers.  This requires that you have an active account from &lt;a href="https://developer.nvidia.com" rel="noopener noreferrer"&gt;https://developer.nvidia.com&lt;/a&gt; and that you have specified that user as the value of &lt;code&gt;NV_USER&lt;/code&gt; in the .env file you created.  Please be aware that you will be prompted for the account password while the SDKs are retrieved during the build process for any of these tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fnzy3ltpj9b7lu8crv0fa.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fnzy3ltpj9b7lu8crv0fa.PNG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It is amazing to see that the &lt;a href="https://docs.microsoft.com/en-us/windows/wsl/wsl2-index?WT.mc_id=devto-jetsoncontainers-pdecarlo" rel="noopener noreferrer"&gt;Windows Subsystem for Linux v2&lt;/a&gt; is capable of building native AARCH64-compatible containers with CUDA support from Visual Studio Code.  WSL v2 brings a fully-compatible Linux subsystem with 100% Linux syscall compatibility, allowing you to build virtually anything you can build in a native Linux environment.  When coupled with the &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl" rel="noopener noreferrer"&gt;Remote WSL extension&lt;/a&gt;, we can perform builds against the Linux subsystem from Windows 10, allowing for a truly cross-platform development experience.&lt;/p&gt;

&lt;p&gt;What is also amazing is that Ian Davis has curated support for every currently available Jetson platform into his &lt;a href="https://github.com/idavis/jetson-containers" rel="noopener noreferrer"&gt;&lt;code&gt;jetson-containers&lt;/code&gt; project&lt;/a&gt;, allowing us to build containers for a completely foreign architecture (ARM64) that are compatible with Nvidia GPU-accelerated hardware.  Since WSL v2 lets us build nearly anything that native Linux can, we can now bring the ability to develop GPU-accelerated container solutions to developers on Windows using VS Code with WSL v2.&lt;/p&gt;

&lt;p&gt;As development of &lt;code&gt;jetson-containers&lt;/code&gt; progresses, you can expect to see more details on how to use it in my upcoming articles. For a look at related content on dev.to, check out &lt;a href="https://dev.to/azure/supercharge-your-containerized-iot-workloads-with-gpu-acceleration-on-nvidia-jetson-devices-4532"&gt;Supercharge your containerized IoT workloads with GPU Acceleration on Nvidia Jetson devices&lt;/a&gt;, &lt;a href="https://dev.to/azure/getting-started-with-iot-edge-development-on-nvidia-jetson-devices-2dfl"&gt;Getting Started with IoT Edge Development on Nvidia Jetson Devices&lt;/a&gt;, &lt;a href="https://dev.to/azure/getting-started-with-devops-ci-cd-pipelines-on-nvidia-arm64-devices-4668"&gt;Getting started with DevOps CI / CD Pipelines on Nvidia ARM64 Devices&lt;/a&gt;, and &lt;a href="https://dev.to/azure/using-cognitive-services-containers-with-azure-iot-edge-1e5a"&gt;Using Cognitive Services Containers with Azure IoT Edge&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Until next time, Happy Hacking!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>iot</category>
      <category>ai</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
