<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Chris McKenzie</title>
    <description>The latest articles on Forem by Chris McKenzie (@kenzic).</description>
    <link>https://forem.com/kenzic</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1163492%2Fff150d02-3340-4100-8224-3813d77366e2.JPG</url>
      <title>Forem: Chris McKenzie</title>
      <link>https://forem.com/kenzic</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kenzic"/>
    <language>en</language>
    <item>
      <title>Real-Time Face Tracking in the Browser with MediaPipe</title>
      <dc:creator>Chris McKenzie</dc:creator>
      <pubDate>Mon, 21 Jul 2025 16:06:31 +0000</pubDate>
      <link>https://forem.com/kenzic/real-time-face-tracking-in-the-browser-with-mediapipe-22c9</link>
      <guid>https://forem.com/kenzic/real-time-face-tracking-in-the-browser-with-mediapipe-22c9</guid>
      <description>&lt;h1&gt;Google MediaPipe&lt;/h1&gt;

&lt;p&gt;Google MediaPipe is a suite of libraries and tools that make it very simple to drop ML into apps — supporting vision, text and audio tasks — without needing to be an ML expert or spin up expensive cloud infrastructure. It runs fast, &lt;strong&gt;on-device&lt;/strong&gt;, and gives you tools to build interactive experiences that work in the real world.&lt;/p&gt;

&lt;p&gt;The standout feature: &lt;strong&gt;on-device inference&lt;/strong&gt; with support for Android, iOS, Python, and the web. That means no round-trips, no user data sent to servers, and minimal latency.&lt;/p&gt;

&lt;p&gt;MediaPipe supports a wide range of use cases, such as &lt;a href="https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference" rel="noopener noreferrer"&gt;LLM inference&lt;/a&gt;, &lt;a href="https://ai.google.dev/edge/mediapipe/solutions/vision/object_detector" rel="noopener noreferrer"&gt;object detection&lt;/a&gt;, &lt;a href="https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer" rel="noopener noreferrer"&gt;gesture recognition&lt;/a&gt;, and much more. Demoing the entire API would turn this post into an epic, so to keep it focused I’ll demo &lt;a href="https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker" rel="noopener noreferrer"&gt;Face Landmark detection&lt;/a&gt;. For more, see the &lt;a href="https://ai.google.dev/edge/mediapipe/solutions/guide" rel="noopener noreferrer"&gt;docs&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;Face Landmark&lt;/h1&gt;

&lt;p&gt;MediaPipe’s Face Landmarker lets you track 3D face landmarks and expressions in real time — whether from single frames or live video. You get blendshape scores for expressions, 3D points for facial geometry, and matrices for applying effects. Great for filters, avatars, or anything that takes facial input.&lt;/p&gt;

&lt;p&gt;This demo uses the &lt;a href="https://ai.google.dev/edge/mediapipe/solutions/vision/face_detector#blazeface_short-range" rel="noopener noreferrer"&gt;BlazeFace (short-range)&lt;/a&gt; model, which is optimized for selfie cameras.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;At the time of writing, BlazeFace (short-range) is the only model available for this task, but BlazeFace (full-range) and BlazeFace Sparse (full-range) are coming soon, and may be worth checking out if the BlazeFace (short-range) doesn’t work for your use case.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;Demo&lt;/h1&gt;

&lt;p&gt;Try the working demo here:&lt;br&gt;
👉 &lt;a href="https://monkey-ears-filter.vercel.app/" rel="noopener noreferrer"&gt;https://monkey-ears-filter.vercel.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or run locally:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clone repo&lt;/strong&gt; &lt;code&gt;git clone git@github.com:kenzic/monkey-ears-filter.git &amp;amp;&amp;amp; cd monkey-ears-filter&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start server&lt;/strong&gt; &lt;code&gt;npm run start&lt;/code&gt; → Then open &lt;a href="http://127.0.0.1:3030/" rel="noopener noreferrer"&gt;http://127.0.0.1:3030/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;monkey-ears-filter/public&lt;/code&gt; folder, the core logic is in &lt;code&gt;filter.js&lt;/code&gt;.&lt;/p&gt;
&lt;h1&gt;What the code does&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Imports &amp;amp; Constants&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Pulls in FaceLandmarker &amp;amp; FilesetResolver from MediaPipe’s vision tasks.&lt;/li&gt;
&lt;li&gt;  Defines landmark indices for outer eyes and ears.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DOM References&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Gets video and canvas elements and 2D context, setting up an overlay on the webcam feed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ear Image Loader&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  makeEar() returns a new transparent ear image on each call.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Webcam Setup (setupCamera)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Requests user media, attaches it to the video element, and waits for metadata before proceeding.&lt;/li&gt;
&lt;/ul&gt;
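&lt;p&gt;For reference, a camera setup along these lines would do the job. This is a minimal sketch of the idea, not the repo’s exact source; the parameter names are assumptions:&lt;/p&gt;

```javascript
// Sketch of a webcam setup helper (assumed shape, not the repo's exact code).
// Requests the camera, attaches the stream to the video element, and resolves
// once metadata is loaded so videoWidth/videoHeight are known before rendering.
async function setupCamera(video) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "user" }, // front-facing (selfie) camera
    audio: false,
  });
  video.srcObject = stream;
  // Wait for metadata; the canvas sizing in the render loop depends on it.
  await new Promise((resolve) => {
    video.onloadedmetadata = () => resolve();
  });
  await video.play();
  return video;
}
```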

&lt;p&gt;&lt;strong&gt;Model Initialization (loadFaceLandmarker)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Loads WASM runtime via FilesetResolver.forVisionTasks(…).&lt;/li&gt;
&lt;li&gt;  Creates a FaceLandmarker in LIVE_STREAM mode: GPU delegate, up to two faces, blendshape output.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tilt Calculation (getRollAngle)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Computes head tilt using atan2 of two eye landmarks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ear Overlay (drawEars)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Positions, rotates, and draws mirrored ear images at detected ear landmark positions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Render Loop (render)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Matches canvas size to video, runs faceLandmarker.detectForVideo(…), overlays video + ear images, and loops via requestAnimationFrame.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Startup (main)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  On button click: hides UI, starts camera, loads model, and begins the render loop.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main components of this app are loading the model, handling the overlay, and rendering it to the screen.&lt;/p&gt;
&lt;h1&gt;🧠 Model Initialization&lt;/h1&gt;

&lt;p&gt;This part initializes the WebAssembly runtime and the face landmarking model:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;loadFaceLandmarker&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;filesetResolver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;FilesetResolver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forVisionTasks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.3/wasm&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;faceLandmarker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;FaceLandmarker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createFromOptions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filesetResolver&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;baseOptions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;modelAssetPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;delegate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GPU&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;outputFaceBlendshapes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;runningMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;LIVE_STREAM&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;numFaces&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;FilesetResolver.forVisionTasks&lt;/strong&gt;(…) downloads MediaPipe’s WASM runtime optimized for your browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;createFromOptions&lt;/strong&gt;(…) sets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  baseOptions.delegate: "GPU" → uses WebGL/WebGPU to accelerate inference.&lt;/li&gt;
&lt;li&gt;  outputFaceBlendshapes: true → returns blendshape coefficients for expression data.&lt;/li&gt;
&lt;li&gt;  runningMode: "LIVE_STREAM" → asynchronous video mode for real-time streams.&lt;/li&gt;
&lt;li&gt;  numFaces: 2 → allows up to two faces to be tracked.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;: faceLandmarker is now a live-stream-ready tracker with GPU acceleration, blendshape output for expressions, and support for up to two faces.&lt;/p&gt;

&lt;h1&gt;🎧 Ear Overlay&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;drawEars&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;landmarks&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;makeEar&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// fresh Image each frame&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;left&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;landmarks&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;LANDMARKS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LEFT_EAR&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;right&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;landmarks&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;LANDMARKS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RIGHT_EAR&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;angle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getRollAngle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;landmarks&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// gets head tilt&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;w&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;h&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;translate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;left&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;left&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rotate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;angle&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drawImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;h&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;h&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;restore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;translate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;right&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;right&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rotate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;angle&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// mirror image&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drawImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;h&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;h&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;restore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Positioning&lt;/strong&gt;: landmarks are normalized ([0,1]), so we multiply by canvas dimensions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Tilt rotation&lt;/strong&gt;: angle (via atan2) tilts the ears to match your head roll.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Mirroring right ear&lt;/strong&gt;: we invert with scale(-1,1) so ears attach with correct orientation.
&lt;/li&gt;
&lt;/ul&gt;
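&lt;p&gt;The normalized-to-pixel mapping in the first bullet boils down to a one-liner. This helper is hypothetical; the demo inlines the math directly in drawEars:&lt;/p&gt;

```javascript
// Convert a normalized MediaPipe landmark ({ x, y } in [0, 1]) to canvas
// pixel coordinates. Hypothetical helper, shown for clarity only.
function toPixels(landmark, canvasWidth, canvasHeight) {
  return {
    x: landmark.x * canvasWidth,
    y: landmark.y * canvasHeight,
  };
}

// e.g. a landmark at (0.5, 0.25) on a 640x480 canvas maps to (320, 120)
```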

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getRollAngle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;landmarks&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;landmarks&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;LANDMARKS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RIGHT_EYE_OUTER&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;B&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;landmarks&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;LANDMARKS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LEFT_EYE_OUTER&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;atan2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;B&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;B&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;🎯 Render Loop&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;video&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;videoWidth&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;video&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;videoHeight&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;faceLandmarker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detectForVideo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;video&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;performance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clearRect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drawImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;video&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;faceLandmarks&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;drawEars&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;faceLandmarks&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nf"&gt;requestAnimationFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;render&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Sizing&lt;/strong&gt;: ensures canvas matches video.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Inference&lt;/strong&gt;: uses detectForVideo(), feeding current frame + timestamp for the live-stream model.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Drawing&lt;/strong&gt;: clears canvas, redraws video, and overlays ears if at least one face is detected.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Looping&lt;/strong&gt;: requestAnimationFrame(render) schedules the next frame, keeping the overlay in sync with the video in real time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Final Thoughts&lt;/h1&gt;

&lt;p&gt;MediaPipe doesn’t get much hype, but it’s one of the most practical tools for real-time ML on-device. No servers. No latency. No user data handoffs just to track a face. That’s a big deal for privacy, performance, and reliability.&lt;/p&gt;

&lt;p&gt;You can still use whatever backend stack you like — but with MediaPipe, you don’t &lt;em&gt;have&lt;/em&gt; to. This changes the equation: local-first ML is finally viable. This demo barely scratches the surface. MediaPipe supports gesture detection, object tracking, and even early LLM integration. If you’re building interfaces that respond to people in real time, this deserves a place in your toolbox.&lt;/p&gt;




&lt;p&gt;To stay connected and share your journey, feel free to reach out through the following channels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  👨‍💼 &lt;a href="https://www.linkedin.com/in/christopherjmckenzie/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;: Join me for more insights into AI development and tech innovations.&lt;/li&gt;
&lt;li&gt;  🤖 &lt;a href="https://www.linkedin.com/groups/13176499/" rel="noopener noreferrer"&gt;JavaScript + AI&lt;/a&gt;: Join the JavaScript and AI group and share what you’re working on.&lt;/li&gt;
&lt;li&gt;  💻 &lt;a href="https://github.com/kenzic" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;: Explore my projects and contribute to ongoing work.&lt;/li&gt;
&lt;li&gt;  📚 &lt;a href="https://medium.com/@kenzic" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;: Follow my articles for more in-depth discussions on the intersection of JavaScript and AI.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mediapipe</category>
      <category>javascript</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Bringing AI to the Browser: Transform Your Web App with On-Device Models</title>
      <dc:creator>Chris McKenzie</dc:creator>
      <pubDate>Sun, 20 Jul 2025 01:39:02 +0000</pubDate>
      <link>https://forem.com/kenzic/bringing-ai-to-the-browser-transform-your-web-app-with-on-device-models-1bg1</link>
      <guid>https://forem.com/kenzic/bringing-ai-to-the-browser-transform-your-web-app-with-on-device-models-1bg1</guid>
      <description>&lt;h1&gt;TL;DR&lt;/h1&gt;

&lt;p&gt;Tired of bleeding money on AI API costs? Here’s the fix: run AI models directly in your users’ browsers. I’ve built Browser.AI, a prototype that lets you tap into on-device AI through a simple window.ai API — think localStorage, but for AI. No more API fees, better privacy, and faster performance. Users get control over their data and which models to run, while you get to scale your AI features without scaling costs. Best part? It works offline and isn’t locked to any specific vendor. Check out the prototype, contribute to the &lt;a href="https://github.com/WICG/proposals/issues/178" rel="noopener noreferrer"&gt;W3C proposal&lt;/a&gt;, and let’s make AI more accessible for developers everywhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  🚫 No more API costs&lt;/li&gt;
&lt;li&gt;  🔒 Privacy-first (data stays on device)&lt;/li&gt;
&lt;li&gt;  ⚡ Better performance (no server roundtrips)&lt;/li&gt;
&lt;li&gt;  🔌 Works offline&lt;/li&gt;
&lt;li&gt;  🎮 Users control their models&lt;/li&gt;
&lt;li&gt;  💻 Simple window.ai API&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Intro&lt;/h1&gt;

&lt;p&gt;You’ve built something awesome — an AI-powered app for your startup or passion project. At first, you lean on OpenAI’s API to get it out the door quickly, and things go well. You gain users, feedback is positive, and it feels like you’re on the right track. But then reality hits: the API costs are ballooning, and each new user brings you closer to burning through your budget. Sound familiar?&lt;/p&gt;

&lt;p&gt;This is the problem every small team building AI apps eventually faces: powerful AI capabilities come with a steep price tag. You can’t keep paying these rising costs without killing your runway. But what’s the alternative? Do you compromise on features, slow down your roadmap, or drop out of the AI race altogether?&lt;/p&gt;

&lt;p&gt;No. There’s a smarter solution — &lt;strong&gt;on-device AI models.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By shifting AI workloads directly onto your users’ devices, you can eliminate those crushing API costs, speed up your app, and give users more control over their data. Best of all, this approach makes your product more scalable, with zero trade-offs in performance. You keep your hard-earned dollars, your users keep their privacy, and everyone wins.&lt;/p&gt;

&lt;p&gt;In this article, I’m going to break down the &lt;strong&gt;why&lt;/strong&gt; and &lt;strong&gt;how&lt;/strong&gt; of on-device AI, walk you through a prototype I built called &lt;strong&gt;Browser.AI&lt;/strong&gt;, and show you how this approach is set to transform the way developers integrate AI into their web apps.&lt;/p&gt;

&lt;h1&gt;The Big Idea&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Imagine tapping into powerful AI models directly from the browser, using an API as familiar as&lt;/strong&gt; &lt;code&gt;window.fetch&lt;/code&gt; &lt;strong&gt;or&lt;/strong&gt; &lt;code&gt;window.localStorage&lt;/code&gt;&lt;strong&gt;.&lt;/strong&gt; No more server-side processing, no more exorbitant API fees. Just seamless, on-device AI that you control.&lt;/p&gt;

&lt;p&gt;Here’s what that could mean for you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Unified API Access:&lt;/strong&gt; A standard &lt;code&gt;window.ai&lt;/code&gt; API that works across all browsers.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Model Flexibility:&lt;/strong&gt; Support for open-source models, so you’re not locked into any vendor.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;User Empowerment:&lt;/strong&gt; Users can choose which models they enable, enhancing privacy and control.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just as the MediaDevices API provides access to hardware like the camera or microphone, the &lt;code&gt;window.ai&lt;/code&gt; API could provide access to AI models installed on the user's device.&lt;/p&gt;
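&lt;p&gt;In code, that could look something like this. It is a sketch against the proposed surface; method names such as &lt;code&gt;permissions.request&lt;/code&gt;, &lt;code&gt;createSession&lt;/code&gt;, and &lt;code&gt;complete&lt;/code&gt; are illustrative, not a shipped API:&lt;/p&gt;

```javascript
// Sketch of the proposed window.ai flow. All names here (permissions.request,
// createSession, complete) are illustrative; no browser ships this API today.
async function summarize(text) {
  // 1. Ask the user for access to an on-device model, like camera/mic prompts.
  const granted = await window.ai.permissions.request({ model: "text" });
  if (!granted) throw new Error("Model access denied");

  // 2. Open a session with whichever local model the user has enabled.
  const session = await window.ai.createSession({ task: "completion" });

  // 3. Run inference entirely on-device; no network round-trip, no API bill.
  return session.complete(`Summarize: ${text}`);
}
```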

&lt;h1&gt;Who Benefits&lt;/h1&gt;

&lt;p&gt;This approach benefits anyone building AI-powered apps, from indie developers to large engineering teams, along with their users. Here’s how:&lt;/p&gt;

&lt;h2&gt;Developers Get:&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Cost Efficiency:&lt;/strong&gt; Say goodbye to skyrocketing API bills. By running AI models directly in the user’s browser, you’re offloading processing to their hardware — at zero extra cost to you.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Improved Performance:&lt;/strong&gt; No more latency from server round-trips. On-device AI can make your app snappier, providing a better user experience.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scalability:&lt;/strong&gt; With the heavy lifting done on the client side, scaling your app to more users doesn’t mean increased server costs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Flexibility and Control:&lt;/strong&gt; You’re not tied to a specific AI provider. Use open-source models and switch them out as you see fit.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Users Get:&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Enhanced Privacy:&lt;/strong&gt; Data stays on their device, reducing concerns about data being sent to third parties.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Offline Functionality:&lt;/strong&gt; Features work even without an internet connection — think editing a document on a plane with full AI assistance.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Control Over Resources:&lt;/strong&gt; Users can decide which models to enable, giving them more say in their experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You cut costs and ditch third-party dependencies. Users get data control and a smoother experience.&lt;/p&gt;

&lt;h1&gt;
  
  
  How It Works
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw8lpsc14f9u9t61864s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw8lpsc14f9u9t61864s.png" alt="captionless image" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Think of this like accessing speech synthesis or camera features through browser APIs, but for AI models. With processors now optimized for on-device AI, local models like LLMs and CNNs can power features directly on users’ devices, much like Apple, Microsoft, and Google are already doing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Browser-Level Integration:&lt;/strong&gt; The idea is an API like &lt;code&gt;window.ai&lt;/code&gt;, built right into the browser, that provides access to AI models on the user’s device.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Model Management:&lt;/strong&gt; Users manage which AI models are installed and control site access, similar to camera or microphone permissions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cross-Browser Compatibility:&lt;/strong&gt; A standardized API means developers write code once, and it works across all browsers, regardless of the specific AI model in use.&lt;/li&gt;
&lt;/ul&gt;
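&lt;p&gt;None of this exists in browsers today, so a real page would need to feature-detect the API and fall back to a hosted endpoint when it's absent. A minimal sketch (the &lt;code&gt;window.ai&lt;/code&gt; shape here is the proposal's, not a shipping API; the &lt;code&gt;win&lt;/code&gt; parameter is injected purely so the logic is testable outside a browser):&lt;/p&gt;

```javascript
// Decide whether a page can use on-device AI or must fall back to a server.
// `win` stands in for the browser's window object.
function chooseBackend(win) {
  const ai = win && win.ai;
  // The proposed surface: a permissions namespace and a model namespace.
  if (ai && ai.permissions && ai.model) {
    return "local";
  }
  return "remote"; // e.g. your existing hosted inference endpoint
}
```

&lt;p&gt;A page would call &lt;code&gt;chooseBackend(window)&lt;/code&gt; once at startup and route inference accordingly.&lt;/p&gt;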

&lt;h2&gt;
  
  
  Technical Details
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;JavaScript API:&lt;/strong&gt; The &lt;code&gt;window.ai&lt;/code&gt; API provides methods for permissions, model info, and creating sessions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Permissions:&lt;/strong&gt; Before accessing a model, your app requests permission from the user.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Sessions:&lt;/strong&gt; Once permission is granted, you establish a session with the model to perform tasks like text completion or embeddings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When This Could Happen
&lt;/h2&gt;

&lt;p&gt;Progress is already underway — I’ve submitted a &lt;a href="https://github.com/WICG/proposals/issues/178" rel="noopener noreferrer"&gt;proposal to the W3C&lt;/a&gt;, and if this idea resonates, I’d love your feedback. &lt;a href="https://github.com/explainers-by-googlers/prompt-api/" rel="noopener noreferrer"&gt;Google Chrome&lt;/a&gt; is also exploring this space, but I see some gaps in their approach, which I’ll get into later.&lt;/p&gt;

&lt;h1&gt;
  
  
  Alternatives
&lt;/h1&gt;

&lt;p&gt;Before we dive into Browser.AI, let’s look at existing options and where I see their limitations:&lt;/p&gt;

&lt;h2&gt;
  
  
  window.ai Extension
&lt;/h2&gt;

&lt;p&gt;I really like this project — they’re doing a lot of smart things, especially in giving users control over which models they use. Definitely worth checking out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Gives users control over which models to use.&lt;/li&gt;
&lt;li&gt;  Supports both local and remote models.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Adoption Barrier:&lt;/strong&gt; Users need to install an extension — not ideal for mass adoption.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Technical Complexity:&lt;/strong&gt; Geared towards users comfortable with setting up API keys and local models.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Mobile Limitations:&lt;/strong&gt; Extensions aren’t available on mobile browsers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Chrome window.ai Implementation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What It Is:&lt;/strong&gt; An experimental API built into Chrome, offering basic AI features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Integrated directly into the browser — no extension required.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Limited Models:&lt;/strong&gt; Uses Google’s own models, which tend to perform poorly compared to alternatives.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Ecosystem Fragmentation:&lt;/strong&gt; Risks each browser supporting only their preferred models.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Lack of Flexibility:&lt;/strong&gt; Limited control for developers and users over which models to use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  transformers.js
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What It Is:&lt;/strong&gt; A JavaScript library that runs AI models in the browser using WebAssembly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Runs entirely in the browser — no server required.&lt;/li&gt;
&lt;li&gt;  Leverages open-source models.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Redundant Downloads:&lt;/strong&gt; Each website must package and serve the model, leading to duplicate downloads.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Performance Overhead:&lt;/strong&gt; Large models can impact load times and performance.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Complex Integration:&lt;/strong&gt; Managing models and performance optimization can be challenging.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While I do think there are downsides, it’s still a great project. If you’re interested in experimenting with this approach, you can check out my article, &lt;a href="https://medium.com/@kenzic/run-models-in-the-browser-with-transformers-js-2d0983ba3ce9" rel="noopener noreferrer"&gt;Run Models in the Browser With Transformers.js&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Introducing Browser.AI
&lt;/h1&gt;

&lt;p&gt;The future of AI on the web isn’t about relying on costly third-party services. It’s about putting power back in the hands of developers and users. That’s where &lt;strong&gt;Browser.AI&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnox3az9cf46nec8mbgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnox3az9cf46nec8mbgb.png" alt="captionless image" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s designed around three key principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Keep it simple&lt;/strong&gt;: An intuitive API that feels natural for JavaScript developers.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Flexibility and control for users and developers&lt;/strong&gt;: Users choose which models to allow, and developers pick the models that fit their apps — ensuring cross-browser compatibility and avoiding vendor lock-in. In other words, the API is model-agnostic.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Boost performance and privacy&lt;/strong&gt;: Running AI locally reduces latency, keeps data private, and enables offline functionality for a smoother experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Browser.AI is a &lt;strong&gt;working prototype&lt;/strong&gt; that demonstrates how developers can run AI models directly on users’ devices through an API on the &lt;code&gt;window&lt;/code&gt; object. This approach enhances privacy, speed, and offline capabilities for web apps.&lt;/p&gt;

&lt;h1&gt;
  
  
  Developing with the API
&lt;/h1&gt;

&lt;p&gt;To try it firsthand, you can &lt;a href="https://browser.christophermckenzie.com/" rel="noopener noreferrer"&gt;download the app&lt;/a&gt; or &lt;a href="https://github.com/kenzic/browser.ai" rel="noopener noreferrer"&gt;clone the repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For more information on the architecture, see the README in the project repo. The API is as follows:&lt;/p&gt;

&lt;h1&gt;
  
  
  Browser.AI API
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Permissions API
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Permissions API&lt;/strong&gt; allows developers to request permission for using specific AI models and see which models are already enabled.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Request permission&lt;/strong&gt;:
&lt;code&gt;window.ai.permissions.request({ model: 'name', silent: true })&lt;/code&gt;
Prompts the user to grant permission for a model (e.g., &lt;code&gt;'llama3.2'&lt;/code&gt;). Returns &lt;code&gt;true&lt;/code&gt; if approved.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;List enabled models&lt;/strong&gt;:
&lt;code&gt;window.ai.permissions.models()&lt;/code&gt;
Retrieves a list of models the user has enabled.&lt;/li&gt;
&lt;/ul&gt;
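&lt;p&gt;Put together, a page might check the list of enabled models before prompting the user. This is a sketch against the prototype's documented calls; I'm assuming &lt;code&gt;models()&lt;/code&gt; resolves to an array of model-name strings, and the &lt;code&gt;ai&lt;/code&gt; object is passed in rather than read from &lt;code&gt;window&lt;/code&gt; so nothing here assumes a shipping browser:&lt;/p&gt;

```javascript
// Request a model only if the user hasn't already enabled it.
// `ai` is the window.ai object from the Browser.AI prototype.
async function ensureModel(ai, name) {
  const enabled = await ai.permissions.models();
  if (enabled.includes(name)) return true; // already granted, no prompt needed
  // silent: false so the user actually sees the permission prompt
  return ai.permissions.request({ model: name, silent: false });
}
```

&lt;p&gt;In a page this would be &lt;code&gt;await ensureModel(window.ai, 'llama3.2')&lt;/code&gt;.&lt;/p&gt;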

&lt;h2&gt;
  
  
  Model API
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Model API&lt;/strong&gt; allows developers to connect to a specific AI model and retrieve information.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Get model info&lt;/strong&gt;:
&lt;code&gt;window.ai.model.info({ model: 'name' })&lt;/code&gt;
Provides details about the specified model.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Connect to model&lt;/strong&gt;:
&lt;code&gt;window.ai.model.connect({ model: 'name' })&lt;/code&gt;
Establishes a session with the model, enabling further interaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Session API
&lt;/h2&gt;

&lt;p&gt;Once connected to a model, the &lt;strong&gt;Session API&lt;/strong&gt; lets you interact with the model via chat or embeddings.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Chat&lt;/strong&gt;:
&lt;code&gt;session.chat({ messages: [{ role: 'user', content: 'hello' }] })&lt;/code&gt;
Sends a message to the AI model and gets a response.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Embed&lt;/strong&gt;:
&lt;code&gt;session.embed({ input: 'text to embed' })&lt;/code&gt;
Generates embeddings from the input text.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;More coming soon&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
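&lt;p&gt;Chat appears in the code example below; embeddings deserve a quick illustration too, since their usual job is similarity search. The math is standard cosine similarity. The &lt;code&gt;.embedding&lt;/code&gt; field in the commented usage is a placeholder, since the exact response shape is defined by the prototype:&lt;/p&gt;

```javascript
// Cosine similarity between two embedding vectors: the usual way to
// compare the outputs of an embed call.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical usage — treat `.embedding` as a placeholder field name:
//   const a = await session.embed({ input: "cats" });
//   const b = await session.embed({ input: "kittens" });
//   cosineSimilarity(a.embedding, b.embedding); // near 1 for similar text
```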

&lt;h1&gt;
  
  
  Simple Code Example
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;permissions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;llama3.2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;silent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;llama3.2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Next Steps
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Try the Prototype&lt;/strong&gt;: Download the Browser.AI prototype and integrate AI features like chat or text completion into your projects. You’ll find instructions and examples in the README. Hands-on experimentation with on-device models will show you how they can enhance your apps.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Share Feedback on the WICG Proposal&lt;/strong&gt;: Check out the &lt;a href="https://github.com/WICG/proposals/issues/178" rel="noopener noreferrer"&gt;WICG proposal&lt;/a&gt; for Browser.AI and share your thoughts on the technical implementation, privacy, and cross-browser compatibility. Your feedback will help refine the API’s design, so weigh in on key areas like model permissions and session handling.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Contribute to the Project&lt;/strong&gt;: If you’re a developer, contribute to Browser.AI on GitHub — improve the code, suggest features, or fix bugs. Non-coders can help by testing the prototype, reporting issues, or suggesting ideas for improving user control. Every contribution counts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;AI is transforming how we build and interact with web applications, but the costs and dependencies on third-party providers can be a huge barrier for small teams and indie developers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafjdfpbhcfxvfov5ur2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafjdfpbhcfxvfov5ur2k.png" alt="captionless image" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On-device models offer a way out — a chance to cut costs, reduce latency, and give users more control over their data. Browser.AI is just one approach to making this vision a reality. By providing a simple, standardized API, we can empower developers to scale their AI-powered features without relying on expensive services, all while prioritizing performance and privacy for users.&lt;/p&gt;

&lt;p&gt;This is fertile ground. There’s a ton of potential here, and with feedback from the community, we can build something that works for both developers and users. Whether you’re just getting into AI or trying to scale your product without breaking the bank, jump in, test out the prototype, and let me know what you think. Let’s make this happen together.&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/WICG/proposals/issues/178" rel="noopener noreferrer"&gt;WICG/proposals&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://browser.christophermckenzie.com/" rel="noopener noreferrer"&gt;browser.christophermckenzie.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kenzic/browser.ai" rel="noopener noreferrer"&gt;GitHub - kenzic/browser.ai&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To stay connected and share your journey, feel free to reach out through the following channels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  👨‍💼 &lt;a href="https://www.linkedin.com/in/christopherjmckenzie/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;: Join me for more insights into AI development and tech innovations.&lt;/li&gt;
&lt;li&gt;  🤖 &lt;a href="https://www.linkedin.com/groups/13176499/" rel="noopener noreferrer"&gt;JavaScript + AI&lt;/a&gt;: Join the JavaScript and AI group and share what you’re working on.&lt;/li&gt;
&lt;li&gt;  💻 &lt;a href="https://github.com/kenzic" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;: Explore my projects and contribute to ongoing work.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>browser</category>
      <category>javascript</category>
      <category>api</category>
    </item>
    <item>
      <title>Getting Started: Build a Model Context Protocol Server</title>
      <dc:creator>Chris McKenzie</dc:creator>
      <pubDate>Mon, 02 Jun 2025 15:48:02 +0000</pubDate>
      <link>https://forem.com/kenzic/getting-started-build-a-model-context-protocol-server-2bm7</link>
      <guid>https://forem.com/kenzic/getting-started-build-a-model-context-protocol-server-2bm7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Streamlining LLM Integration: Building a JavaScript MCP Server for Hacker News&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Integrating LLMs into real products still feels messier than it should be. Instead of clean patterns or shared infrastructure, most devs end up hacking together one-off code to connect models with APIs, databases, or business logic - none of it reusable, scalable, or easy to maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Model Context Protocol (MCP)&lt;/strong&gt; aims to fix that. It's a minimal, open standard from Anthropic that gives you a structured, predictable way to expose tools, data, and prompts to language models. Instead of building a new integration layer for every app or agent, MCP gives you a common interface - and it already works with applications like Claude Desktop, Cursor, and Windsurf.&lt;/p&gt;

&lt;p&gt;In this article, we'll walk through the basic steps of building a simple MCP server in JavaScript that will expose two tools for interacting with Hacker News.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR on Model Context Protocol
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If you're already familiar with MCP, you can skip ahead to the "Demo" section.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Model Context Protocol (&lt;strong&gt;MCP&lt;/strong&gt;) is an open standard for how applications provide data and tools to an LLM. Think of it like an electrical outlet for AI - one universal way to connect models to content, business tools, and standard prompts. Instead of custom integrations for every data source, MCP enables secure, scalable, and efficient AI connections, so developers can focus on building amazing products instead of maintaining endless adapters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clients
&lt;/h3&gt;

&lt;p&gt;MCP clients sit between your app (like a chatbot or AI-powered IDE) and an MCP server. They handle the connection, manage discovery and security, and translate the app's requests into something the server can understand. Popular clients include Claude Desktop, Cursor, and Windsurf. You can find the full list in the docs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Servers
&lt;/h3&gt;

&lt;p&gt;An MCP server connects to your data - files, APIs, databases - and exposes it to the model through a standard interface. It can provide &lt;strong&gt;tools&lt;/strong&gt; (functions the model can call), &lt;strong&gt;resources&lt;/strong&gt; (data the model can read), and &lt;strong&gt;prompts&lt;/strong&gt; (predefined inputs to guide model behavior).&lt;/p&gt;

&lt;p&gt;Under the hood, both clients and servers use JSON-RPC 2.0 for communication and support multiple transport layers like stdio and HTTP via Server-Sent Events (SSE).&lt;/p&gt;
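&lt;p&gt;Concretely, a tool invocation on the wire is just a JSON-RPC 2.0 request. A sketch of the payload a client sends (the &lt;code&gt;tools/call&lt;/code&gt; method and param names follow the MCP specification; the tool name and item id here are hypothetical):&lt;/p&gt;

```javascript
// Build a JSON-RPC 2.0 request for calling an MCP tool.
function buildToolCall(id, toolName, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: toolName, arguments: args },
  };
}

// What a client would send over stdio or SSE (hypothetical HN item id):
const request = buildToolCall(1, "story-details", { id: 43000000 });
// JSON.stringify(request) is the actual wire payload.
```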

&lt;p&gt;You can browse the &lt;a href="https://github.com/modelcontextprotocol/servers" rel="noopener noreferrer"&gt;server implementations on GitHub&lt;/a&gt;, or dive deeper with the &lt;a href="https://medium.com/@kenzic/getting-started-model-context-protocol-e0a80dddff80" rel="noopener noreferrer"&gt;Getting Started guide&lt;/a&gt; or the &lt;a href="https://modelcontextprotocol.io/introduction" rel="noopener noreferrer"&gt;official docs&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Concepts
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Tools&lt;/strong&gt; - Functions exposed to the model - can be called by the LLM&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Resources&lt;/strong&gt; - Files, APIs, or datasets the LLM can access&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Prompts&lt;/strong&gt; - Predefined templates that help the LLM act consistently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this demo, we'll focus on tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo: Build a Hacker News MCP Server
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkv64geynynzh92s33yct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkv64geynynzh92s33yct.png" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's build an MCP server that exposes two tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;list-top-stories&lt;/code&gt; – returns the top 10 Hacker News posts&lt;/li&gt;
&lt;li&gt;&lt;code&gt;story-details&lt;/code&gt; – fetches detailed info on a selected post&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We'll use &lt;code&gt;@modelcontextprotocol/sdk&lt;/code&gt; and wire it up with a simple HN API client.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I've provided a starting point which handles some of the boilerplate of getting a project up and running, as well as a simple Hacker News API Client so we can fetch the data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 1: Setup Starter Project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone the repo&lt;br&gt;
&lt;code&gt;git clone git@github.com:kenzic/mcp-hacker-news-demo.git&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The main branch contains the final project, so we'll switch to the demo branch for our starting point.&lt;br&gt;
&lt;code&gt;cd mcp-hacker-news-demo &amp;amp;&amp;amp; git checkout -b demo origin/demo&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, let's install the dependencies&lt;br&gt;
&lt;code&gt;npm i&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that we have the boilerplate set up, we're ready to dive in. Open the project in your editor, and notice two files: &lt;code&gt;index.js&lt;/code&gt; and &lt;code&gt;hacker-client.js&lt;/code&gt;. The client is complete as is, but I encourage you to build on top of it to add new functionality. The &lt;code&gt;index.js&lt;/code&gt; file is where we'll be doing all our coding.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Build Our Server
&lt;/h3&gt;

&lt;p&gt;For this, we're going to use &lt;code&gt;McpServer&lt;/code&gt; from the &lt;a href="https://github.com/modelcontextprotocol/typescript-sdk" rel="noopener noreferrer"&gt;modelcontextprotocol SDK&lt;/a&gt;. The SDK supports resources, tools, and prompts; however, for this demo we'll only need tools.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 If you need more control over the server implementation you can use the &lt;a href="https://github.com/modelcontextprotocol/typescript-sdk?tab=readme-ov-file#low-level-server" rel="noopener noreferrer"&gt;Server class&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At the top add the following imports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;McpServer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@modelcontextprotocol/sdk/server/mcp.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a new instance of the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;McpServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hacker News&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1.0.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;capabilities&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Create Tools
&lt;/h3&gt;

&lt;p&gt;In this demo we'll be creating two tools: &lt;strong&gt;List Top Stories&lt;/strong&gt; and &lt;strong&gt;Story Details&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;List Top Stories&lt;/strong&gt; provides the MCP client with a list of the top 10 stories from Hacker News.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Story Details&lt;/strong&gt; provides in-depth details of the Hacker News post, including the contents of the URL the post links to.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create a tool, simply add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;TOOL_NAME&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;toolParam&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// logic&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;RETURN TEXT&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As defined above, this tool isn't very helpful.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 For both tools we'll be using the content return type of text; however, the &lt;a href="https://modelcontextprotocol.io/specification/2025-03-26" rel="noopener noreferrer"&gt;specification&lt;/a&gt; supports three content types: text, image, and resource. Additionally, you can return multiple content objects - even of different types.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;List Top Stories Tool:&lt;/strong&gt; Let's update the tool to support listing the top stories on Hacker News. We'll start by giving it an easy-to-understand name: &lt;code&gt;list-top-stories&lt;/code&gt;. Next, we'll want to fetch the top stories and format the results as text:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;topStories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listTopStories&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formattedTopStories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;formatTopStories&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;topStories&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, we'll return the results.&lt;/p&gt;

&lt;p&gt;The final code should look something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;list-top-stories&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;topStories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listTopStories&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formattedTopStories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;formatTopStories&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;topStories&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;formattedTopStories&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great! Now that we have a list of the top ten stories on Hacker News, let's build a tool that will allow us to retrieve the details of the story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Story Detail Tool:&lt;/strong&gt; We'll create a new tool called &lt;code&gt;story-details&lt;/code&gt;, then use the &lt;code&gt;getStoryDetails&lt;/code&gt; method on the Hacker News client to fetch and format the story. But how does it know which story to fetch? MCP tools let you define params: the second argument to &lt;code&gt;tool&lt;/code&gt; takes an object mapping param names to their types. We'll add the param &lt;code&gt;{ id: z.number() }&lt;/code&gt; so the client can pass the article id to the tool. Your final code should look something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;story-details&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;number&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;story&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getStoryDetails&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formattedStory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;formatStoryDetail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;story&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;formattedStory&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Transports:&lt;/strong&gt; We're almost done! Now we need to provide a way for our server to communicate with the client. As of this writing, the MCP SDK supports two modes of communication: Standard Input/Output (stdio) and Server-Sent Events (SSE). For this demo we'll use stdio.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 See the docs for more info on the protocols, or how to define a Custom Protocol.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Start by importing &lt;code&gt;StdioServerTransport&lt;/code&gt; at the top of your file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;StdioServerTransport&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@modelcontextprotocol/sdk/server/stdio.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we'll create an instance of StdioServerTransport and connect our server. Drop the following at the bottom of the &lt;code&gt;index.js&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StdioServerTransport&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;transport&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Boom&lt;/strong&gt; - you've created a valid MCP server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting our server to a client
&lt;/h2&gt;

&lt;p&gt;That's it - we've got a working MCP server. Now we just need to tell a client where to find it. If you're using Claude Desktop, follow the steps below to connect it. If not, skip ahead to the Debugging section to inspect your server using the MCP Inspector tool.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ We're running this server locally. You may notice most servers are configured to run using &lt;code&gt;npx&lt;/code&gt;, but we'll be using &lt;code&gt;node&lt;/code&gt; to run the server locally. Because of this, you'll need to make sure you've run &lt;code&gt;npm install&lt;/code&gt; for the dependencies (which you already did if you followed the setup instructions).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Claude Desktop:&lt;/strong&gt; In Claude, go to your settings, open the Developer tab, and click Edit Config. Once there, add the "hackernews" config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hackernews"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="s2"&gt;"/path-to-folder/mcp-hacker-news-demo/index.js"&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Claude, then enter the prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"tell me about the top stories on hacker news."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If everything worked, you should (after granting permission to use the tool) see a list of stories provided by the server. Next, you can ask it to tell you more about a specific post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging &amp;amp; Dev Tools
&lt;/h2&gt;

&lt;p&gt;Knowing how to fix something when it's not working as expected is as important as knowing how to build it. I'll cover a few methods for debugging, though this list is not exhaustive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqnlkx5c47tz1d58gvn2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqnlkx5c47tz1d58gvn2.png" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Logging
&lt;/h3&gt;

&lt;p&gt;Adding logging to your server is straightforward. To log a message, simply call &lt;code&gt;server.server.sendLoggingMessage&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 You might notice we type server twice. This is because &lt;code&gt;McpServer&lt;/code&gt; does not expose a method for logging, so we must rely on the low-level &lt;a href="https://github.com/modelcontextprotocol/typescript-sdk?tab=readme-ov-file#low-level-server" rel="noopener noreferrer"&gt;Server&lt;/a&gt; implementation, which is accessible on the &lt;code&gt;server&lt;/code&gt; property.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Creating logs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;list-top-stories&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;topStories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listTopStories&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formattedTopStories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;formatTopStories&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;topStories&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="c1"&gt;// Add logging&lt;/span&gt;
        &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sendLoggingMessage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;info&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Fetched &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;topStories&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; top stories.`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;formattedTopStories&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sendLoggingMessage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Failed to fetch stories: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Inform the client about the failure&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Reading Logs:&lt;/strong&gt; If you're using the Claude Desktop app, you can view all event messages in the logs folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 20 &lt;span class="nt"&gt;-F&lt;/span&gt; ~/Library/Logs/Claude/mcp-server-hackernews.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  MCP Inspector
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://modelcontextprotocol.io/docs/tools/inspector" rel="noopener noreferrer"&gt;MCP Inspector&lt;/a&gt; is a simple GUI devtool for testing and debugging MCP servers.&lt;/p&gt;

&lt;p&gt;To run the inspector on your Hacker News server, simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @modelcontextprotocol/inspector node index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it's ready, open the URL in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bqtyqu97eiitnogxdvj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bqtyqu97eiitnogxdvj.jpg" alt="Image description" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect and click &lt;strong&gt;List tools&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59bkvug5ndi34qn3428w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59bkvug5ndi34qn3428w.jpg" alt="Image description" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;list-top-stories&lt;/strong&gt; and then &lt;strong&gt;Run Tool&lt;/strong&gt;. Grab an ID from the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy36pb63jh6jf3ez5zcsm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy36pb63jh6jf3ez5zcsm.jpg" alt="Image description" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;story-details&lt;/strong&gt;, then drop in the ID and click &lt;strong&gt;Run Tool&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbae62fz6sf4owmzms98l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbae62fz6sf4owmzms98l.jpg" alt="Image description" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If everything worked, you should see the results, along with the history of calls. You can expand the history items to see a detailed view of each request and response.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Want to keep building?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Add support for Ask HN and Polls:&lt;/strong&gt; Right now we're only pulling top stories, but Hacker News has other types of content, like Ask HN posts and polls. Add endpoints to fetch those and expose them as new tools - same pattern, just different data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Let the user choose how many stories to list:&lt;/strong&gt; Instead of hardcoding 10 stories, update the &lt;code&gt;list-top-stories&lt;/code&gt; tool to accept a &lt;code&gt;count&lt;/code&gt; parameter. It's a small change, but it makes the tool more flexible and helps illustrate how you can pass arguments into your MCP tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explore Resources and Prompts:&lt;/strong&gt; This demo focused on tools, but MCP also supports resources (like documents or database queries) and prompts (predefined input templates). Try exposing a local file or API response as a resource, or create a prompt that wraps HN data in a consistent format the model can use.&lt;/li&gt;
&lt;/ul&gt;
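&lt;p&gt;As a starting point for the second idea, here's a rough sketch of how you might validate a user-supplied &lt;code&gt;count&lt;/code&gt; before passing it along. The &lt;code&gt;clampCount&lt;/code&gt; helper, its defaults, and the max of 30 are my own assumptions, not part of the demo repo; the registration comment mirrors the tool pattern used above.&lt;/p&gt;

```javascript
// Sketch: clamp a user-supplied story count to a sane range.
// clampCount and its defaults are hypothetical, not part of the demo code.
function clampCount(count, fallback = 10, max = 30) {
  // Fall back for missing, non-integer, or non-positive input
  if (!Number.isInteger(count) || count < 1) return fallback;
  return Math.min(count, max);
}

// The registration could then look roughly like (same pattern as the tools above):
// server.tool("list-top-stories", { count: z.number().optional() }, async ({ count }) => {
//   const topStories = await client.listTopStories(clampCount(count));
//   ...
// });
```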

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;MCP is one of those things that just makes sense once you try it. Instead of hacking together context with custom glue code, you get a simple, predictable way to expose tools, data, and prompts to language models - without reinventing the wheel every time.&lt;/p&gt;

&lt;p&gt;This demo is intentionally simple, but the same approach scales. Want to let a model trigger internal workflows? Query production data? Run functions securely? MCP gives you the interface without locking you into someone else's stack.&lt;/p&gt;

&lt;p&gt;If you're building anything serious with LLMs, it's worth getting familiar with the protocol now. The ecosystem's moving fast - and it's a lot easier to build the right abstractions when you start with the right foundation.&lt;/p&gt;




&lt;p&gt;To stay connected and share your journey, feel free to reach out through the following channels:&lt;br&gt;
👨‍💼 &lt;a href="https://www.linkedin.com/in/christopherjmckenzie/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;: Join me for more insights into AI development and tech innovations.&lt;br&gt;
🤖 &lt;a href="https://www.linkedin.com/groups/13176499/" rel="noopener noreferrer"&gt;JavaScript + AI&lt;/a&gt;: Join the JavaScript and AI group and share what you're working on.&lt;br&gt;
💻 &lt;a href="https://github.com/kenzic" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;: Explore my projects and contribute to ongoing work.&lt;br&gt;
📚 &lt;a href="https://medium.com/@kenzic" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;: Follow my articles for more in-depth discussions on LangSmith, LangChain, and other AI technologies.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Transform Your Workflow with LangSmith Hub: A Game-Changer for JavaScript Engineers</title>
      <dc:creator>Chris McKenzie</dc:creator>
      <pubDate>Thu, 05 Sep 2024 01:15:32 +0000</pubDate>
      <link>https://forem.com/kenzic/transform-your-workflow-with-langsmith-hub-a-game-changer-for-javascript-engineers-4a8l</link>
      <guid>https://forem.com/kenzic/transform-your-workflow-with-langsmith-hub-a-game-changer-for-javascript-engineers-4a8l</guid>
      <description>&lt;p&gt;Are scattered AI prompts slowing down your development process? Discover how LangChain Hub can revolutionize your workflow, making prompt management seamless and efficient for JavaScript engineers.&lt;/p&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Imagine managing a project with crucial information scattered across files. Frustrating, right? This is the reality for developers dealing with AI prompts. LangChain Hub centralizes prompt management, transforming workflows just as GitHub did for code collaboration.&lt;/p&gt;

&lt;p&gt;LangChain Hub provides an intuitive interface for uploading, browsing, pulling, collaborating, versioning, and organizing prompts. This not only streamlines workflows but also fosters collaboration and innovation, making it an essential tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features and Benefits
&lt;/h2&gt;

&lt;p&gt;LangChain Hub is a powerful tool designed for JavaScript developers to centralize, manage, and collaborate on AI prompts efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Community-Driven Innovation
&lt;/h3&gt;

&lt;p&gt;Explore prompts from other developers, gaining new ideas and solutions. Learn new techniques, improve existing prompts, and foster a collaborative environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Centralized Prompt Management
&lt;/h3&gt;

&lt;p&gt;LangChain Hub brings all your AI prompts under one roof, eliminating the chaos of scattered files and fragmented storage. With everything neatly organized in one place, managing your prompts has never been easier.&lt;/p&gt;

&lt;h3&gt;
  
  
  User-Friendly Interface
&lt;/h3&gt;

&lt;p&gt;Navigating LangChain Hub is a breeze, thanks to its intuitive design. Uploading, browsing, and managing your prompts is straightforward, boosting your productivity and minimizing the time spent on learning the tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collaboration and Sharing
&lt;/h3&gt;

&lt;p&gt;LangChain Hub makes it simple to share and collaborate on prompts with your team. This seamless sharing fosters innovation and collective problem-solving, making teamwork more efficient and effective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Version Control
&lt;/h3&gt;

&lt;p&gt;Never lose track of your prompt iterations with LangChain Hub's version control. You can easily revert to previous versions or monitor changes over time, ensuring you always have access to the best version of your prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced Search and Filtering
&lt;/h3&gt;

&lt;p&gt;Find the prompts you need in no time with advanced search and filtering options. You can filter prompts by use-case, type, language, and model, ensuring you quickly access the most relevant resources. These features save you time and enhance your workflow, making prompt management more efficient and tailored to your specific project needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customization and Flexibility
&lt;/h3&gt;

&lt;p&gt;Tailor prompts to your specific project requirements effortlessly. LangChain Hub's customization options ensure your prompts fit seamlessly into your development process, adapting to your unique needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using in Your Project
&lt;/h2&gt;

&lt;p&gt;Let's set up a project to use prompt templates in LangChain Hub to highlight its value.&lt;br&gt;
We'll start by using the demo project I created for the article &lt;a href="https://medium.com/@kenzic/getting-started-langsmith-for-javascript-llm-apps-0bb8059a83ee" rel="noopener noreferrer"&gt;Getting Started: LangSmith for JavaScript LLM Apps&lt;/a&gt;. While I encourage you to read that article, it's not required to follow along.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone &lt;a href="https://github.com/kenzic/simple-langsmith-demo" rel="noopener noreferrer"&gt;repo&lt;/a&gt;: &lt;code&gt;git clone git@github.com:kenzic/simple-langsmith-demo.git&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;cd simple-langsmith-demo&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Checkout the demo branch: &lt;code&gt;git checkout -b langchain-hub-demo origin/langchain-hub-demo&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Install dependencies: &lt;code&gt;yarn&lt;/code&gt; or &lt;code&gt;npm i&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Sign up for &lt;a href="https://smith.langchain.com/" rel="noopener noreferrer"&gt;LangSmith Account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Get a &lt;a href="https://smith.langchain.com/settings" rel="noopener noreferrer"&gt;LangSmith API Key&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Get &lt;a href="https://platform.openai.com/docs/overview" rel="noopener noreferrer"&gt;OpenAI API Key&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Move &lt;code&gt;.env.example&lt;/code&gt; to &lt;code&gt;.env&lt;/code&gt; and fill in the following values:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LANGCHAIN_PROJECT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"langsmith-demo"&lt;/span&gt; &lt;span class="c"&gt;# Name of your LangSmith project&lt;/span&gt;
&lt;span class="nv"&gt;LANGCHAIN_TRACING_V2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="c"&gt;# Enable advanced tracing features&lt;/span&gt;
&lt;span class="nv"&gt;LANGCHAIN_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-api-key&amp;gt; &lt;span class="c"&gt;# Your LangSmith API key&lt;/span&gt;

&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-openai-api-key&amp;gt; &lt;span class="c"&gt;# Your OpenAI API key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The demo app responds to the question &lt;strong&gt;"What is the capital of France?"&lt;/strong&gt; in the voice of Mr. Burns from The Simpsons. To accomplish this, we use the following prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided.
The tone of your response should be that of The Simpsons' Mr. Burns.

&amp;lt;context&amp;gt;
{context}
&amp;lt;/context&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The prompt is currently hardcoded in the app, which is manageable for now. However, in a real-world application, this approach can become difficult to manage. As we add more steps and multiple prompts to the chain, it can quickly become confusing and hard to maintain. Therefore, let's move our prompt to LangChain Hub.&lt;/p&gt;

&lt;p&gt;If you followed the steps above, you should have a LangSmith account.&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://smith.langchain.com/hub" rel="noopener noreferrer"&gt;smith.langchain.com/hub&lt;/a&gt; and click "New Prompt."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu9atcy30klsy6t388yt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu9atcy30klsy6t388yt.png" alt="LangChain Hub Overview screen" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll then want to give your prompt a name and set its visibility (private by default), description, readme, use case, language, and model. Note: the owner is "&lt;a class="mentioned-user" href="https://dev.to/kenzic"&gt;@kenzic&lt;/a&gt;"; this will be different for you. See the screenshot for values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ymtbkprlszrulrtms5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ymtbkprlszrulrtms5v.png" alt="LangChain Hub Create New Prompt screen" width="800" height="1221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you've created your prompt, you'll want to select the prompt type. For this task, we'll select "Chat Prompt".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug1064c2flznu97c06ko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug1064c2flznu97c06ko.png" alt="LangChain Hub Select Prompt Type screen" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a "System" message with the value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided.
The tone of your response should be that of The Simpsons' Mr. Burns.

&amp;lt;context&amp;gt;
{context}
&amp;lt;/context&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a "Human" message with the value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Please address the following inquiry:\n{input}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
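&lt;p&gt;The &lt;code&gt;{context}&lt;/code&gt; and &lt;code&gt;{input}&lt;/code&gt; placeholders are template variables that LangChain fills in when the prompt is invoked. Conceptually this is simple string substitution, which we can sketch; the &lt;code&gt;fillTemplate&lt;/code&gt; helper below is illustrative only, not LangChain's actual implementation.&lt;/p&gt;

```javascript
// Illustrative only: LangChain's real templating is richer than this.
// Replaces each {name} placeholder with the matching value from vars,
// leaving unmatched placeholders untouched.
function fillTemplate(template, vars) {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}
```

&lt;p&gt;For example, &lt;code&gt;fillTemplate("Please address the following inquiry:\n{input}", { input: "What is the capital of France?" })&lt;/code&gt; substitutes the question into the Human message.&lt;/p&gt;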



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizeaxiwhewedx96wa54b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizeaxiwhewedx96wa54b.png" alt="LangChain Hub Prompt Playground" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we commit this, we can test it out in the playground. To the right of the message chain, you will notice the section "Inputs" with the variables we specified in the messages. To confirm it's working as expected, I tested with the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;context:&lt;/strong&gt; The capital of France is Springfield. It was Paris but changed in 2024.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;input:&lt;/strong&gt; What is the capital of France&lt;/p&gt;

&lt;p&gt;Once you have the inputs defined, go to Settings and select the model we're testing against: GPT-3.5-turbo. For this to work, you'll need to add your OpenAI API key by clicking the "Secrets &amp;amp; API Keys" button. Now we're ready to test. Click the "Start" button and watch it generate the output. You should see something like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ah, yes, the capital of France, or should I say, Springfield! Paris may have been the capital in the past, but as of 2024, Springfield reigns supreme as the new capital of France. A change of this magnitude surely raises questions and eyebrows, but rest assured, the decision has been made and Springfield now holds the title of the capital of France. How utterly delightful!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once we're happy with our prompt, we need to commit it. Simply click the "Commit" button!&lt;/p&gt;

&lt;p&gt;Great, now that we have a finished prompt, we'll want to update our code to reference it instead of the hardcoded prompt template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F777juwz47xfed01uxq38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F777juwz47xfed01uxq38.png" alt="coded prompt template" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, we need to import the &lt;code&gt;hub&lt;/code&gt; module so we can pull our template into our code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;hub&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;langchain/hub&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's delete the ChatPromptTemplate in the code and replace it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;answerGenerationChainPrompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;hub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pull&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[YOURORG]/mr-burns-answer-prompt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: You can delete the &lt;code&gt;ANSWER_CHAIN_SYSTEM_TEMPLATE&lt;/code&gt; variable too.&lt;/p&gt;

&lt;p&gt;Finally, let's test it out! Run &lt;code&gt;yarn start&lt;/code&gt; to execute the script. If everything works properly, you will see output in the voice of Mr. Burns informing you that the capital of France is Paris.&lt;/p&gt;

&lt;p&gt;If you want to take it a step further, you can lock your prompts by the version. To do this, simply append a colon and the version number to the end of the name like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;answerGenerationChainPrompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;hub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pull&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[YOURORG]/mr-burns-answer-prompt:[YOURVERSION]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// for me it looks like:&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;answerGenerationChainPrompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;hub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pull&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;kenzic/mr-burns-answer-prompt:d123dc92&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;We've explored how LangChain Hub centralizes prompt management, enhances collaboration, and integrates into your workflow. To improve your efficiency with LangChain Hub, consider diving deeper into the customization and integration possibilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;LangChain Hub is more than a tool; it's a catalyst for innovation and collaboration in AI development. Embrace this revolutionary platform and elevate your JavaScript LLM applications to new heights.&lt;/p&gt;

&lt;p&gt;Throughout this guide, we tackled how to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Centralize and manage your AI prompts effectively using LangChain Hub.&lt;/li&gt;
&lt;li&gt;Enhance collaboration and version control within your development team.&lt;/li&gt;
&lt;li&gt;Integrate prompt management seamlessly into your existing development workflows.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Keep building and experimenting, and I'm excited to see how you'll push the boundaries of what's possible with AI and LangChain Hub!&lt;/p&gt;




&lt;p&gt;To stay connected and share your journey, feel free to reach out through the following channels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👨‍💼 &lt;a href="https://www.linkedin.com/in/christopherjmckenzie/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;: Join me for more insights into LLM development and tech innovations.&lt;/li&gt;
&lt;li&gt;💻 &lt;a href="https://github.com/kenzic" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;: Explore my projects and contribute to ongoing work.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>langchain</category>
      <category>promptengineering</category>
      <category>ai</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Convert YouTube videos into Mind Maps using ChatGPT</title>
      <dc:creator>Chris McKenzie</dc:creator>
      <pubDate>Mon, 25 Sep 2023 01:24:25 +0000</pubDate>
      <link>https://forem.com/kenzic/convert-youtube-videos-into-mind-maps-using-chatgpt-2lki</link>
      <guid>https://forem.com/kenzic/convert-youtube-videos-into-mind-maps-using-chatgpt-2lki</guid>
      <description>&lt;p&gt;As a visual person I love Mind Maps. They’re a great way to visually organize complex ideas.&lt;/p&gt;

&lt;p&gt;This got me thinking — Is it possible to get ChatGPT to generate a mind map highlighting the key concepts in a Youtube Video? Yes!&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;You will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ChatGPT 4&lt;/li&gt;
&lt;li&gt;VoxScript plugin (search ChatGPT plugin store for “VoxScript”)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mermaid.live/" rel="noopener noreferrer"&gt;https://mermaid.live/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Mermaid
&lt;/h2&gt;

&lt;p&gt;Mermaid is a JavaScript-based diagramming and charting tool that uses Markdown-inspired text definitions and a renderer to create and modify complex diagrams. The main purpose of Mermaid is to help documentation catch up with development. mermaid.live renders Mermaid’s plain-text format into visuals.&lt;/p&gt;

&lt;p&gt;That’s about as much as you need to know to accomplish the task, but if you’re interested in learning more about the format read their &lt;a href="https://mermaid.js.org/" rel="noopener noreferrer"&gt;docs&lt;/a&gt;.&lt;/p&gt;
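&lt;p&gt;The mindmap format is just an indentation-based outline, so it’s also easy to generate programmatically. Here’s a minimal sketch in JavaScript (the &lt;code&gt;toMindmap&lt;/code&gt; helper is hypothetical, not part of Mermaid):&lt;/p&gt;

```javascript
// Hypothetical helper: turn a nested outline object into Mermaid
// "mindmap" text. Mermaid infers the hierarchy from indentation,
// so we indent two spaces per level under a root node.
function toMindmap(root, children) {
  const lines = ["mindmap", `  root((${root}))`];
  const walk = (nodes, depth) => {
    for (const [label, kids] of Object.entries(nodes)) {
      lines.push("  ".repeat(depth) + label);
      if (kids) walk(kids, depth + 1);
    }
  };
  walk(children, 2);
  return lines.join("\n");
}

const text = toMindmap("mindmap", {
  Origins: { "Long history": null },
  Tools: { "Pen and paper": null, Mermaid: null },
});

console.log(text);
// mindmap
//   root((mindmap))
//     Origins
//       Long history
//     Tools
//       Pen and paper
//       Mermaid
```

Paste the generated text into mermaid.live just as you would the ChatGPT output.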

&lt;h2&gt;
  
  
  VoxScript
&lt;/h2&gt;

&lt;p&gt;VoxScript is a plugin that converts a YouTube URL into a transcript, which ChatGPT can use as context to complete your prompt. This is really handy for a number of tasks, including summarizing long YouTube videos, pulling out key insights, or creating outlines. For this, we’ll be using it to give ChatGPT the context it needs to generate a mind map.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompt
&lt;/h2&gt;

&lt;p&gt;The prompt for this is simple. We start by giving ChatGPT the instruction to read the transcript of a video, extract key concepts, and convert those into Mermaid’s mind map format. To ensure ChatGPT understands this format, we provide an example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3dqfi8gbjrg83qfmqlx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3dqfi8gbjrg83qfmqlx.png" alt="Prompt" width="800" height="827"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or in plain text:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output the following request in Mermaid Live Editor code
targeting the "mindmap" format. Get the entire transcript for
this video: [VIDEO] then extract the key concepts, why they're
important, related information, and make logical connections. If
you're unfamiliar with the mindmap diagram, I'll provide an
example:
"""
mindmap
  root((mindmap))
    Origins
      Long history
      ::icon(fa fa-book)
      Popularisation
        British popular psychology author Tony Buzan
    Research
      On effectiveness&amp;lt;br/&amp;gt;and features
      On Automatic creation
        Uses
            Creative techniques
            Strategic planning
            Argument mapping
    Tools
      Pen and paper
      Mermaid
"""

VIDEO = https://www.youtube.com/watch?v=p9Q5a1Vn-Hk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After VoxScript provides the transcript for context, ChatGPT should return something that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mindmap
  root((Inside OpenAI))
    OpenAI Origins
      ChatGPT &amp;amp; Dall-E
        Monster Hits
        Beat Tech Giants to Market
      Mira Marathi
        Chief Architect behind OpenAI's strategy
        AI that loves Humanity
        Challenges
          Hallucination
          Truthfulness
          Reliability
    AI Capabilities
      Neural Network
        Trained on massive data
        Predict next word in sentence
      ChatGPT
        Different way to understand information
        Intuitive searching vs keyword-based
      Misinformation
        AI can make up things (hallucinations)
        Need for awareness and not blindly relying on AI
    AI &amp;amp; Society
      Relationships with AI
        AI as a reliable tool
        AI's impact on jobs
          New jobs like prompt engineering
      AI's challenges
        Handling misinformation
        Addressing biases
    AI Gold Rush
      Venture Capitalists
        Investing in AI startups
      Blitzscaling
        Prioritizing speed over efficiency
      AI's potential
        Co-pilot for every profession
        Generative AI's broad touch
    Global Perspective
      AI's global impact
        Building industries of the future
      Responsible Innovation
        Need for trusted authority to audit AI systems
    AGI (Artificial General Intelligence)
      Risks
        Potential for human extinction
        Misalignment of goals between AI and humans
      Predictions
        Still far from AGI
        Powerful systems in the future
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simply copy and paste it to &lt;a href="https://mermaid.live/" rel="noopener noreferrer"&gt;mermaid.live&lt;/a&gt; and it will generate your mind map:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lriehom5m82pfkecbxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lriehom5m82pfkecbxh.png" alt="Mind Map of Video" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it! With a plugin and a simple prompt we can create meaningful mind map representations of complex ideas in YouTube videos.&lt;/p&gt;

&lt;p&gt;I encourage you to modify the prompt to see if you get better results and share those in the comments.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>productivity</category>
      <category>mermaid</category>
    </item>
    <item>
      <title>Run Models in the Browser With Transformers.js</title>
      <dc:creator>Chris McKenzie</dc:creator>
      <pubDate>Thu, 21 Sep 2023 21:32:06 +0000</pubDate>
      <link>https://forem.com/kenzic/run-models-in-the-browser-with-transformersjs-7g8</link>
      <guid>https://forem.com/kenzic/run-models-in-the-browser-with-transformersjs-7g8</guid>
      <description>&lt;p&gt;For this tutorial we’ll be using transformers.js from Hugging Face. If you’re not familiar with Hugging Face, I highly recommend you check them out. They’re doing some really cool stuff in the AI space. The TLDR is that they’re the GitHub of ML.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transformers.js
&lt;/h2&gt;

&lt;p&gt;Based on the Python &lt;code&gt;transformers&lt;/code&gt; library, &lt;a href="https://huggingface.co/docs/transformers.js" rel="noopener noreferrer"&gt;transformers.js&lt;/a&gt; is a JavaScript library that allows you to use pretrained models from Hugging Face in a JavaScript environment, such as the browser. While it’s not as robust as the Python library, it’s still pretty powerful.&lt;/p&gt;

&lt;p&gt;The library is versatile, supporting tasks such as Natural Language Processing (NLP), Computer Vision, Audio, and Multimodal processing.&lt;/p&gt;

&lt;p&gt;Under the hood, Transformers.js uses the ONNX Runtime to run the models in the browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  ONNX Runtime
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;ONNX Runtime is a cross-platform machine-learning model accelerator, with a flexible interface to integrate hardware-specific libraries. ONNX Runtime can be used with models from PyTorch, Tensorflow/Keras, TFLite, scikit-learn, and other frameworks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://onnxruntime.ai/" rel="noopener noreferrer"&gt;ONNX&lt;/a&gt;, or Open Neural Network Exchange, was initially started as a collaboration between Microsoft and Facebook. The project is &lt;a href="https://github.com/onnx/onnx" rel="noopener noreferrer"&gt;open-source&lt;/a&gt;, and has grown to offer a universal format for machine learning models, ensuring seamless sharing and deployment across platforms.&lt;/p&gt;

&lt;p&gt;ONNX is what makes it possible to directly run models in a browser. This is pretty insane!&lt;/p&gt;

&lt;h2&gt;
  
  
  Build a Demo
&lt;/h2&gt;

&lt;p&gt;For this tutorial, let’s build a simple app that summarizes text. We’ll start by setting up our environment, keeping it as simple as possible, but you can use any framework you’d like.&lt;/p&gt;

&lt;p&gt;The complete project can be found on &lt;a href="https://github.com/kenzic/run-models-in-the-browser-with-transformers.js-demo/tree/basic-demo" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup project foundation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;transformersjs-demo  
&lt;span class="nb"&gt;cd &lt;/span&gt;transformersjs-demo  
&lt;span class="nb"&gt;touch &lt;/span&gt;index.html styles.css main.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s create a simple HTML page.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;  
&lt;span class="nt"&gt;&amp;lt;html&lt;/span&gt; &lt;span class="na"&gt;lang=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;  
&lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;  
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;charset=&lt;/span&gt;&lt;span class="s"&gt;"UTF-8"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;  
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"viewport"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"width=device-width, initial-scale=1.0"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;  
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;Summary Generator&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;  
    &lt;span class="nt"&gt;&amp;lt;link&lt;/span&gt; &lt;span class="na"&gt;rel=&lt;/span&gt;&lt;span class="s"&gt;"stylesheet"&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"styles.css"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;  
    &lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"main.js"&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"module"&lt;/span&gt; &lt;span class="na"&gt;defer&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/script&amp;gt;&lt;/span&gt;  
&lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;  
&lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;  
    &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"container"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;  
        &lt;span class="nt"&gt;&amp;lt;textarea&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"long-text-input"&lt;/span&gt; &lt;span class="na"&gt;placeholder=&lt;/span&gt;&lt;span class="s"&gt;"Enter your copy here..."&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/textarea&amp;gt;&lt;/span&gt;  
        &lt;span class="nt"&gt;&amp;lt;button&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"generate-button"&lt;/span&gt; &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;  
          &lt;span class="nt"&gt;&amp;lt;span&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"spinner"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;🔄&lt;span class="nt"&gt;&amp;lt;/span&amp;gt;&lt;/span&gt; Generate Summary  
        &lt;span class="nt"&gt;&amp;lt;/button&amp;gt;&lt;/span&gt;  
        &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"output-div"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/div&amp;gt;&lt;/span&gt;  
    &lt;span class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/span&gt;  
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;  
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add a little CSS to make it look nice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;box-sizing&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;border-box&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nt"&gt;body&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;font-family&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Arial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;sans-serif&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#f4f4f4&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;flex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;justify-content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;align-items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100vh&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nc"&gt;.container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#ffffff&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;box-shadow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0px&lt;/span&gt; &lt;span class="m"&gt;0px&lt;/span&gt; &lt;span class="m"&gt;15px&lt;/span&gt; &lt;span class="n"&gt;rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
  &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nt"&gt;textarea&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1px&lt;/span&gt; &lt;span class="nb"&gt;solid&lt;/span&gt; &lt;span class="m"&gt;#ddd&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;resize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;none&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;block&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#3498db&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#ffffff&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;none&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;pointer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;18px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="nd"&gt;:hover&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#2980b9&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="nd"&gt;:disabled&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#b3c2c8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;not-allowed&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="k"&gt;@keyframes&lt;/span&gt; &lt;span class="n"&gt;spin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="err"&gt;0&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rotate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0deg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;  
  &lt;span class="err"&gt;100&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rotate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;360deg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nf"&gt;#spinner&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;none&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;margin-right&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;animation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;spin&lt;/span&gt; &lt;span class="m"&gt;1s&lt;/span&gt; &lt;span class="n"&gt;linear&lt;/span&gt; &lt;span class="n"&gt;infinite&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nf"&gt;#spinner&lt;/span&gt;&lt;span class="nc"&gt;.show&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;inline-block&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;  

&lt;span class="nf"&gt;#output-div&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
  &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;none&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#f9f9f9&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1px&lt;/span&gt; &lt;span class="nb"&gt;solid&lt;/span&gt; &lt;span class="m"&gt;#ddd&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
  &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8nhebjzea94bxnxk9p3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8nhebjzea94bxnxk9p3.png" alt="Basic UI" width="603" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Selecting a Model&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You should take some time to understand the different models, what they’re used for, and their tradeoffs. When picking a model, it’s important to consider not just the task, but also size, speed, and accuracy. For this I recommend the &lt;a href="https://huggingface.co/models" rel="noopener noreferrer"&gt;model hub&lt;/a&gt; on Hugging Face. You can also convert your own models to run in an ONNX runtime; that’s outside the scope of this tutorial, but if you’re curious, check out &lt;a href="https://huggingface.co/docs/transformers.js/custom_usage#convert-your-models-to-onnx" rel="noopener noreferrer"&gt;Convert your models to ONNX&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For this demo we’ll be using the &lt;code&gt;Xenova/t5-small&lt;/code&gt; &lt;a href="https://huggingface.co/Xenova/t5-small" rel="noopener noreferrer"&gt;model&lt;/a&gt;, a small model that can be used for summarization. It is based on &lt;a href="https://huggingface.co/t5-small" rel="noopener noreferrer"&gt;t5-small&lt;/a&gt;, with ONNX weights to make it compatible with transformers.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Adding JavaScript&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now let’s add some JavaScript to make it work. The first thing we want to do is import transformers.js.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;pipeline&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://cdn.jsdelivr.net/npm/@xenova/transformers@2.3.0&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because we’re using modern JavaScript modules, we can import the library directly from a CDN in the browser, but you can also &lt;code&gt;npm install&lt;/code&gt; it and bundle it yourself.&lt;/p&gt;

&lt;p&gt;Next we’ll use the pipeline function to create a summarization pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;summarization&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;  
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;summarization&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// task  &lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Xenova/t5-small&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="c1"&gt;// model  &lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pipeline function takes two arguments, the task and the model. We’re using the &lt;code&gt;summarization&lt;/code&gt; task and the &lt;code&gt;Xenova/t5-small&lt;/code&gt; model, but there are many other combinations you can use. For example, you could use the &lt;code&gt;question-answering&lt;/code&gt; task and the &lt;code&gt;deepset/roberta-base-squad2&lt;/code&gt; model to create a question answering pipeline. Check out the &lt;a href="https://huggingface.co/docs/transformers.js/index#tasks" rel="noopener noreferrer"&gt;Tasks&lt;/a&gt; section for all the different tasks you can use.&lt;/p&gt;
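&lt;p&gt;Since each task/model pair downloads its own weights, it can help to construct each pipeline once and reuse it. Here’s a minimal sketch of that idea; the &lt;code&gt;makePipelineCache&lt;/code&gt; helper is hypothetical (not part of transformers.js), and in the browser the injected factory would simply be the library’s &lt;code&gt;pipeline&lt;/code&gt; function:&lt;/p&gt;

```javascript
// Hypothetical helper: lazily create and cache one pipeline per task/model
// pair, so repeated lookups reuse the already-downloaded weights.
// `createPipeline` is an injectable async factory, e.g. transformers.js's
// `pipeline` function.
function makePipelineCache(createPipeline) {
  const cache = new Map();
  return function getPipeline(task, model) {
    const key = `${task}::${model}`;
    if (!cache.has(key)) {
      // Store the promise itself, so concurrent callers share one load.
      cache.set(key, createPipeline(task, model));
    }
    return cache.get(key);
  };
}
```

&lt;p&gt;With this in place, calling &lt;code&gt;getPipeline('summarization', 'Xenova/t5-small')&lt;/code&gt; twice would only construct (and download) the pipeline once.&lt;/p&gt;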

&lt;p&gt;So far we have a nice UI and we’ve created a summarization pipeline. Now let’s add some code to make it work. Below the import but above the pipeline function, add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;longTextInput&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;long-text-input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;generateButton&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;generate-button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;output-div&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;spinner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;spinner&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we’ll want to add some code to kick off the summarization task when the user clicks the “Generate Summary” button. Below the pipeline function, add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;generateButton&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;click&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;longTextInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;summarization&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
    &lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;summary_text&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ideally, we don’t want our “Generate Summary” button to be clickable until the model is loaded. Depending on the model you pick and the user’s internet speed, this could take anywhere from a few seconds to over a minute. So let’s add a little code to disable the button until the model is loaded.&lt;/p&gt;

&lt;p&gt;Update the button html to include the &lt;code&gt;disabled&lt;/code&gt; attribute.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;button&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"generate-button"&lt;/span&gt; &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we’ll want to enable the button once the model is loaded. Below the pipeline function, add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;generateButton&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;removeAttribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;disabled&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we’ll show a simple loading spinner while the summary is generating, and keep the button disabled during the run. Replace the click handler with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;generateButton&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;click&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
    &lt;span class="nx"&gt;spinner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;classList&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;show&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
    &lt;span class="nx"&gt;generateButton&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setAttribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;disabled&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;longTextInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;summarization&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  

    &lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;summary_text&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="nx"&gt;spinner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;classList&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;show&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
    &lt;span class="nx"&gt;generateButton&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;removeAttribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;disabled&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
    &lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;display&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;block&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Testing It Out&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For this demo, I’m going to summarize the article &lt;a href="https://www.npr.org/sections/health-shots/2023/09/16/1199924303/chatgpt-ai-medical-advice" rel="noopener noreferrer"&gt;‘Dr. Google’ meets its match in Dr. ChatGPT&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpa1g7htmu2n8tlkt4zv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpa1g7htmu2n8tlkt4zv.png" alt="Image description" width="601" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we get a summary of the article, but it’s too short. Let’s update the code to get a longer summary. Pass an options object as the second argument to the summarization call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;summarization&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
    &lt;span class="na"&gt;min_length&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;max_length&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;250&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running it again, we get a longer summary, but the accuracy still leaves something to be desired. Let’s try the larger &lt;code&gt;t5-base&lt;/code&gt; model. Replace the pipeline with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;summarization&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;summarization&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Xenova/t5-base&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7cnhcja9k0ksy5ls6g6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7cnhcja9k0ksy5ls6g6.png" alt="T5 Base Model" width="602" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a little better, but ultimately you’ll need to decide which model is best for your use case.&lt;/p&gt;

&lt;p&gt;Personally, I found &lt;code&gt;Xenova/bart-large-cnn&lt;/code&gt; and &lt;code&gt;Xenova/distilbart-cnn-6-6&lt;/code&gt; produced the best results, but they were the slowest and required downloading over a gigabyte of data. This is something to keep in mind when selecting a model: you’ll want to balance accuracy against speed and size.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;summarization&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;summarization&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Xenova/distilbart-cnn-6-6&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  

&lt;span class="c1"&gt;// or  &lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;summarization&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;summarization&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Xenova/bart-large-cnn&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa82tlqy443iwgc7uj7ry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa82tlqy443iwgc7uj7ry.png" alt="Bart Large CNN Model" width="600" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;There’s a lot of exciting stuff happening right now in the ML space. And the fact that we can run these models in the browser is pretty insane. It’s not perfect, and there are some considerations you need to take into account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  It’s likely going to be slower than a server-side solution&lt;/li&gt;
&lt;li&gt;  Model size matters, and smaller models are generally less accurate&lt;/li&gt;
&lt;li&gt;  Balancing the tradeoffs between speed, size, and accuracy means testing different models to find the right one for your use case&lt;/li&gt;
&lt;li&gt;  You’ll need to consider the security implications of running these models in the browser&lt;/li&gt;
&lt;/ul&gt;
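&lt;p&gt;One way to weigh those tradeoffs is to run the same input through each candidate model and record how long it takes. Here’s a minimal sketch; the &lt;code&gt;compareSummarizers&lt;/code&gt; helper is hypothetical, and in practice each entry would be a transformers.js summarization pipeline:&lt;/p&gt;

```javascript
// Hypothetical harness: run one input through several summarizers and report
// elapsed time alongside each output, so speed can be weighed against quality.
// `summarizers` maps a model name to an async function (e.g. a pipeline).
async function compareSummarizers(summarizers, input) {
  const results = [];
  for (const [model, summarize] of Object.entries(summarizers)) {
    const start = Date.now();
    const summary = await summarize(input);
    results.push({ model, ms: Date.now() - start, summary });
  }
  return results;
}
```

&lt;p&gt;The outputs still need a human eye for quality, but timing each run side by side makes the speed half of the tradeoff concrete.&lt;/p&gt;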

&lt;p&gt;Overall this is a pretty exciting development. And it’s not just the browser. You can run these models in Node.js, Deno, React Native, and even in a serverless environment like Cloudflare Workers.&lt;/p&gt;

&lt;p&gt;I’m excited to see what people build with this technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://huggingface.co/" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://huggingface.co/docs/transformers.js/index" rel="noopener noreferrer"&gt;Transformers.js&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://github.com/kenzic/run-models-in-the-browser-with-transformers.js-demo/tree/basic-demo" rel="noopener noreferrer"&gt;Project Repo&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://onnxruntime.ai/" rel="noopener noreferrer"&gt;ONNX Runtime&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://github.com/onnx/onnx" rel="noopener noreferrer"&gt;ONNX Github Repo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
