<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Fahmi Noor Fiqri</title>
    <description>The latest articles on Forem by Fahmi Noor Fiqri (@fahminlb33).</description>
    <link>https://forem.com/fahminlb33</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F194426%2Feb9a2ebb-886a-4561-acfe-140d436ed513.jpg</url>
      <title>Forem: Fahmi Noor Fiqri</title>
      <link>https://forem.com/fahminlb33</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/fahminlb33"/>
    <language>en</language>
    <item>
      <title>Ritsu-Pi EmailOps: Homelab Management via Email powered by Postmark👓</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Wed, 04 Jun 2025 09:53:20 +0000</pubDate>
      <link>https://forem.com/fahminlb33/ritsu-pi-emailops-homelab-management-via-email-powered-by-postmark-4ean</link>
      <guid>https://forem.com/fahminlb33/ritsu-pi-emailops-homelab-management-via-email-powered-by-postmark-4ean</guid>
      <description>&lt;p&gt;This is a submission for the &lt;a href="https://dev.to/challenges/postmark"&gt;Postmark Challenge: Inbox Innovators&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Ritsu-Pi EmailOps&lt;/strong&gt; is a lightweight homelab automation system that lets you control Docker containers and monitor system health entirely via email.&lt;/p&gt;

&lt;p&gt;You can send natural language commands like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Subject: Check system metrics&lt;br&gt;
Body: Check how much disk space is left on /mnt/data&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And Ritsu-Pi will execute the request, send a structured reply, and leave a secure audit trail — all powered by Postmark’s Inbound Webhook API and an agentic AI using Semantic Kernel.&lt;/p&gt;

&lt;p&gt;It’s designed for privacy-minded or remote homelab users who want a secure, script-free ops layer that works over a protocol they already trust: email.&lt;/p&gt;
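&lt;p&gt;To make the flow concrete, here is a minimal sketch of the inbound-email handling step. The actual service is ASP.NET Core; this TypeScript sketch only illustrates the shape, and the allowlist check and &lt;code&gt;parseCommandEmail&lt;/code&gt; helper are hypothetical. The &lt;code&gt;From&lt;/code&gt;, &lt;code&gt;Subject&lt;/code&gt;, and &lt;code&gt;TextBody&lt;/code&gt; field names follow Postmark's inbound webhook payload.&lt;/p&gt;

```typescript
// Illustrative sketch only; the real project is ASP.NET Core.
// Field names follow Postmark's inbound webhook JSON.
interface InboundEmail {
  From: string;
  Subject: string;
  TextBody: string;
}

// Accept a command only when the sender is on the allowlist,
// mirroring the secure, auditable flow described above.
function parseCommandEmail(
  payload: InboundEmail,
  allowlist: string[],
): { ok: boolean; command?: string } {
  const sender = payload.From.trim().toLowerCase();
  if (!allowlist.includes(sender)) {
    return { ok: false };
  }
  // Subject + body together form the natural-language request
  // handed to the agent.
  return {
    ok: true,
    command: `${payload.Subject}\n${payload.TextBody}`.trim(),
  };
}
```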

&lt;h2&gt;
  
  
  🧪 Demo
&lt;/h2&gt;

&lt;p&gt;A real-time demo of sending and receiving email. The receiving side is a bit slower because I used ProtonMail Bridge and Thunderbird.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/rJEkGNjPWus"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Below are some examples of emails sent via Postmark.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8s45144kuqf9xfeknk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8s45144kuqf9xfeknk6.png" alt="Sample emails"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can send emails asking about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List all available Docker containers&lt;/li&gt;
&lt;li&gt;Start, stop, or restart Docker containers&lt;/li&gt;
&lt;li&gt;Get the current system resource status&lt;/li&gt;
&lt;li&gt;Get historical CPU/memory usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The web API running in a Docker container viewed from Portainer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tk73cnxlaf6nfq2mx2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tk73cnxlaf6nfq2mx2f.png" alt="Running in Docker container"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  💻 Code Repository
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33" rel="noopener noreferrer"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/ritsu-pi-emailops" rel="noopener noreferrer"&gt;
        ritsu-pi-emailops
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Ritsu-Pi EmailOps - Email-driven DevOps for your Raspberry Pi stack.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;📬 Ritsu-Pi EmailOps&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Secure homelab management via email.&lt;/strong&gt;&lt;br&gt;
A Postmark-powered EmailOps module for the &lt;a href="https://github.com/fahminlb33/ritsu-pi" rel="noopener noreferrer"&gt;Ritsu-Pi&lt;/a&gt; homelab automation stack.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;✉️ Send commands from your inbox&lt;/li&gt;
&lt;li&gt;🐳 Manage Docker containers&lt;/li&gt;
&lt;li&gt;🔐 Secure, auditable, and remote&lt;/li&gt;
&lt;li&gt;🧠 Built with ASP.NET Core (.NET 9), EFCore, and SQLite&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🚀 What is Ritsu-Pi EmailOps?&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Ritsu-Pi EmailOps lets you control your homelab Docker containers via email commands, securely processed through Postmark's Inbound Webhook. It’s built for Raspberry Pi and self-hosters who want a zero-UI, minimal-attack-surface, out-of-band control plane.&lt;/p&gt;
&lt;p&gt;This project is a feature module of &lt;a href="https://github.com/yourusername/ritsu-pi" rel="noopener noreferrer"&gt;Ritsu-Pi&lt;/a&gt;, an open-source platform for managing self-hosted services with automation and observability in mind.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;✨ Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;📩 Email-based command interface (via Postmark)&lt;/li&gt;
&lt;li&gt;🐳 Docker management: start, stop, restart, status&lt;/li&gt;
&lt;li&gt;💻 System monitoring: CPU/memory/disk usage from Prometheus&lt;/li&gt;
&lt;li&gt;🔒 Allowlist-based email sender authentication&lt;/li&gt;
&lt;li&gt;🧾 Audit log with command history in SQLite&lt;/li&gt;
&lt;li&gt;📥 Auto-response email replies with execution results&lt;/li&gt;
&lt;li&gt;🖥️ Designed for headless, always-on…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/ritsu-pi-emailops" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  🧩 How I Built It
&lt;/h2&gt;

&lt;p&gt;Ritsu-Pi EmailOps was built as an extension of my homelab automation system, Ritsu-Pi, with the goal of enabling reliable, low-friction system control using just email.&lt;/p&gt;

&lt;p&gt;The core idea: let users manage Docker containers and monitor system health through natural language requests sent via email, with secure execution and auditable replies — no custom apps or SSH needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  🏗️ Architecture Overview
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp27nhynawv4hbu8jnq4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp27nhynawv4hbu8jnq4o.png" alt="Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The app is built with ASP.NET Core Web API and integrates the Postmark API for email delivery and the Gemini API, orchestrated through Semantic Kernel, for agentic AI support.&lt;/p&gt;
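&lt;p&gt;Conceptually, the agentic layer boils down to the model choosing a tool and the server executing it. Below is a minimal TypeScript sketch of that dispatch step; the real implementation uses Semantic Kernel in .NET, and the tool names here are illustrative, not the project's actual identifiers.&lt;/p&gt;

```typescript
// Conceptual sketch of the agentic tool-calling layer. The real
// implementation uses Semantic Kernel in .NET; these tool names
// (listContainers, restartContainer) are illustrative only.
type Tool = (args: Record<string, string>) => string;

const tools: Record<string, Tool> = {
  listContainers: () => "ritsu-pi-api, prometheus, grafana",
  restartContainer: (args) => `restarted ${args.name}`,
};

// The LLM decides which tool to call with which arguments;
// the server only executes tools it has registered.
function dispatch(toolName: string, args: Record<string, string>): string {
  const tool = tools[toolName];
  if (!tool) throw new Error(`unknown tool: ${toolName}`);
  return tool(args);
}
```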

&lt;h3&gt;
  
  
  🧠 Challenges Solved
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No mobile app or SSH required: I can trigger server actions securely via email from anywhere.&lt;/li&gt;
&lt;li&gt;Audit trail built-in: Every command and response is recorded in the inbox and logs.&lt;/li&gt;
&lt;li&gt;Natural language flexibility: Thanks to the agentic layer, I don’t need to remember CLI syntax.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📨 Postmark Integration Challenges
&lt;/h3&gt;

&lt;p&gt;One small challenge was that Postmark requires the &lt;code&gt;From&lt;/code&gt; address to match a verified domain. Since I used kodesiana.com, I couldn’t send emails from other domains during testing. I had to adjust the flow to always use a verified sender identity.&lt;/p&gt;
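&lt;p&gt;In practice this means every reply is built with a fixed, verified &lt;code&gt;From&lt;/code&gt; address. Here is a sketch of the request payload for Postmark's &lt;code&gt;/email&lt;/code&gt; endpoint; the sender address is illustrative, and &lt;code&gt;buildReply&lt;/code&gt; is a hypothetical helper, not code from the project.&lt;/p&gt;

```typescript
// Sketch of the adjusted flow: always send from a single verified
// identity, regardless of who emailed in. The endpoint and header
// follow Postmark's /email API; the sender address is illustrative.
const VERIFIED_SENDER = "ritsu@kodesiana.com";

function buildReply(serverToken: string, to: string, subject: string, textBody: string) {
  return {
    url: "https://api.postmarkapp.com/email",
    headers: {
      "Content-Type": "application/json",
      "X-Postmark-Server-Token": serverToken,
    },
    body: {
      From: VERIFIED_SENDER, // must belong to the verified domain
      To: to,
      Subject: `Re: ${subject}`,
      TextBody: textBody,
    },
  };
}
```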

&lt;h2&gt;
  
  
  🎯 Conclusion
&lt;/h2&gt;

&lt;p&gt;Ritsu-Pi EmailOps reimagines system administration through the lens of simplicity, security, and accessibility — all powered by email.&lt;/p&gt;

&lt;p&gt;By combining the universality of email, the reliability of Postmark, and the flexibility of agentic AI, I’ve built a minimal yet powerful platform for homelab control that doesn’t depend on dashboards, SSH, or vendor lock-in.&lt;/p&gt;

&lt;p&gt;Whether you're restarting a container on the go, checking disk usage from a smartwatch, or just want your server to answer with a human-readable reply — this project shows that EmailOps can be both practical and surprisingly elegant.&lt;/p&gt;

&lt;p&gt;Looking ahead, the next evolution of this project is to turn it into a &lt;strong&gt;full-fledged platform engineering tool&lt;/strong&gt; — enabling teams to manage infrastructure, request deployments, approve changes, and resolve incidents using natural language over email. A lightweight, inbox-native control plane for DevOps and SRE workflows.&lt;/p&gt;

&lt;p&gt;Thanks to Postmark and DEV for the opportunity — and to everyone who still believes email is the ultimate universal interface.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>postmarkchallenge</category>
      <category>webdev</category>
      <category>api</category>
    </item>
    <item>
      <title>Authorization in MCP Server with Permit.io - A JIRA-like Task Management API</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Sun, 04 May 2025 08:15:54 +0000</pubDate>
      <link>https://forem.com/fahminlb33/authorization-in-mcp-server-with-permitio-a-jira-like-task-management-api-492l</link>
      <guid>https://forem.com/fahminlb33/authorization-in-mcp-server-with-permitio-a-jira-like-task-management-api-492l</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/permit_io"&gt;Permit.io Authorization Challenge&lt;/a&gt;: AI Access Control&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🦄 What I Built
&lt;/h2&gt;

&lt;p&gt;I built a prototype task management API, similar to JIRA: you can keep track of Epics, Tasks, and Comments. The app is not just a REST API; I also built a working MCP server to interact with it.&lt;/p&gt;

&lt;p&gt;With Permit.io, authorization is seamless between the Hono REST API and the MCP server, cutting development time and simplifying policy configuration.&lt;/p&gt;

&lt;p&gt;This project should fit into the &lt;strong&gt;AI Access Control&lt;/strong&gt; and &lt;strong&gt;API-First Authorization&lt;/strong&gt; category.&lt;/p&gt;

&lt;h3&gt;
  
  
  💻 Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hono&lt;/strong&gt; for building REST API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP SDK&lt;/strong&gt; for building the MCP server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLite&lt;/strong&gt; as the database&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🐋 Demo
&lt;/h2&gt;

&lt;p&gt;Below is a demo of invoking the Hono REST API and the MCP server using the MCP Inspector.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/pV6MUEon0Ko"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;When using the Inspector, you can request a session code (akin to the JWT used for authentication in the REST API) to access the resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyxa012ymsbyo4eq0762.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyxa012ymsbyo4eq0762.png" alt="MCP Inspector"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The session code is sent to a webhook; you can use a service such as &lt;a href="https://webhook.site/" rel="noopener noreferrer"&gt;Webhook.site&lt;/a&gt; to receive it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy4qsvigyaqij7wb9ejb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy4qsvigyaqij7wb9ejb.png" alt="Webhook"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The session code is formatted as &lt;code&gt;SES-xxxxxx&lt;/code&gt;.&lt;/p&gt;
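&lt;p&gt;A minimal validator for this format could look like the sketch below, assuming six alphanumeric characters after the &lt;code&gt;SES-&lt;/code&gt; prefix; the real server may use a different length or alphabet.&lt;/p&gt;

```typescript
// Minimal validator for the session code format shown above.
// Assumes six alphanumeric characters after the "SES-" prefix;
// the real server may use a different length or alphabet.
const SESSION_CODE_PATTERN = /^SES-[A-Za-z0-9]{6}$/;

function isValidSessionCode(code: string): boolean {
  return SESSION_CODE_PATTERN.test(code.trim());
}
```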

&lt;h3&gt;
  
  
  🤠 Next: Claude Desktop
&lt;/h3&gt;

&lt;p&gt;I have tried using the MCP server in Claude Desktop, and it worked... partially. I can log in and get a session code, but the MCP server currently has trouble calling the PDP API within the same Docker network.&lt;/p&gt;

&lt;p&gt;I'll troubleshoot the network communication between the MCP server container and the PDP server in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  💻 Project Repo
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33" rel="noopener noreferrer"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/devto-permitio-mcp" rel="noopener noreferrer"&gt;
        devto-permitio-mcp
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple JIRA-like task management with Permit.io authorization (Hono + MCP)
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;DEV.to Permit.io Authorization Challenge&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;This is a simple JIRA-like task management API; you can expect features such as Epics, Tasks, and Comments, just like in JIRA.&lt;/p&gt;
&lt;p&gt;The key point of this project is that this repo contains two APIs: (1) a classic REST API using Hono.js and (2) an MCP server using the MCP TypeScript SDK.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Permit.io Allowed for Easy Authorization&lt;/h3&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h4 class="heading-element"&gt;REST API with Hono&lt;/h4&gt;
&lt;/div&gt;
&lt;p&gt;In a traditional web API, implementing Permit.io authorization as middleware simplifies and centralizes the authorization process; no more messy authorization checks on every endpoint.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h4 class="heading-element"&gt;MCP Server&lt;/h4&gt;

&lt;/div&gt;
&lt;p&gt;The same authorization technique from the web API can easily be reused in the MCP server (with some changes). With Permit.io, the authorization process is "framework-agnostic", so I can effectively implement authorization with any framework.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Development Setup&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;To run this project, you will need &lt;strong&gt;NodeJS 23&lt;/strong&gt; and Docker.&lt;/p&gt;
&lt;p&gt;Follow these steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Clone this repo &lt;code&gt;git clone https://github.com/fahminlb33/devto-permitio-mcp.git&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Install…&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/devto-permitio-mcp" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  🛣️ My Journey
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🤔 Understanding the Permit.io Concepts
&lt;/h3&gt;

&lt;p&gt;Resources, Roles, and Instance Roles are the top three concepts to master in this project. There are four resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User&lt;/li&gt;
&lt;li&gt;Epic&lt;/li&gt;
&lt;li&gt;Task&lt;/li&gt;
&lt;li&gt;Comment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;with these relationships:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Epic&lt;/code&gt; is a parent of &lt;code&gt;Task&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Task&lt;/code&gt; is a parent of &lt;code&gt;Comment&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also created 3 different roles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Admin&lt;/strong&gt;, can do everything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manager&lt;/strong&gt;, can do most of the things like creating, editing, and deleting Epic and Task&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer&lt;/strong&gt;, can be assigned/unassigned tasks, leave comments, and report status/time spent on a task&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In conclusion, I implemented RBAC and ReBAC authorization for the APIs. Next, I configured the roles and actions in the Permit UI.&lt;/p&gt;
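&lt;p&gt;To illustrate how the RBAC roles and the ReBAC parent chain combine, here is a conceptual TypeScript sketch. In the real project these rules live in Permit.io and are evaluated by the PDP, not in application code; the permission strings below are illustrative.&lt;/p&gt;

```typescript
// Conceptual illustration of the RBAC + ReBAC model above. In the
// real project these rules live in Permit.io and are evaluated by
// the PDP; the permission strings here are illustrative.
type Role = "Admin" | "Manager" | "Developer";

const rolePermissions: Record<Role, string[]> = {
  Admin: ["*"], // can do everything
  Manager: ["epic:create", "epic:update", "epic:delete", "task:create", "task:update", "task:delete"],
  Developer: ["task:assign", "task:unassign", "task:report", "comment:create"],
};

// The parent chain Epic -> Task -> Comment from the ReBAC relationships.
const parentOf: Record<string, string | undefined> = {
  comment: "task",
  task: "epic",
};

function can(role: Role, resource: string, action: string): boolean {
  const perms = rolePermissions[role];
  if (perms.includes("*")) return true;
  // Check a direct permission, or one derived from an ancestor resource.
  for (let r: string | undefined = resource; r; r = parentOf[r]) {
    if (perms.includes(`${r}:${action}`)) return true;
  }
  return false;
}
```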

&lt;h3&gt;
  
  
  🧰 Policy Configuration
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuk4fllkdcuzlm7bdkuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuk4fllkdcuzlm7bdkuw.png" alt="Policy editor 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0yx20p3q2g8fyqxlqui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0yx20p3q2g8fyqxlqui.png" alt="Policy editor 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🔐 API-First Authorization
&lt;/h3&gt;

&lt;p&gt;Implementing authorization in Hono is as simple as creating a middleware.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nf"&gt;except&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/public/*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jwtPayload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jwtPayload&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;jwtPayload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Unauthorized&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;HttpStatus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Unauthorized&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;notFound&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getResourceActionFromReq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;jwtPayload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;permitted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;permit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;check&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jwtPayload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;permitted&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Permission denied&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;HttpStatus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Forbidden&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🤖 Authorization for AI Applications with Permit.io
&lt;/h3&gt;

&lt;p&gt;Authorization in the MCP SDK is essentially the same, but with a custom user lookup. Also, since the MCP SDK does not have the concept of middleware, the &lt;code&gt;authorizeTool&lt;/code&gt; function below wraps the tool handler with the authorization routine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;authorizeTool&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ZodRawShape&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;resourceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;cb&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;objectOutputType&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ZodTypeAny&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;UserRole&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;CallToolResult&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;ToolCallback&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// @ts-ignore&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;objectOutputType&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ZodTypeAny&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;extra&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RequestHandlerExtra&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ServerRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ServerNotification&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;CallToolResult&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// query the user data&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sessionsTable&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;innerJoin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;usersTable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;usersTable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sessionsTable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sessionsTable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sessionCode&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;MessageConstants&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Forbidden&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// authorize with Permit.io&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;permitted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;permit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;check&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resourceName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;permitted&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;MessageConstants&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Forbidden&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;cb&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;UserRole&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🚧 Challenges
&lt;/h3&gt;

&lt;p&gt;Getting used to the concepts of users, resource instances, role assignments, and so on is quite challenging at first, but once I started tinkering with the Permit.io app, they became much easier to understand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ReBAC&lt;/strong&gt; requires a local PDP server. I found this out the hard way when my PDP checks kept failing. When I tested the cloud PDP endpoint with Postman, the response indicated that ReBAC requires the local PDP.&lt;/p&gt;

&lt;p&gt;Some bulk operations, such as bulk-deleting resource instances, are not yet available in the &lt;code&gt;permitio&lt;/code&gt; npm package. That is not a problem, though, since the Redoc API documentation has thorough examples for the underlying REST API.&lt;/p&gt;

&lt;h2&gt;
  
  
  🐠 Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrated the flexibility of Permit.io for authorization in a classic REST API built with the Hono library and the MCP SDK.&lt;/p&gt;

&lt;p&gt;Permit.io allowed seamless and flexible authorization for both classic REST APIs and an MCP server. With its wide range of authorization policies, enforcing an auth policy is as simple as calling &lt;code&gt;permit.check()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Check the GitHub repo for more information: it contains the source code, a Postman collection, and a tutorial on how to run and test the project yourself.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>permitchallenge</category>
      <category>webdev</category>
      <category>security</category>
    </item>
    <item>
      <title>Trading Signal from Sentiment Analysis using Bright Data API</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Sun, 29 Dec 2024 09:54:56 +0000</pubDate>
      <link>https://forem.com/fahminlb33/trading-signal-from-sentiment-analysis-using-bright-data-api-4nci</link>
      <guid>https://forem.com/fahminlb33/trading-signal-from-sentiment-analysis-using-bright-data-api-4nci</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/brightdata"&gt;Bright Data Web Scraping Challenge&lt;/a&gt;: Most Creative Use of Web Data for AI Models&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Nowadays, we can easily find open-source trading bots that automate trading in the hope of turning a profit. In this article, I will share my latest project: using the &lt;strong&gt;Bright Data Web Scraper API&lt;/strong&gt; and an open-source LLM to create a simple trading signal dashboard.&lt;/p&gt;

&lt;p&gt;The idea itself is not new, and there are many trading bots that use publicly available data from news and social media to create trading signals. Heck, even a goldfish could turn a bigger profit trading stocks than the people in &lt;code&gt;r/WallStreetBets&lt;/code&gt;, as demonstrated in &lt;a href="https://www.youtube.com/watch?v=USKD3vPD6ZA" rel="noopener noreferrer"&gt;Michael Reeves' video&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a dashboard where you can view a so-called "trading signal" derived from sentiment analysis of various news sources. In this project, I used articles from BBC, CNN, and Reuters as the sentiment sources, Yahoo Finance for historical data on four stocks (AAPL, META, MSFT, NVDA), and Ollama with Llama 3.1 to predict the sentiment of each news article.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxefxku7dknlitwogurry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxefxku7dknlitwogurry.png" alt="Project flowchart" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Overall, the process is not complicated: get the data, perform sentiment analysis, and overlay the sentiment on the historical stock price plot. I used Streamlit and Plotly to visualize the data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;You can check the &lt;a href="https://devto-brightdata-scraping-sentiment-tfk54zvzudscscegjxdsxw.streamlit.app/" rel="noopener noreferrer"&gt;web app here&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33" rel="noopener noreferrer"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/devto-brightdata-scraping-sentiment" rel="noopener noreferrer"&gt;
        devto-brightdata-scraping-sentiment
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Bright Data Hackathon: Trading Signal using Sentiment Analysis&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;This repo contains the source code for my submission for &lt;a href="https://dev.to/devteam/join-us-for-the-bright-data-web-scraping-challenge-3000-in-prizes-3mg2?bb=196803" rel="nofollow"&gt;Bright Data Web Scraping Hackathon at DEV.to&lt;/a&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Setup&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Use &lt;code&gt;uv&lt;/code&gt; to install dependencies. Clone this repo and run &lt;code&gt;uv sync&lt;/code&gt; to install the packages.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Running the Project&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Trigger the data collection API to scrape news from multiple sources.&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;python scripts/scrape_api.py --api-key YOUR_API_KEY discover --output-file ./data/snapshot-bbc.jsonl --keywords &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;apple,facebook meta,microsoft,nvidia&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --engine bbc
python scripts/scrape_api.py --api-key YOUR_API_KEY discover --output-file ./data/snapshot-cnn.jsonl --keywords &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;apple,facebook meta,microsoft,nvidia&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --engine cnn
python scripts/scrape_api.py --api-key YOUR_API_KEY discover --output-file ./data/snapshot-reuters.jsonl --keywords &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;apple,facebook meta,microsoft,nvidia&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --engine reuters&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Copy the contents of all 3 snapshot files into one, then download the scraped data.&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;python scripts/scrape_api.py --api-key YOUR_API_KEY download --snapshots-file ./data/snapshot-all.jsonl --output-path ./data/scraped&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then, run these notebooks in order:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/fahminlb33/devto-brightdata-scraping-sentiment./notebooks/merge.ipynb" rel="noopener noreferrer"&gt;merge.ipynb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/fahminlb33/devto-brightdata-scraping-sentiment./notebooks/eda-stock-data.ipynb" rel="noopener noreferrer"&gt;eda-stock-data.ipynb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/fahminlb33/devto-brightdata-scraping-sentiment./notebooks/eda-llm-extraction.ipynb" rel="noopener noreferrer"&gt;eda-llm-extraction.ipynb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/fahminlb33/devto-brightdata-scraping-sentiment./notebooks/eval.ipynb" rel="noopener noreferrer"&gt;eval.ipynb&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Note: You will also need Ollama with Llama 3.1 to run the LLM extraction notebook.&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/devto-brightdata-scraping-sentiment" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;Here's what you can explore in the web app:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trading Signal&lt;/strong&gt;, on this page you can see when each news article was posted along with its sentiment analysis result. In general, if an article expresses a hopeful/positive opinion, its sentiment value is +1, and -1 otherwise.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Green = positive outlook/bullish.&lt;/li&gt;
&lt;li&gt;Red = negative outlook/bearish.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajw3ww02h85f5skhwt2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajw3ww02h85f5skhwt2j.png" alt="Trading signal page" width="723" height="796"&gt;&lt;/a&gt;&lt;/p&gt;
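&lt;p&gt;The +1/-1 mapping described above can be sketched with a small helper. This is a hypothetical function, not the project's actual code, which parses the Llama 3.1 output:&lt;/p&gt;

```python
# Map an LLM sentiment label to a trading signal value.
# Hypothetical sketch; the real project parses Llama 3.1 output.
def label_to_signal(label):
    mapping = {"positive": 1, "negative": -1}
    # Unknown or neutral labels produce no signal (0).
    return mapping.get(label.strip().lower(), 0)

print(label_to_signal("Positive"))  # 1
print(label_to_signal("negative"))  # -1
print(label_to_signal("unclear"))   # 0
```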

&lt;p&gt;What we want is a green dot followed by a rising stock price and, conversely, a red dot followed by a falling stock price. But as we can see above, that is not always the case.&lt;/p&gt;
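&lt;p&gt;One simple way to quantify this is a hit rate: how often the signal's sign matches the direction of the next price move. A stdlib-only sketch with a hypothetical helper, not part of the dashboard:&lt;/p&gt;

```python
# Fraction of signals whose sign matches the direction of the
# next price change. Hypothetical sketch, not the project's code.
def hit_rate(signals, prices):
    hits, total = 0, 0
    for sig, p0, p1 in zip(signals, prices, prices[1:]):
        if sig == 0:
            continue  # neutral signals are skipped
        direction = (p1 > p0) - (p0 > p1)  # +1 rise, -1 fall, 0 flat
        total += 1
        if sig == direction:
            hits += 1
    return hits / total if total else 0.0

# green dot then rise, red dot then fall, green dot then fall
print(hit_rate([1, -1, 1], [10.0, 11.0, 10.5, 10.0]))  # 2 of 3 correct
```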

&lt;p&gt;&lt;strong&gt;Sentiment Analysis&lt;/strong&gt;, on this page you can see some statistics from the sentiment analysis process, as well as each news headline and its corresponding sentiment. Sometimes, the LLM cannot reliably classify the sentiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ur053cxt19gk9vxrd9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ur053cxt19gk9vxrd9y.png" alt="Sentiment analysis page" width="745" height="778"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Used Bright Data
&lt;/h2&gt;

&lt;p&gt;I mainly used the &lt;strong&gt;Web Scraper API&lt;/strong&gt; from Bright Data to collect the news articles. Bright Data supports a wide selection of websites for scraping, which greatly streamlined the data collection in this project. I used only a small subset of the news websites Bright Data offers and still got a working prototype. This project could definitely be expanded by adding more websites, maybe even &lt;code&gt;r/WallStreetBets&lt;/code&gt;, to generate more trading signals.&lt;/p&gt;

&lt;p&gt;Even if the website you want to scrape is not covered by the Web Scraper API, you can always use the &lt;strong&gt;Scraping Browser&lt;/strong&gt; service to build your very own data collection script. I actually did this for my &lt;a href="https://dev.to/fahminlb33/web-scraping-for-product-analysis-and-price-comparison-1o9j"&gt;first submission&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prize Categories
&lt;/h3&gt;

&lt;p&gt;Although I entered the hackathon under the third prompt, I believe this project could fall under the second prompt too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This was a really unexpected project. Originally, I planned to submit just one project, but at the last minute I got inspired by watching Michael Reeves' video and by helping a friend finish his research thesis, which also used stock market data. I can't believe I finished this project in less than 8 hours.&lt;/p&gt;

&lt;p&gt;I definitely would not have finished this project so quickly if I had to create the scraping scripts from scratch and wait for the data. Thanks to the Bright Data Web Scraper API, I could quickly get the data I needed.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>brightdatachallenge</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Web Scraping for Product Analysis and Price Comparison</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Sun, 22 Dec 2024 13:07:07 +0000</pubDate>
      <link>https://forem.com/fahminlb33/web-scraping-for-product-analysis-and-price-comparison-1o9j</link>
      <guid>https://forem.com/fahminlb33/web-scraping-for-product-analysis-and-price-comparison-1o9j</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/brightdata"&gt;Bright Data Web Scraping Challenge&lt;/a&gt;: Most Creative Use of Web Data for AI Models&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Product research plays an important role in market research, search engine optimization, and, for me personally, finding the best price for a product I want to buy. For some time, I have been looking into E-Katalog LKPP, a government-controlled online marketplace. This marketplace supposedly provides government bodies, schools, and institutions with access to all kinds of products, from stationery to laptops and much more.&lt;/p&gt;

&lt;p&gt;One of my family members owns a laptop bought from this marketplace and, oh boy, it was crappy. It was a laptop from an obscure brand, and it was crazy expensive compared to what other brands offer at the same price tag.&lt;/p&gt;

&lt;p&gt;So, I turned my attention to comparing product prices between LKPP and other online marketplaces to find out whether there was a significant difference.&lt;/p&gt;

&lt;p&gt;In this post, I will show you how I used the Bright Data platform to scrape the LKPP website with the Scraping Browser and to collect product data from other online marketplaces with the Web Scraping API for comparison.&lt;/p&gt;

&lt;p&gt;Let’s dig in!&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a dashboard where you can explore and compare product statistics from multiple marketplaces (LKPP, Tokopedia, Lazada). Also, with the power of open-source LLMs, we can cluster the products to uncover interesting relationships.&lt;/p&gt;

&lt;p&gt;Overall, we can divide the process into several steps, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoolwwaqn2sr1xascd27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoolwwaqn2sr1xascd27.png" alt="Development pipeline" width="800" height="1524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, I used the Scraping Browser to collect data from E-Katalog LKPP. Using this data, I extracted popular product keywords to search in two other marketplaces, namely Tokopedia and Lazada. For that, I used the Web Scraping API as a convenient way to collect the product data.&lt;/p&gt;

&lt;p&gt;Once I had the data from the three sources, I used Ollama with the Llama 3.1 model and DSPy to extract structured data (processor, memory, and storage) from the scraped product descriptions. I also used an embedding model to create text embeddings and then clustered the data to explore similar products across the marketplaces.&lt;/p&gt;
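&lt;p&gt;The clustering step ultimately relies on comparing embedding vectors, typically by cosine similarity. Here is a stdlib-only sketch of that comparison; the project itself uses an embedding model plus K-Means rather than this helper:&lt;/p&gt;

```python
import math

# Cosine similarity between two embedding vectors.
# Sketch only; the project uses Nomic text embeddings plus K-Means.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

v1 = [0.1, 0.8, 0.3]
v2 = [0.1, 0.8, 0.3]
print(round(cosine_similarity(v1, v2), 4))  # 1.0 (identical direction)
```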

&lt;p&gt;Finally, I used Streamlit to deploy the app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;You can access the &lt;a href="https://devto-brightdata-scraping-dswwyvww6clkj5q2wu7w7b.streamlit.app" rel="noopener noreferrer"&gt;web app here&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33" rel="noopener noreferrer"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/devto-brightdata-scraping" rel="noopener noreferrer"&gt;
        devto-brightdata-scraping
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Bright Data Hackathon&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;Demo: &lt;a href="https://devto-brightdata-scraping-dswwyvww6clkj5q2wu7w7b.streamlit.app" rel="nofollow noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This repo contains the source code for my submission for &lt;a href="https://dev.to/devteam/join-us-for-the-bright-data-web-scraping-challenge-3000-in-prizes-3mg2?bb=196803" rel="nofollow"&gt;Bright Data Web Scraping Hackathon at DEV.to&lt;/a&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Setup&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Use &lt;code&gt;uv&lt;/code&gt; to install dependencies. Clone this repo and run &lt;code&gt;uv sync&lt;/code&gt; to install the packages.&lt;/p&gt;
&lt;p&gt;Refer to the &lt;a href="https://github.com/fahminlb33/devto-brightdata-scraping./DOCS.md" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for a guideline how to use the scripts in this repo.&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/devto-brightdata-scraping" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;The Streamlit app is divided into four sections,&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dashboard&lt;/strong&gt;, this section shows the product price distribution, the most popular brands, GPUs, and storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlg9st97cbjxg6hmcmko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlg9st97cbjxg6hmcmko.png" alt="Dashboard page" width="737" height="748"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keyword Explorer&lt;/strong&gt;, this section contains a basic keyword research tool based on N-gram frequencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbf954l24cxws6ehm4h5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbf954l24cxws6ehm4h5.png" alt="Keyword Explorer page" width="775" height="820"&gt;&lt;/a&gt;&lt;/p&gt;
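&lt;p&gt;The N-gram frequency idea behind the Keyword Explorer can be sketched in a few lines of stdlib Python. The helper below is hypothetical, not the app's actual code:&lt;/p&gt;

```python
from collections import Counter

# Count word N-grams across a list of product titles.
# Sketch of the Keyword Explorer idea, not the app's actual code.
def ngram_counts(titles, n=2):
    counts = Counter()
    for title in titles:
        words = title.lower().split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

titles = ["Laptop ASUS VivoBook 14", "Laptop ASUS ZenBook 14"]
print(ngram_counts(titles).most_common(1))  # [('laptop asus', 2)]
```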

&lt;p&gt;&lt;strong&gt;Product Cloud&lt;/strong&gt;, this section shows a 3D product name cluster based on K-Means clustering. The points are pre-computed using t-SNE dimensionality reduction, and the text embeddings were generated with the Nomic Embed Text model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F759uzt2loktrdovbtns6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F759uzt2loktrdovbtns6.png" alt="Product Cloud page" width="748" height="788"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compare Price&lt;/strong&gt;: In this section, you can enter a product name and it will show a comparison between the products in three different marketplaces, along with a statistical test (t-test).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakqper66hx0u3yr6r8hb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakqper66hx0u3yr6r8hb.png" alt="Compare Price page" width="773" height="807"&gt;&lt;/a&gt;&lt;/p&gt;
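&lt;p&gt;As a sketch of the statistical test in the Compare Price section, Welch's t statistic can be computed with just the standard library; a real app would more likely call a stats library such as SciPy. The sample prices below are hypothetical:&lt;/p&gt;

```python
import math
from statistics import mean, variance

# Welch's t statistic for two independent price samples.
# Stdlib sketch; a real app would use e.g. scipy.stats.ttest_ind.
def welch_t(sample_a, sample_b):
    va = variance(sample_a) / len(sample_a)  # sample variance / n
    vb = variance(sample_b) / len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va + vb)

lkpp_prices = [15.0, 16.0, 15.5, 17.0]       # hypothetical, millions IDR
tokopedia_prices = [12.0, 12.5, 13.0, 12.2]  # hypothetical
print(round(welch_t(lkpp_prices, tokopedia_prices), 2))
```

A large absolute t value suggests the price difference between the two marketplaces is unlikely to be noise.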

&lt;h2&gt;
  
  
  How I Used Bright Data
&lt;/h2&gt;

&lt;p&gt;As described in previous sections, I mainly used Bright Data’s Scraping Browser and Web Scraping API services.&lt;/p&gt;

&lt;p&gt;Bright Data &lt;strong&gt;Scraping Browser&lt;/strong&gt; excels at unlocking access to any website with its powerful unblocking and proxy features. Even though the LKPP website uses Cloudflare protection, the scraping process ran smoothly with the Scraping Browser. I used Playwright for scraping, and the integration was as simple as changing a single line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# from this
&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chromium&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;headless&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;slow_mo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# to this
&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chromium&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect_over_cdp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wss://AUTH_HERE@brd.superproxy.io:9222&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;slow_mo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now for the public marketplace data, namely Tokopedia and Lazada. Through its &lt;strong&gt;Web Scraping API&lt;/strong&gt;, Bright Data provides an intuitive and convenient way to scrape data without writing a custom script. This saved me a lot of time, so I could focus on analyzing the data and building the Streamlit app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prize Categories
&lt;/h3&gt;

&lt;p&gt;Although I entered the hackathon under the third prompt, I believe this project could fall under any of the categories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This has been an interesting journey, especially in how we can leverage scraped web data and GenAI to extract structured information. Bright Data’s powerful Scraping Browser and convenient Web Scraping API allowed me to collect a large amount of data in a very short time, making the web scraping process a breeze and letting me shift my focus to delivering insights from the scraped data. No more CAPTCHAs, and no more writing custom scripts for popular websites.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>brightdatachallenge</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>AI Systematic Literature Review with KawanPaper</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Sun, 10 Nov 2024 07:18:03 +0000</pubDate>
      <link>https://forem.com/fahminlb33/systematic-literature-review-with-kawanpaper-110l</link>
      <guid>https://forem.com/fahminlb33/systematic-literature-review-with-kawanpaper-110l</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/pgai"&gt;Open Source AI Challenge with pgai and Ollama &lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;This is a conversational RAG app where all the RAG pipelines are entirely built in PostgreSQL procedures using PL/pgSQL!&lt;/p&gt;

&lt;p&gt;The idea behind this app stems from my master's thesis work. I had to do a systematic literature review, and doing it manually is tedious. So, I created this small app that lets me upload a full-text paper and chat with it, create summaries, highlights, and key results, massively streamlining the systematic literature review process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Of course, this app would work with any kind of data, we just need to change the system prompt a bit!😏&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summarize research papers (journal articles, conference papers, etc.)&lt;/li&gt;
&lt;li&gt;Create highlights/key insights&lt;/li&gt;
&lt;li&gt;Automatic processing using pgai Vectorizer&lt;/li&gt;
&lt;li&gt;Chat with independent paper&lt;/li&gt;
&lt;li&gt;Save multiple chat sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Initially, I wanted to use Ollama exclusively, but pgai Vectorizer currently does not support Ollama, so I opted to use OpenAI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://blob.kodesiana.com/kodesiana-public-assets/devto/timescale-pgai-hackathon-2024/kawanpaper-demo.mp4" rel="noopener noreferrer"&gt;Demo video here&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33" rel="noopener noreferrer"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/devto-timescale-pgai" rel="noopener noreferrer"&gt;
        devto-timescale-pgai
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;KawanPaper&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;KawanPaper is your go-to app for chatting with papers, mainly research papers (journal articles, conference papers, etc.)&lt;/p&gt;
&lt;p&gt;Features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;PDF upload and automatic parsing&lt;/li&gt;
&lt;li&gt;Generate key insights from research papers&lt;/li&gt;
&lt;li&gt;Chat with a specific paper&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Setup&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Make sure you have an up-to-date Docker installation, then clone this repo. The installation process is divided into three parts: MinIO setup, database migration, and launching the app.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Configuration&lt;/h3&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Main configuration: copy the &lt;code&gt;.env.example&lt;/code&gt; file to &lt;code&gt;.env&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Docker compose configuration: copy the &lt;code&gt;docker.env.example&lt;/code&gt; to &lt;code&gt;docker.env&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These configs have predefined values to make deployment easier. Note that there are some env vars we still need to define:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;.env&lt;/code&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;VITE_MINIO_ACCESS_KEY&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;VITE_MINIO_SECRET_KEY&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;docker.env&lt;/code&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;OPENAI_API_KEY&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Add your OpenAI key to &lt;code&gt;docker.env&lt;/code&gt;; the MinIO credentials will be created in the next step.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Minio Setup&lt;/h3&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;This is a new thing for me, back in the day we can…&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/devto-timescale-pgai" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Tools Used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TimescaleDB&lt;/strong&gt; as the main database to store the documents and their embeddings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pgai&lt;/strong&gt; to access OpenAI services from inside the database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pgvector&lt;/strong&gt; to store document embeddings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pgvectorscale&lt;/strong&gt; to create indexes on the embeddings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pgai Vectorizer&lt;/strong&gt; to automatically create embeddings from the uploaded papers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prize Categories&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vectorizer Vibe, All the Extensions&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL (TimescaleDB)&lt;/li&gt;
&lt;li&gt;Minio&lt;/li&gt;
&lt;li&gt;Remix&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Such a small tech stack for a RAG app😊 We could make it even smaller by storing blobs in Postgres, but I don't like that idea.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conversational RAG in PL/pgSQL
&lt;/h3&gt;

&lt;p&gt;In &lt;a href="https://github.com/fahminlb33/devto-timescale-pgai/blob/master/migration.sql#L110" rel="noopener noreferrer"&gt;this SQL script&lt;/a&gt; I implemented two Postgres routines (a function and a procedure) to build the conversational RAG pipeline. This is the heart and soul of this app.&lt;/p&gt;

&lt;p&gt;I got the idea from &lt;a href="https://python.langchain.com/docs/tutorials/qa_chat_history/" rel="noopener noreferrer"&gt;this LangChain tutorial&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;FUNCTION&lt;/span&gt; &lt;span class="n"&gt;contextualize_question&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p_session_id&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;p_query&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;RETURNS&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;PROCEDURE&lt;/span&gt; &lt;span class="n"&gt;chat_with_paper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p_session_id&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;p_chat_content&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
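&lt;p&gt;As a rough sketch (not from the repo), the backend might drive these two routines like this. It assumes a node-postgres-style client; only the routine names come from the migration script, everything else is hypothetical:&lt;/p&gt;

```typescript
// Hypothetical sketch: driving the PL/pgSQL conversational RAG from TypeScript.
// Assumes a node-postgres-style client; only the routine names are from migration.sql.
type SqlClient = {
  query: (text: string, values: unknown[]) => Promise<{ rows: any[] }>;
};

async function chatWithPaper(db: SqlClient, sessionId: string, question: string) {
  // 1. Rewrite the question so it stands alone, given the session's chat history.
  const refined = await db.query(
    "SELECT contextualize_question($1, $2) AS question",
    [sessionId, question],
  );
  // 2. Retrieve relevant chunks and append the LLM answer to the session.
  await db.query("CALL chat_with_paper($1, $2)", [
    sessionId,
    refined.rows[0].question,
  ]);
}
```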



&lt;p&gt;I never thought I would be writing LLM chain/pipeline using SQL instead of Haystack, LangChain, or LlamaIndex, but here we are!&lt;/p&gt;

&lt;p&gt;It's crazy what pgai could bring in the future for LLMs in databases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This has been an interesting journey, because the idea of running an LLM directly in the database seemed really weird at first. But after learning it over the last two days, I find it really interesting; it could revolutionize data mining pipelines for non-AI engineers. I imagine data analysts and researchers could easily get insights from database systems without major changes to existing systems.&lt;/p&gt;

&lt;p&gt;One of my favorite experiences in this project was learning to write Postgres procedures and functions in PL/pgSQL. It was especially interesting to take the kind of LLM app that is usually written with LangChain, Haystack, or LlamaIndex and implement it in pure PL/pgSQL to build a conversational RAG.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>pgaichallenge</category>
      <category>database</category>
      <category>ai</category>
    </item>
    <item>
      <title>Recommend me a House🏡 RAG with Cloudflare AI🌤️</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Sun, 14 Apr 2024 07:54:46 +0000</pubDate>
      <link>https://forem.com/fahminlb33/recommend-me-a-house-rag-with-cloudflare-ai-2f94</link>
      <guid>https://forem.com/fahminlb33/recommend-me-a-house-rag-with-cloudflare-ai-2f94</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/devteam/join-us-for-the-cloudflare-ai-challenge-3000-in-prizes-5f99"&gt;Cloudflare AI Challenge&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a Q&amp;amp;A chat app for house recommendations. The idea is simple: you can ask about your dream house using text or an &lt;strong&gt;image&lt;/strong&gt;, and the app will find the most relevant house listings stored in a Vectorize database. Currently, I have 100 house listings in Bogor, Indonesia.&lt;/p&gt;

&lt;p&gt;And you read that right, you can upload an image to perform &lt;em&gt;reverse image search&lt;/em&gt; or more accurately, a semantic search using an image embedding model😉&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Try it! &lt;a href="https://rumah-frontend.pages.dev"&gt;https://rumah-frontend.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example prompt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recommend me a house with 2 bedrooms&lt;/li&gt;
&lt;li&gt;House near Bojong Gede&lt;/li&gt;
&lt;/ul&gt;





&lt;h2&gt;
  
  
  My Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/koderumah"&gt;
        koderumah
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;House Recommendation RAG&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;Retrieval-Augmented Generation (RAG) for house recommendation.&lt;/p&gt;
&lt;p&gt;This project uses multiple AI models to perform QnA-style house search/recommendation using the RAG method. It's a more advanced use case of CloudFlare AI that integrates many CloudFlare services and AI models.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;@cf/meta/llama-2-7b-chat-int8&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;@cf/baai/bge-large-en-v1.5&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;@cf/unum/uform-gen2-qwen-500m&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.kaggle.com/models/google/mobilenet-v3/frameworks/tfLite/variations/small-100-224-feature-vector-metadata/versions/1?tfhub-redirect=true" rel="nofollow"&gt;mobilenet_v3&lt;/a&gt; through &lt;code&gt;@tensorflow/tfjs&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Try here: &lt;a href="https://rumah-frontend.pages.dev/" rel="nofollow"&gt;https://rumah-frontend.pages.dev/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Example prompts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Recommend me a house with 2 bedrooms&lt;/li&gt;
&lt;li&gt;House near Bojong Gede&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Requirements&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Node v20.12.0&lt;/li&gt;
&lt;li&gt;npm v10.5.0&lt;/li&gt;
&lt;li&gt;Wrangler v3.0.0 or newer&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You'll need the CloudFlare Worker Pro Plan to use the Vectorize service, which is currently in beta.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Tech Stack&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Vite&lt;/li&gt;
&lt;li&gt;React&lt;/li&gt;
&lt;li&gt;Radix UI&lt;/li&gt;
&lt;li&gt;Tailwind CSS&lt;/li&gt;
&lt;li&gt;zod&lt;/li&gt;
&lt;li&gt;itty-router&lt;/li&gt;
&lt;li&gt;jpeg-js&lt;/li&gt;
&lt;li&gt;@tensorflow/tfjs&lt;/li&gt;
&lt;li&gt;drizzle-orm&lt;/li&gt;
&lt;li&gt;CloudFlare services used: Pages, Workers, Workers AI, Vectorize, D1, R2&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Deployment&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Step 1: Clone this repo, install the npm packages, and create the necessary databases, buckets, and indexes.&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; clone the repo&lt;/span&gt;
git clone https://github.com/fahminlb33/koderumah.git
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; install npm packages&lt;/span&gt;
npm install

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; create D1 databases&lt;/span&gt;
npx wrangler&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/koderumah"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Tech stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudFlare Workers, Pages, AI, Vectorize, D1, R2&lt;/li&gt;
&lt;li&gt;Backend: itty-router, zod, drizzle-orm, tensorflow.js&lt;/li&gt;
&lt;li&gt;Frontend: Remix, React, Radix UI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Journey
&lt;/h2&gt;

&lt;p&gt;This is my third and final submission to the CloudFlare Hackathon. My previous submission was about &lt;a href="https://dev.to/fahminlb33/create-storybook-using-cloudflare-ai-26m7"&gt;creating a storybook&lt;/a&gt; and &lt;a href="https://dev.to/fahminlb33/whos-devto-writer-to-follow-ask-cloudflare-ai-35b8"&gt;dev.to author recommendation&lt;/a&gt;, now I’m focusing on LLM and RAG for Q&amp;amp;A.&lt;/p&gt;

&lt;p&gt;RAG: Retrieval-Augmented Generation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building the RAG pipeline
&lt;/h3&gt;

&lt;p&gt;This time my idea was to build an AI assistant that gives house recommendations based on a text prompt. You can enter a prompt describing the house you want (for example, the number of bedrooms, bathrooms, etc.), and the model will give you house recommendations based on the house listings stored in the D1 database.&lt;/p&gt;

&lt;p&gt;There are three parts that make up the RAG pipeline.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Query agent&lt;/strong&gt;: this agent provides context or “memory” from earlier prompts, if any exist. It produces a new “refined prompt,” hopefully with added context from the previous chat.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic search&lt;/strong&gt;: the refined prompt is then fed to a text embedding model, and a vector search is performed against a Vectorize index, returning the most relevant documents containing house listings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Answer agent&lt;/strong&gt;: using the retrieved documents as context, this agent summarizes them and generates a final response to the user.&lt;/li&gt;
&lt;/ol&gt;
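&lt;p&gt;The three stages above could be sketched roughly like this. The model names are the ones used in this project; the bindings, prompts, and function names are my own illustrative assumptions, not the actual implementation:&lt;/p&gt;

```typescript
// Hypothetical sketch of the three-stage RAG pipeline on Workers AI.
// Model names are from this post; bindings and prompts are illustrative only.
type Turn = { role: "user" | "assistant"; content: string };

// Stage 1 helper: fold earlier turns into one standalone question.
function buildRefinePrompt(history: Turn[], question: string): string {
  const past = history.map((t) => `${t.role}: ${t.content}`).join("\n");
  return `Given this chat history:\n${past}\n\nRewrite the question so it stands alone: ${question}`;
}

async function recommendHouse(env: any, history: Turn[], question: string) {
  // 1. Query agent: contextualize the question with chat "memory"
  const refined = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
    prompt: buildRefinePrompt(history, question),
  });
  // 2. Semantic search: embed the refined prompt, query the Vectorize index
  const emb = await env.AI.run("@cf/baai/bge-large-en-v1.5", {
    text: [refined.response],
  });
  const { matches } = await env.HOUSE_INDEX.query(emb.data[0], { topK: 3 });
  // 3. Answer agent: summarize the retrieved listings into a final answer
  const answer = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
    prompt: `Recommend a house using these listings:\n${JSON.stringify(matches)}\n\nQuestion: ${refined.response}`,
  });
  return answer.response;
}
```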

&lt;p&gt;Overall, it is the usual RAG pipeline you’ll see on many tutorials on the internet. But can we improve it?&lt;/p&gt;

&lt;h3&gt;
  
  
  Prompting by text is mainstream, what about image?
&lt;/h3&gt;

&lt;p&gt;I found using text prompts to be effective, but I wanted to explore if using an image as a query could enhance the experience.&lt;/p&gt;

&lt;p&gt;Currently, CloudFlare AI doesn’t have an image embedding model available. To solve this, I considered using a third-party service for image embedding. However, I recalled that TensorFlow has a JS version that could potentially run inside a CloudFlare Worker.&lt;/p&gt;

&lt;p&gt;Initially, I faced difficulties in the image decoding process with TensorFlow.js because it is designed mainly for browsers, which have built-in image decoding capabilities. Fortunately, you &lt;strong&gt;can&lt;/strong&gt; decode an image using pure JS library such as &lt;code&gt;jpeg-js&lt;/code&gt; and run a TensorFlow.js model in a CloudFlare worker.&lt;/p&gt;

&lt;p&gt;BUT, it is &lt;strong&gt;slow&lt;/strong&gt;. Really slow...&lt;/p&gt;

&lt;p&gt;It takes about 5 seconds to perform a single image embedding. It is good enough for a prototype, but in the long run this will lead to bad UX. The bottleneck appears to be caused by workers needing to download a model and set up everything from scratch each time they run an image embedding process. Since each call to a Worker is isolated, I cannot cache the model for future inference.&lt;/p&gt;

&lt;p&gt;Now that we have the embedding of our image, we can continue with the semantic search and summarize the retrieved documents to generate a conclusive answer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe3idtv79hrqfa13zrnq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe3idtv79hrqfa13zrnq.jpg" alt="architecture diagram" width="670" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The models used are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multiple Models and Triple Task&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text Generation&lt;/strong&gt;: &lt;code&gt;@cf/meta/llama-2-7b-chat-int8&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text Summarization&lt;/strong&gt;: &lt;code&gt;@cf/facebook/bart-large-cnn&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text Embedding&lt;/strong&gt;: &lt;code&gt;@cf/baai/bge-large-en-v1.5&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image to Text&lt;/strong&gt;: &lt;code&gt;@cf/unum/uform-gen2-qwen-500m&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Embedding&lt;/strong&gt;: &lt;a href="https://www.kaggle.com/models/google/mobilenet-v3/frameworks/tfLite/variations/small-100-224-feature-vector-metadata/versions/1?tfhub-redirect=true"&gt;mobilenet_v3&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;Compared to my previous submissions, this app is definitely more intricate, but also more fun. I didn’t even have to use LangChain to build this RAG pipeline. Overall, this project shows that CloudFlare AI, especially its text generation models, is good enough for building RAG apps. The only major problem I faced in this project is model hallucination in the query agent, causing responses to be reformulated into a question rather than a statement. Maybe my system prompt is not optimal yet.&lt;/p&gt;

&lt;p&gt;The fact that we can also bring our own TensorFlow.js model to a CloudFlare Worker is a major advantage, as it simplifies our system architecture and allows us to run nearly everything on CloudFlare Workers. But keep in mind the drawback I mentioned above😉&lt;/p&gt;

&lt;p&gt;Also, big thanks to my friend &lt;a class="mentioned-user" href="https://dev.to/rasyidf"&gt;@rasyidf&lt;/a&gt; for building the frontend app. I couldn’t do it without him.&lt;/p&gt;

</description>
      <category>cloudflarechallenge</category>
      <category>devchallenge</category>
      <category>ai</category>
    </item>
    <item>
      <title>Who's dev.to writer to follow?✍️ Ask CloudFlare AI</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Sun, 14 Apr 2024 07:12:54 +0000</pubDate>
      <link>https://forem.com/fahminlb33/whos-devto-writer-to-follow-ask-cloudflare-ai-35b8</link>
      <guid>https://forem.com/fahminlb33/whos-devto-writer-to-follow-ask-cloudflare-ai-35b8</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/devteam/join-us-for-the-cloudflare-ai-challenge-3000-in-prizes-5f99"&gt;Cloudflare AI Challenge&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;An app that gives you insight into a dev.to user’s posts and helps you determine whether their content aligns with your preferences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Try it! &lt;a href="https://devtofollow.pages.dev"&gt;https://devtofollow.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof7iclm77cddjb9nh4pt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof7iclm77cddjb9nh4pt.png" alt="devtofollow app screenshot" width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/devtofollow"&gt;
        devtofollow
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Dev.to Follow✍️&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A little confused about which user to follow on dev.to for your newsletter?&lt;/p&gt;

&lt;p&gt;This project uses an LLM to summarize and generate insight about a user's post topics. It can also assess the relevance of the user's posts to your preferred topics, making it easy to choose which users to follow for your blog feed. This project uses two models,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;@cf/facebook/bart-large-cnn&lt;/code&gt; to summarize the post content&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@hf/mistral/mistral-7b-instruct-v0.2&lt;/code&gt; to paraphrase the post summaries and to generate post relevancy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try here: &lt;a href="https://devtofollow.pages.dev/" rel="nofollow"&gt;https://devtofollow.pages.dev/&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Requirements&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Node v20.12.0&lt;/li&gt;
&lt;li&gt;npm v10.5.0&lt;/li&gt;
&lt;li&gt;Wrangler v3.0.0 or newer&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Tech Stack&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Vite&lt;/li&gt;
&lt;li&gt;React&lt;/li&gt;
&lt;li&gt;Mantine&lt;/li&gt;
&lt;li&gt;CloudFlare services used: Pages, Workers AI&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Deployment&lt;/h2&gt;

&lt;/div&gt;

&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; clone the repo&lt;/span&gt;
git clone https://github.com/fahminlb33/devtofollow.git

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; change working directory&lt;/span&gt;
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; devtofollow

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; install dependencies&lt;/span&gt;
npm install

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; build repo&lt;/span&gt;
npm run build

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; deploy to CloudFlare Pages&lt;/span&gt;
npm run deploy&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After the site has been deployed, add Worker AI binding to the Pages…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/devtofollow"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;h2&gt;
  
  
  Journey
&lt;/h2&gt;

&lt;p&gt;I started this app in a creative fever near the end of the challenge. I originally wanted to submit just one, but new ideas kept coming at the last minute, and now here we are.&lt;/p&gt;

&lt;p&gt;This time, my idea was to use LLMs to summarize a dev.to user’s posts and give me insight into their key topics, to determine whether those posts are to my liking. This should help me pick which users to follow based on my interests.&lt;/p&gt;

&lt;p&gt;The process starts by using the dev.to API to get the latest articles posted by a given username; I then scrape the article content and summarize it using a BART model. Mistral 7B further condenses these summaries, and if you provide a preference, the app also suggests whether the topics in those articles align with it.&lt;/p&gt;
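&lt;p&gt;As a rough sketch of that chain (the model names are the ones used in this project; the binding, prompt wording, and helper names are my own assumptions):&lt;/p&gt;

```typescript
// Hypothetical sketch of the summarize-then-paraphrase chain on Workers AI.
// Model names are from this post; the binding, prompts, and helpers are mine.
function buildInsightPrompt(summaries: string[], preference?: string): string {
  const base = `Summarize the key topics of these posts:\n${summaries
    .map((s, i) => `${i + 1}. ${s}`)
    .join("\n")}`;
  return preference
    ? `${base}\n\nAlso explain how relevant they are to: ${preference}`
    : base;
}

async function summarizeUser(env: any, articles: string[], preference?: string) {
  // Stage 1: BART condenses each scraped article body
  const summaries: string[] = [];
  for (const body of articles) {
    const out = await env.AI.run("@cf/facebook/bart-large-cnn", {
      input_text: body,
    });
    summaries.push(out.summary);
  }
  // Stage 2: Mistral paraphrases the summaries and rates relevance, if asked
  const insight = await env.AI.run("@hf/mistral/mistral-7b-instruct-v0.2", {
    prompt: buildInsightPrompt(summaries, preference),
  });
  return insight.response;
}
```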

&lt;p&gt;Models used:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Text Summarization&lt;/strong&gt;: &lt;code&gt;@cf/facebook/bart-large-cnn&lt;/code&gt; to summarize the post content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text Generation&lt;/strong&gt;: &lt;code&gt;@hf/mistral/mistral-7b-instruct-v0.2&lt;/code&gt; to paraphrase the post summaries and to generate post relevancy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Same as before, I built the web app using Vite, React, and Mantine and the backend using CloudFlare Page Functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;In this app, I performed web scraping in a CloudFlare Worker, and one part of it is extracting text from HTML. Most libraries available on npm require DOM manipulation to extract the text, but in this case, we don’t have access to a DOM. Fortunately, the CloudFlare Worker runtime has another solution: &lt;code&gt;HTMLRewriter&lt;/code&gt;. Originally, &lt;code&gt;HTMLRewriter&lt;/code&gt; was intended to transform HTML, not to extract data, but fortunately we can use it for extraction too. Granted, this is the first time I’ve used the API, but it is surprisingly simple to use.&lt;/p&gt;
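&lt;p&gt;A rough sketch of that extraction pattern (the selector handling and helper names are mine, not the app's; &lt;code&gt;HTMLRewriter&lt;/code&gt; is a global in the Workers runtime, so only the whitespace helper is plain JS):&lt;/p&gt;

```typescript
// Hypothetical sketch: text extraction with the Workers-runtime HTMLRewriter.
// Only collapseWhitespace is plain JS; extractText needs the Workers runtime.
declare const HTMLRewriter: any;

function collapseWhitespace(s: string): string {
  return s.replace(/\s+/g, " ").trim();
}

async function extractText(html: string, selector: string): Promise<string> {
  const chunks: string[] = [];
  const rewriter = new HTMLRewriter().on(selector, {
    // text() fires for every text chunk inside the selected elements
    text(t: { text: string }) {
      chunks.push(t.text);
    },
  });
  // HTMLRewriter transforms a Response; draining it triggers the handlers.
  await rewriter.transform(new Response(html)).arrayBuffer();
  return collapseWhitespace(chunks.join(""));
}
```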

&lt;p&gt;The future plan for this project is to integrate it with other CloudFlare services. For example, automatically scraping and summarizing an article using CRON triggers, and then sending it to an email or database. This way, we can have our own personalized newsfeed.&lt;/p&gt;

</description>
      <category>cloudflarechallenge</category>
      <category>devchallenge</category>
      <category>ai</category>
    </item>
    <item>
      <title>Create a storybook📚 using CloudFlare AI</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Sat, 13 Apr 2024 09:24:23 +0000</pubDate>
      <link>https://forem.com/fahminlb33/create-storybook-using-cloudflare-ai-26m7</link>
      <guid>https://forem.com/fahminlb33/create-storybook-using-cloudflare-ai-26m7</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/devteam/join-us-for-the-cloudflare-ai-challenge-3000-in-prizes-5f99"&gt;Cloudflare AI Challenge&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;An app to create a storybook/flipbook using an LLM and a Stable Diffusion model. You give it a rough story idea and it will generate a story with illustrations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Try it! &lt;a href="https://storyflare.pages.dev/" rel="noopener noreferrer"&gt;https://storyflare.pages.dev/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example prompt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Story about photosynthesis for high school students&lt;/li&gt;
&lt;li&gt;Story describing renewable energy for kids&lt;/li&gt;
&lt;/ul&gt;





&lt;h2&gt;
  
  
  My Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33" rel="noopener noreferrer"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/storyflare" rel="noopener noreferrer"&gt;
        storyflare
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Storyflare🌤️&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;Create storybook from prompt.&lt;/p&gt;
&lt;p&gt;This project uses an LLM to generate a story and a diffusion model to generate image illustrations. It's a very simplified use case of AI models to demonstrate how easy it is to integrate CloudFlare AI on apps. This project uses two models,&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;@hf/thebloke/mistral-7b-instruct-v0.1-awq&lt;/code&gt; to generate the story and extract a suitable illustration description&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@cf/stabilityai/stable-diffusion-xl-base-1.0&lt;/code&gt; to generate the illustration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Try here: &lt;a href="https://storyflare.pages.dev/" rel="nofollow noopener noreferrer"&gt;https://storyflare.pages.dev/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Example prompt:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Story about photosynthesis for high school students&lt;/li&gt;
&lt;li&gt;Story describing renewable energy for kids&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Requirements&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Node v20.12.0&lt;/li&gt;
&lt;li&gt;npm v10.5.0&lt;/li&gt;
&lt;li&gt;Wrangler v3.0.0 or newer&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Tech Stack&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Vite&lt;/li&gt;
&lt;li&gt;React&lt;/li&gt;
&lt;li&gt;Mantine&lt;/li&gt;
&lt;li&gt;CloudFlare services used: Pages, Workers AI&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Deployment&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; clone the repo&lt;/span&gt;
git clone https://github.com/fahminlb33/storyflare.git

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; change workdir&lt;/span&gt;
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; storyflare

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; install dependencies&lt;/span&gt;
npm install

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; build repo&lt;/span&gt;
npm run build

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; deploy to CloudFlare Pages&lt;/span&gt;
npm run deploy&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After the site has been deployed, add Worker AI binding to the Pages instance.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Login to…&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/storyflare" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Tech stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudFlare Pages, Workers AI&lt;/li&gt;
&lt;li&gt;Vite&lt;/li&gt;
&lt;li&gt;React&lt;/li&gt;
&lt;li&gt;Mantine&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Journey
&lt;/h2&gt;

&lt;p&gt;This is not the first time I have worked with CloudFlare Workers, but it is the first time I tried the Workers AI services. I have to say there are plenty of models and the inference time is quite fast too. With easy integration using the Cloudflare AI SDK and easy deployment using &lt;code&gt;wrangler&lt;/code&gt;, my experience developing this app has been really fun.&lt;/p&gt;

&lt;p&gt;The idea behind this app is to create a “benchmark”: &lt;em&gt;how easy is it to use CloudFlare AI compared to other services?&lt;/em&gt; So, I created the simplest app I could imagine: (1) create stories using an LLM, and (2) create illustrations from those stories.&lt;/p&gt;

&lt;p&gt;To achieve this, I used two AI models:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multiple Models&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Text Generation&lt;/strong&gt;: &lt;code&gt;@hf/thebloke/mistral-7b-instruct-v0.1-awq&lt;/code&gt; to generate the story and extract a suitable illustration description.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text-to-Image&lt;/strong&gt;: &lt;code&gt;@cf/stabilityai/stable-diffusion-xl-base-1.0&lt;/code&gt; to generate the illustration&lt;/li&gt;
&lt;/ol&gt;
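&lt;p&gt;The two-model chain could be sketched roughly like this. The model names are the ones used here; the &lt;code&gt;Illustration:&lt;/code&gt; prompt convention and helper names are my own assumptions, not the repo's:&lt;/p&gt;

```typescript
// Hypothetical sketch of the two-model chain. Model names are from this post;
// the "Illustration:" convention and helper names are mine, not the repo's.
function splitStory(text: string): { story: string; illustration: string } {
  // The LLM is asked to end with "Illustration: <scene description>"
  const [story, illustration = ""] = text.split(/Illustration:/i);
  return { story: story.trim(), illustration: illustration.trim() };
}

async function makePage(env: any, idea: string) {
  // 1. Text generation: write the story plus an illustration description
  const gen = await env.AI.run("@hf/thebloke/mistral-7b-instruct-v0.1-awq", {
    prompt: `Write a short storybook page about: ${idea}\nEnd with "Illustration:" followed by a one-line scene description.`,
  });
  const { story, illustration } = splitStory(gen.response);
  // 2. Text-to-image: render the illustration (returns binary image data)
  const image = await env.AI.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", {
    prompt: illustration,
  });
  return { story, image };
}
```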

&lt;p&gt;I built the web app using Vite, React, and Mantine, and the backend using CloudFlare Pages Functions. Overall, the creation and integration process was smooth, and I couldn’t appreciate enough how easy wrangler makes it to deploy the web app.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;The CloudFlare platform has grown into a much larger and more complete ecosystem, offering developers a full set of tools to build and scale their apps. With the great developer experience offered by &lt;code&gt;wrangler&lt;/code&gt;, developing apps on CloudFlare is pure bliss.&lt;/p&gt;

&lt;p&gt;However, I also faced a limitation with the Mistral model: the limited number of output tokens. Sometimes the generated stories are incomplete, but this can be mitigated by tuning the prompt to specify a maximum number of sentences per story.&lt;/p&gt;

&lt;p&gt;There is also room for improvement: implementing story continuation in case the previous response is incomplete, and trying different art styles to make the illustrations more appealing.&lt;/p&gt;

</description>
      <category>cloudflarechallenge</category>
      <category>devchallenge</category>
      <category>ai</category>
    </item>
    <item>
      <title>Redis Status Page, Redesigned</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Mon, 29 Aug 2022 16:29:30 +0000</pubDate>
      <link>https://forem.com/fahminlb33/redis-status-page-redesigned-3i7c</link>
      <guid>https://forem.com/fahminlb33/redis-status-page-redesigned-3i7c</guid>
      <description>&lt;p&gt;You might still remember my previous post when I created a status page using Blazor Server and Redis. In the previous post I mentioned one of the downsides of the implementation which is overeliance to LINQ and Redis.OM, caused by my naive approach to data modelling using Redis.&lt;/p&gt;

&lt;p&gt;In this new version, I make the most of Redis data structures (Lists, Sets, and Hashes) to store the service uptime data in a way that is easier to write and query.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The first version
&lt;/h3&gt;

&lt;p&gt;In the first version, I used Redis.OM to store the data and LINQ to query and format it. With this approach, storage and querying quickly became complex, since I tried to model the data as if it were a table.&lt;/p&gt;

&lt;p&gt;Storing the data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// save to redis using OM&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_cnProvider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RedisCollection&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MonitoringSnapshot&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;InsertAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;MonitoringSnapshot&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;UnixTimestamp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToUnixSeconds&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;ServiceName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Healthy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;healthy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Latency&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;latency&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Not that bad, eh?&lt;/p&gt;

&lt;p&gt;Querying the data:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// get collection&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_cnProvider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RedisCollection&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MonitoringSnapshot&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// query all data&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnixTimestamp&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;nowUnixEpoch&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;timestamps&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GroupBy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ServiceName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;First&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;OrderBy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnixTimestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;DateTimeHelpers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromUnixSeconds&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="kt"&gt;long&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnixTimestamp&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;serviceLatency&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GroupBy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ServiceName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToDictionary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;OrderBy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnixTimestamp&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Latency&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;There are so many &lt;code&gt;GroupBy&lt;/code&gt; calls and other operations; not only is the querying process a bit complex, but the stored data is also not easy to scan in RedisInsight.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl17qhhvgngxhob6p6o7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl17qhhvgngxhob6p6o7e.png" alt="All the keys is auto-generated, hard to query directly" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The new version
&lt;/h3&gt;

&lt;p&gt;In the new version, I use Sets, Lists, and Hashes to store the data under my own key pattern. The result is much cleaner data that is easier to query.&lt;/p&gt;
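
&lt;p&gt;The key pattern boils down to one key per service, per day, per metric. As a sketch (the &lt;code&gt;monitoring:*&lt;/code&gt; prefixes here are my assumption; the real format strings live in the project's C# constants):&lt;/p&gt;

```typescript
// One key per service, per day, per metric. The "monitoring:*" prefixes are
// illustrative; the actual format strings are constants in the C# project.
function buildKeys(serviceName: string, date: Date) {
  const day = date.toISOString().slice(0, 10); // yyyy-MM-dd, as in the C# code
  return {
    timestamps: `monitoring:timestamp:${serviceName}:${day}`, // Redis List
    latency: `monitoring:latency:${serviceName}:${day}`,      // Redis List
    health: `monitoring:health:${serviceName}:${day}`,        // Redis List
  };
}
```

&lt;p&gt;Because the date is part of the key, each day's samples live in their own Lists, so a day's worth of data is one &lt;code&gt;LRANGE&lt;/code&gt; away.&lt;/p&gt;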

&lt;p&gt;Storing the data:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// get today date&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;timestampDate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"yyyy-MM-dd"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// save to redis using OM&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;timestampKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TimestampKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timestampDate&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;latencyKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LatencyKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timestampDate&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;healthKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HealthKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timestampDate&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// get redis db&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_cn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetDatabase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// set current status&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;HashSetAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ServiceLastStatusKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;healthy&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// add to set&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SetAddAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ServicesSetKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// add timestamp, health, and latency status&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListRightPushAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timestampKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DateTimeHelpers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToUnixSeconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListRightPushAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;latencyKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;latency&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListRightPushAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;healthKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;healthy&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Querying the data:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// query all data&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;timestampKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TimestampKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;First&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;nowDate&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;timestampValues&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListRangeAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timestampKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;timestamps&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;timestampValues&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;DateTimeHelpers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromUnixSeconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Convert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToInt64&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// get latency data from all services&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;latencyDict&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Dictionary&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;();&lt;/span&gt;
&lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// get latency history&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;latencyKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LatencyKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nowDate&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;latencyHistory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListRangeAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;latencyKey&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Convert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToInt32&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// add to dict&lt;/span&gt;
    &lt;span class="n"&gt;latencyDict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;latencyHistory&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futl256ly3zzmgnuaq15t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futl256ly3zzmgnuaq15t.png" alt="Key is now more organized and easy to scan" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not only is the query logic much more straightforward, the stored data is now much easier to read in RedisInsight. This is a major benefit, since you can also point data visualization tools such as Grafana at the collected data.&lt;/p&gt;

&lt;p&gt;I've updated the code in my repository, so you can simply clone and run it again!&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/RedisStatusPage"&gt;
        RedisStatusPage
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Status page for your microservice apps
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;RedisStatusPage&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;Status page for your next microservice backend apps!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/fahminlb33/status-page-with-redis-5gh7" rel="nofollow"&gt;Read on DEV.to&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This app started as an idea to integrate to my other project, &lt;a href="https://github.com/fahminlb33/ritsu-pi"&gt;ritsu-pi&lt;/a&gt;, it's a Raspberry Pi home server project where you can deploy all kind of apps to a single RPi (or even a cluster). Redis Hackathon comes in just the right moment to give me an extra motivation to finish this project :)&lt;/p&gt;

&lt;p&gt;This project is basically a Status Page (like Github Status, Azure Status, Atlassian Statuspage, or something similar) built on top of Blazor Server and Redis. Here you can define a "health check" and get reports when one of your service is down. Also it has a Discord webhook client that will send you a message when one of your service status has changed.&lt;/p&gt;


&lt;ul&gt;
&lt;li&gt;Monitor HTTP/TCP service uptime&lt;/li&gt;
&lt;li&gt;Simple latency graph over time&lt;/li&gt;
&lt;li&gt;Incident report when one of the services status has changed&lt;/li&gt;
&lt;li&gt;…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;br&gt;
  &lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/RedisStatusPage"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


</description>
    </item>
    <item>
      <title>Status Page, with Redis</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Mon, 22 Aug 2022 13:29:00 +0000</pubDate>
      <link>https://forem.com/fahminlb33/status-page-with-redis-5gh7</link>
      <guid>https://forem.com/fahminlb33/status-page-with-redis-5gh7</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;This project is basically a Status Page (like GitHub Status, Azure Status, Atlassian Statuspage, or something similar) built on top of Blazor Server and Redis. Here you can define a "health check" and get reports when one of your services is down. It also has a Discord webhook client that will send you a message when one of your services' status changes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor HTTP/TCP service uptime&lt;/li&gt;
&lt;li&gt;Simple latency graph over time&lt;/li&gt;
&lt;li&gt;Incident report when one of the services status has changed&lt;/li&gt;
&lt;li&gt;Discord webhook notification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F8360880%2F185859218-9b80ef50-4c8c-486f-baf0-8b846d4feedf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F8360880%2F185859218-9b80ef50-4c8c-486f-baf0-8b846d4feedf.png" alt="RedisStatusPage home page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Project video overview&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/X98C4xnlrOI" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcg8ssri3pzhn5ly4gys.png" alt="Status Page, with Redis Video"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category
&lt;/h3&gt;

&lt;p&gt;MEAN/MERN Maverick&lt;/p&gt;

&lt;h3&gt;
  
  
  Language Used
&lt;/h3&gt;

&lt;p&gt;C#/Blazor Server&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Code
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33" rel="noopener noreferrer"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/RedisStatusPage" rel="noopener noreferrer"&gt;
        RedisStatusPage
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Status page for your microservice apps
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;RedisStatusPage&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;Status page for your next microservice backend apps!&lt;/p&gt;
&lt;p&gt;&lt;a href="https://dev.to/fahminlb33/status-page-with-redis-5gh7" rel="nofollow"&gt;Read on DEV.to&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This app started as an idea to integrate to my other project, &lt;a href="https://github.com/fahminlb33/ritsu-pi" rel="noopener noreferrer"&gt;ritsu-pi&lt;/a&gt;, it's a Raspberry Pi home server project where you can deploy all kind of apps to a single RPi (or even a cluster). Redis Hackathon comes in just the right moment to give me an extra motivation to finish this project :)&lt;/p&gt;
&lt;p&gt;This project is basically a Status Page (like Github Status, Azure Status, Atlassian Statuspage, or something similar) built on top of Blazor Server and Redis. Here you can define a "health check" and get reports when one of your service is down. Also it has a Discord webhook client that will send you a message when one of your service status has changed.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Monitor HTTP/TCP service uptime&lt;/li&gt;
&lt;li&gt;Simple latency graph over time&lt;/li&gt;
&lt;li&gt;Incident report when one of the services status has changed&lt;/li&gt;
&lt;li&gt;…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/RedisStatusPage" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Random advice: Read as many resources before starting a hackathon project...
&lt;/h3&gt;

&lt;p&gt;Here's one of the &lt;em&gt;bruh&lt;/em&gt; moments I had when I was almost finished working on this project.&lt;/p&gt;

&lt;p&gt;Like many developers out there, I usually work with SQL or document databases, and in this project I used Redis.OM to help me get productive with Redis faster via its LINQ APIs.&lt;/p&gt;

&lt;p&gt;But then I checked out one of the Redis sample apps, the Redis Analytics Bitmaps demo.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Bruh... damn&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I had never thought of using many different keys and data structures to store my service monitoring data. Why not use a Set, with the service name and timestamp as parts of the key, and simply query it using something like &lt;code&gt;SCAN&lt;/code&gt;? Why bother with complicated LINQ queries when Redis may have a much cleaner alternative? I simply hadn't learned enough about Redis.&lt;/p&gt;
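
&lt;p&gt;To illustrate that idea: &lt;code&gt;SCAN&lt;/code&gt; filters keys server-side with a glob-style &lt;code&gt;MATCH&lt;/code&gt; pattern. The tiny matcher below mimics those glob semantics (only &lt;code&gt;*&lt;/code&gt; is handled) on an in-memory key list; it's a sketch for intuition, not something the project ships:&lt;/p&gt;

```typescript
// Mimic Redis SCAN's MATCH glob (only '*' handled) on an in-memory key list.
function escapeRegex(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function matchKeys(keys: string[], pattern: string): string[] {
  // each '*' in the glob becomes '.*' in the equivalent regex
  const re = new RegExp("^" + pattern.split("*").map(escapeRegex).join(".*") + "$");
  return keys.filter((k) => re.test(k));
}
```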

&lt;p&gt;At that time I was still thinking of modelling data as tables in Redis, ignoring the fact that Redis is different from SQL databases and document DBs like Mongo, and I was stuck in that mindset for quite some time.&lt;/p&gt;

&lt;p&gt;After I realized there are many more ways to model data in Redis, I felt I had to rewrite my project to embrace Redis data structures.&lt;/p&gt;

&lt;p&gt;While I'm working on the improved version, the current implementation is already finished and working properly, so you can run it right now :D I will try to rewrite it over the weekend, but I don't know whether I will have time to finish it before the hackathon ends.&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Check out &lt;a href="https://redis.io/docs/stack/get-started/clients/#high-level-client-libraries" rel="noopener noreferrer"&gt;Redis OM&lt;/a&gt;, client libraries for working with Redis as a multi-model database.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Use &lt;a href="https://redis.info/redisinsight" rel="noopener noreferrer"&gt;RedisInsight&lt;/a&gt; to visualize your data in Redis.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Sign up for a &lt;a href="https://redis.info/try-free-dev-to" rel="noopener noreferrer"&gt;free Redis database&lt;/a&gt;.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redishackathon</category>
      <category>dotnet</category>
      <category>blazor</category>
      <category>docker</category>
    </item>
    <item>
      <title>Automate Notion Kanban Report to MongoDB</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Sun, 26 Dec 2021 16:13:38 +0000</pubDate>
      <link>https://forem.com/fahminlb33/automate-notion-kanban-report-to-mongodb-a63</link>
      <guid>https://forem.com/fahminlb33/automate-notion-kanban-report-to-mongodb-a63</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;I use Notion Kanban extensively to track my daily tasks, and at the end of every month I create a report to get a summary of my work for the whole month. This way I can manage my work-life balance more efficiently.&lt;/p&gt;

&lt;p&gt;At first it was fun to set up the kanban board and create my first report, but doing it repeatedly every month is kinda cumbersome. So I wanted to automate this process and create a simple dashboard that I can visit anytime.&lt;/p&gt;

&lt;p&gt;The MongoDB Hackathon came at just the right time for me to explore serverless technology to automate this reporting task and store the data in a database.&lt;/p&gt;

&lt;p&gt;This project automates the process of creating reports from a Notion kanban board and then stores the results in a MongoDB Serverless Instance.&lt;/p&gt;
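
&lt;p&gt;The monthly report boils down to aggregating the cards pulled from Notion. A sketch of that summary step (the &lt;code&gt;Card&lt;/code&gt; shape and field names are illustrative assumptions, not the actual Notion schema used by the project):&lt;/p&gt;

```typescript
// Sketch of the monthly summary: group kanban cards by category and
// total their story points. Card shape and field names are assumptions
// for illustration, not the project's actual Notion schema.
interface Card {
  category: string;
  storyPoints: number;
}

function summarize(cards: Card[]): Record<string, { count: number; points: number }> {
  const summary: Record<string, { count: number; points: number }> = {};
  for (const card of cards) {
    // create the bucket on first sight of a category, then accumulate
    const bucket = (summary[card.category] ??= { count: 0, points: 0 });
    bucket.count += 1;
    bucket.points += card.storyPoints;
  }
  return summary;
}
```

&lt;p&gt;A serverless function can run this on a schedule and write the resulting summary document into MongoDB.&lt;/p&gt;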

&lt;h3&gt;
  
  
  Submission Category
&lt;/h3&gt;

&lt;p&gt;Automation Innovation&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Code
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/FahmiNotionAutomation"&gt;
        FahmiNotionAutomation
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      This is my attempt at MongoDB Hackathon
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Notion Kanban Automation&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;This repo contains an Azure Functions project which automatically reads all cards from a Notion database (in this case a kanban board) and then stores them into a MongoDB Serverless Instance. Not only that, my kanban has several properties to make it more informative, like story points, priority, and category, making it somewhat like a JIRA board.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;The Idea&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;I use Notion Kanban extensively to track my daily tasks, and at the end of every month I create a report summarizing my work for the whole month. This way I can manage my work-life balance more effectively.&lt;/p&gt;
&lt;p&gt;At first it was fun to set up the kanban board and create my first report, but doing it repeatedly every month is cumbersome. So I wanted to automate this process and create a simple dashboard that I can visit anytime.&lt;/p&gt;
&lt;p&gt;MongoDB Hackathon came in just the right time for…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/fahminlb33/FahmiNotionAutomation"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;p&gt;You can access the API and the dashboard from the link in my repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3EDMryZg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/fahminlb33/FahmiNotionAutomation/raw/master/dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3EDMryZg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/fahminlb33/FahmiNotionAutomation/raw/master/dashboard.png" alt="" width="685" height="909"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Tech
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.atlas.mongodb.com/tutorial/create-new-serverless-instance/"&gt;MongoDB Atlas (serverless)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/en-us/services/functions/"&gt;Azure Functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.notion.so/"&gt;Notion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://bulma.io/"&gt;Bulma&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://plotly.com/"&gt;Plotly&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://automapper.org/"&gt;AutoMapper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.newtonsoft.com/json"&gt;Newtonsoft.Json&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bolorundurowb/dotenv.net"&gt;dotenv.net&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>atlashackathon</category>
      <category>notion</category>
      <category>azure</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Service Uptime Monitor using Github Actions</title>
      <dc:creator>Fahmi Noor Fiqri</dc:creator>
      <pubDate>Thu, 03 Sep 2020 10:01:00 +0000</pubDate>
      <link>https://forem.com/fahminlb33/service-uptime-monitor-using-github-actions-2egp</link>
      <guid>https://forem.com/fahminlb33/service-uptime-monitor-using-github-actions-2egp</guid>
      <description>&lt;h3&gt;
  
  
  My Workflow
&lt;/h3&gt;

&lt;p&gt;I've been working at a company for about a year now. We have a private cloud, but it's not very reliable. Sometimes our containers suddenly stop, or our GitLab EE becomes unavailable, even during busy hours.&lt;/p&gt;

&lt;p&gt;After today's daily stand-up, one of my coworkers said it would be fun to have a service that monitors our services, to tell when the DevOps team last messed up the containers, like the "days without tricks" meme.&lt;/p&gt;

&lt;p&gt;So I thought, &lt;em&gt;"why not?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here I've created a workflow that checks my service's uptime, running every 5 minutes on GitHub Actions. The action executes a Node.js script that makes an HTTP request to the service, saves the HTTP status to a JSON file, and commits the updated file to report the latest up/downtime.&lt;/p&gt;
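&lt;p&gt;The status-recording part of such a script can be kept as a pure function that takes a probe result and updates the JSON record, with the HTTP request and file write handled separately. This is a minimal sketch under assumed names; the record shape (&lt;code&gt;services&lt;/code&gt;, &lt;code&gt;lastChecked&lt;/code&gt;, &lt;code&gt;lastIncident&lt;/code&gt;) is an illustration, not the actual statuspage-js format.&lt;/p&gt;

```javascript
// Sketch: fold one HTTP probe result into the uptime record.
// The record shape is a hypothetical example, not the real file format.
function updateStatus(record, serviceName, httpStatus, checkedAt) {
  // Treat 2xx/3xx as "up", anything else (4xx, 5xx) as "down".
  const up = httpStatus >= 200 && httpStatus < 400;

  const entry = record.services[serviceName] || { lastIncident: null };
  entry.up = up;
  entry.httpStatus = httpStatus;
  entry.lastChecked = checkedAt;
  if (!up) {
    // Remember when the service was last seen down.
    entry.lastIncident = checkedAt;
  }

  record.services[serviceName] = entry;
  return record;
}

// Example: a probe against GitLab returned 502 Bad Gateway.
const record = updateStatus(
  { services: {} },
  "gitlab",
  502,
  "2020-09-03T10:00:00Z"
);
// record.services.gitlab.up === false, lastIncident is set
console.log(JSON.stringify(record));
```

&lt;p&gt;The surrounding script would fetch the status code (for example with Node's built-in &lt;code&gt;https&lt;/code&gt; module), call this function, and write the result back with &lt;code&gt;fs.writeFileSync&lt;/code&gt; so the workflow's commit step picks up the change.&lt;/p&gt;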

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;Wacky Wildcards&lt;/p&gt;

&lt;h3&gt;
  
  
  Yaml File or Link to Code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Node.js CI&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*/5&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Use Node.js&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;12.x&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm install&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm start&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Commit and push if changed&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;git add .&lt;/span&gt;
        &lt;span class="s"&gt;git diff&lt;/span&gt;
        &lt;span class="s"&gt;git config --global user.email "github-action-bot@example.com"&lt;/span&gt;
        &lt;span class="s"&gt;git config --global user.name "GitHub Action Bot"&lt;/span&gt;
        &lt;span class="s"&gt;git commit -m "Updated incident" -a || echo "No changes to commit"&lt;/span&gt;
        &lt;span class="s"&gt;git push&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/fahminlb33" rel="noopener noreferrer"&gt;
        fahminlb33
      &lt;/a&gt; / &lt;a href="https://github.com/fahminlb33/statuspage-js" rel="noopener noreferrer"&gt;
        statuspage-js
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Monitor my squad container status
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;p&gt;For the HTML I've used Matt Smith's countdown template from &lt;a href="https://codepen.io/AllThingsSmitty/pen/JJavZN" rel="noopener noreferrer"&gt;https://codepen.io/AllThingsSmitty/pen/JJavZN&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>actionshackathon</category>
    </item>
  </channel>
</rss>
