<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Esther </title>
    <description>The latest articles on Forem by Esther  (@catheryn).</description>
    <link>https://forem.com/catheryn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F411352%2F442a0b9a-5856-4135-b72f-7092f2dc2ef9.JPG</url>
      <title>Forem: Esther </title>
      <link>https://forem.com/catheryn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/catheryn"/>
    <language>en</language>
    <item>
      <title>Upload Large Folders to Cloudflare R2</title>
      <dc:creator>Esther </dc:creator>
      <pubDate>Sun, 05 Apr 2026 18:18:40 +0000</pubDate>
      <link>https://forem.com/catheryn/upload-large-folders-to-cloudflare-r2-456o</link>
      <guid>https://forem.com/catheryn/upload-large-folders-to-cloudflare-r2-456o</guid>
      <description>&lt;p&gt;Cloudflare R2 object storage has a limitation: the web interface only allows uploading folders containing fewer than 100 files. To upload folders with more than 100 files, you typically need to set up Cloudflare Workers or use the S3 API with custom code.&lt;/p&gt;

&lt;p&gt;Rclone makes this process easy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 - Install Rclone
&lt;/h2&gt;

&lt;p&gt;Rclone is a command-line tool for managing files on cloud storage. It works well for uploading many files from your local machine or for copying data from other cloud storage providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;macOS (Homebrew):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;rclone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Windows:&lt;/strong&gt;&lt;br&gt;
Download the installer from &lt;a href="https://rclone.org/install/#windows"&gt;rclone.org/install/#windows&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2 - Create Cloudflare API Keys
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfwcln1vtfosvh6r86r2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfwcln1vtfosvh6r86r2.png" alt="An image showing cloudflare R2 dashboard highlighting the manage button" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;From your Cloudflare R2 dashboard, click the Manage button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a new user API token:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Enter a &lt;strong&gt;Token Name&lt;/strong&gt; (e.g. &lt;em&gt;r2-upload-token&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Permission&lt;/strong&gt;, select &lt;em&gt;Object Read &amp;amp; Write&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Specify buckets&lt;/strong&gt;, choose the bucket(s) you want to allow access to or allow all.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After creation, you will receive an Access Key ID, a Secret Access Key, and an endpoint (e.g. https://&amp;lt;ACCOUNT_ID&amp;gt;.r2.cloudflarestorage.com).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Save these credentials immediately because you won’t be able to see the secret key again.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Step 3 - Configure Rclone
&lt;/h2&gt;

&lt;p&gt;Run the configuration command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Select &lt;code&gt;n&lt;/code&gt; to create a new remote&lt;/li&gt;
&lt;li&gt;Enter a name for the new remote &lt;em&gt;(you'll use this later)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;For storage type, select &lt;em&gt;Amazon S3 Compliant Storage Providers&lt;/em&gt; (option 4 at the time of writing; menu numbers can change between rclone versions)&lt;/li&gt;
&lt;li&gt;For provider, select &lt;em&gt;Cloudflare&lt;/em&gt; (option 7)&lt;/li&gt;
&lt;li&gt;For env_auth, select 1 (enter AWS credentials in the next step)&lt;/li&gt;
&lt;li&gt;Enter your access_key_id&lt;/li&gt;
&lt;li&gt;Enter your secret_access_key&lt;/li&gt;
&lt;li&gt;For region, choose auto (leave empty or enter 1)&lt;/li&gt;
&lt;li&gt;Enter your endpoint URL&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;n&lt;/code&gt; to skip the advanced config&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;y&lt;/code&gt; to keep the remote&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;q&lt;/code&gt; to quit the config&lt;/li&gt;
&lt;/ul&gt;
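&lt;p&gt;If everything went well, the remote is saved to rclone's config file (locate it with &lt;code&gt;rclone config file&lt;/code&gt;). The entry should look roughly like this sketch, where the remote name, keys, and account ID are placeholders:&lt;/p&gt;

```ini
[my-rclone-remote]
type = s3
provider = Cloudflare
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = auto
endpoint = https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
```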

&lt;h2&gt;
  
  
  Step 4: Upload Your Folder
&lt;/h2&gt;

&lt;p&gt;Use the rclone copy command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rclone copy &lt;span class="nt"&gt;-vv&lt;/span&gt; &amp;lt;local_folder_path&amp;gt; &amp;lt;remote_name&amp;gt;:&amp;lt;bucket_name&amp;gt;/&amp;lt;destination_folder&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rclone copy &lt;span class="nt"&gt;-vv&lt;/span&gt; /Users/Dev/project/images my-rclone-remote:images/apparels
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-vv&lt;/code&gt; flag enables verbose output so you can watch the upload progress. Rclone also skips any file that has already been uploaded, so you can safely re-run the command if a transfer is interrupted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Verify the Upload
&lt;/h2&gt;

&lt;p&gt;List the bucket and count the files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rclone &lt;span class="nb"&gt;ls&lt;/span&gt; &amp;lt;remote_name&amp;gt;:&amp;lt;bucket_name&amp;gt; | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>fileuploads</category>
      <category>s3</category>
      <category>automation</category>
      <category>cloudstorage</category>
    </item>
    <item>
      <title>What MCP Actually Is (And Why It Exists)</title>
      <dc:creator>Esther </dc:creator>
      <pubDate>Sat, 21 Mar 2026 19:05:18 +0000</pubDate>
      <link>https://forem.com/catheryn/what-mcp-actually-is-and-why-it-exists-3e1j</link>
      <guid>https://forem.com/catheryn/what-mcp-actually-is-and-why-it-exists-3e1j</guid>
      <description>&lt;h2&gt;
  
  
  What is MCP?
&lt;/h2&gt;

&lt;p&gt;MCPs are a way to give AI applications the external context/capabilities they need to complete their mission.&lt;/p&gt;

&lt;p&gt;Kind of a blanket statement, I know. You're probably wondering: &lt;em&gt;doesn’t RAG already do this? What about tools?&lt;/em&gt; And you'd be right to think that, &lt;em&gt;somewhat&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;These are all ways to give AI context.&lt;/p&gt;

&lt;p&gt;With RAG, you typically embed and store context somewhere (rather than blowing past your context window by sending an entire PDF to the LLM). This context is then retrieved each time the user makes a query to your app.&lt;/p&gt;

&lt;p&gt;Although, realistically, you should have some form of reasoning that decides whether the user's query actually needs additional context because they could literally just be asking your app:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"How are you?" 😭&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Anyways, back to MCPs.&lt;/p&gt;

&lt;p&gt;MCPs are more closely related to &lt;strong&gt;tools (function calling)&lt;/strong&gt;, but they also support &lt;strong&gt;resources&lt;/strong&gt;, which makes them relevant to RAG-like systems too.&lt;/p&gt;

&lt;p&gt;And I’m going to explain &lt;em&gt;why MCP exists&lt;/em&gt; with a short example:&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem MCP Solves
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You have a nice OpenAI agent that does research. This agent makes tool calls to Google Scholar using the Serper API + a bunch of other services.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;PS: Serper wraps the Google API because working directly with Google is a nightmare.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Along the way, you realise:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“&lt;em&gt;Ugh, I don't even like OpenAI. Why am I using this?&lt;/em&gt;”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also, they jumped into that deal with the Pentagon, sayonara! ✌🏽 LangChain, here I come.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You read the LangChain docs and realise:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“&lt;em&gt;Dang. I have to rewrite my tools from scratch to fit LangChain’s syntax.&lt;/em&gt;”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So now you're rewriting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;agent logic
&lt;/li&gt;
&lt;li&gt;tool logic
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Months later, you realise:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you hate LangChain
&lt;/li&gt;
&lt;li&gt;you hate their syntax
&lt;/li&gt;
&lt;li&gt;you hate installing 100 packages just to use a new LLM
&lt;/li&gt;
&lt;li&gt;you especially hate &lt;code&gt;RunnablePassthrough&lt;/code&gt; and those damn pipes &lt;code&gt;|&lt;/code&gt; 😭&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So now, CrewAI it is 🫠&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You read CrewAI docs. And yes, you guessed it. You're rewriting everything again: agents, tools, the whole shebang. Sigh.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this point, you're probably sick of it all.&lt;/p&gt;

&lt;p&gt;Unfortunately, you &lt;strong&gt;cannot avoid rewriting your agents&lt;/strong&gt; when switching platforms (for now).&lt;/p&gt;

&lt;p&gt;BUUUUT you &lt;em&gt;can&lt;/em&gt; avoid rewriting your &lt;strong&gt;tools&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  🚀 Enter MCP
&lt;/h3&gt;

&lt;p&gt;MCP stands for &lt;strong&gt;Model Context Protocol&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s a standard created by Anthropic to ensure that LLMs can connect to external data sources and tools seamlessly.&lt;/p&gt;

&lt;p&gt;What this means is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can build your tools once (as an MCP server), and any MCP-compatible platform can use them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No more major rewrites every time you switch frameworks 🎉&lt;/p&gt;

&lt;p&gt;MCPs were initially built to give LLMs like Claude (and potentially other models via MCP clients) access to more data sources. But now, they’ve evolved into something much bigger: a standard way for &lt;strong&gt;AI agents to access tools and context&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of MCP
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Access to a growing ecosystem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With MCP, you get access to a whole ecosystem of tools. Many companies are building MCP servers for their platforms, exposing powerful capabilities.&lt;/p&gt;

&lt;p&gt;You can explore them here: &lt;a href="https://mcpservers.org/" rel="noopener noreferrer"&gt;https://mcpservers.org/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Access to local + external data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MCPs can access both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;external systems (APIs, services)
&lt;/li&gt;
&lt;li&gt;local data (your machine, private files, internal knowledge bases)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So if you want Claude, ChatGPT, Cursor, or any external agent to access your &lt;strong&gt;private local knowledge&lt;/strong&gt;, MCP is a great option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Structured access to capabilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MCP doesn’t just give access, it structures it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core MCP Concepts
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Tools:&lt;/strong&gt; Tools provide &lt;strong&gt;capabilities&lt;/strong&gt;. They allow AI applications to perform actions on behalf of users.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;external APIs that do things&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; Resources provide &lt;strong&gt;information&lt;/strong&gt;. They allow AI systems to retrieve structured data and pass it as context to models.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;knowledge, documents, data sources&lt;/p&gt;
&lt;/blockquote&gt;
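&lt;p&gt;To make the tools-vs-resources split concrete, here is a tiny plain-Python sketch (deliberately &lt;em&gt;not&lt;/em&gt; the real MCP SDK; every name in it is made up) of what a server conceptually advertises to a client:&lt;/p&gt;

```python
import json

# Illustrative only: a "tool" performs an action, a "resource" provides data.
TOOLS = {}
RESOURCES = {}

def tool(name, description):
    # Register a callable capability, similar in spirit to how an
    # MCP server advertises tools to clients.
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("search_papers", "Search scholarly articles by keyword")
def search_papers(query):
    # A real tool would call an external API (e.g. a Serper-style wrapper).
    return [f"result for {query!r}"]

# A resource is just structured data the model can pull in as context.
RESOURCES["readme"] = {"mimeType": "text/plain", "text": "Internal project notes"}

def list_capabilities():
    # The first thing an MCP client does is ask the server what it offers.
    return {
        "tools": [{"name": n, "description": t["description"]} for n, t in TOOLS.items()],
        "resources": list(RESOURCES.keys()),
    }

print(json.dumps(list_capabilities(), indent=2))
```

&lt;p&gt;The real protocol wraps exactly this kind of listing in JSON-RPC messages, which is what lets any MCP-compatible client discover and call your tools without caring which framework you built them with.&lt;/p&gt;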

&lt;p&gt;We also have Prompts, but I got tired writing this article so read about it here: &lt;a href="https://modelcontextprotocol.io/docs/learn/server-concepts#resources" rel="noopener noreferrer"&gt;https://modelcontextprotocol.io/docs/learn/server-concepts#resources&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Important Note
&lt;/h3&gt;

&lt;p&gt;For an agent to use MCP tools, it &lt;strong&gt;must&lt;/strong&gt; support the MCP protocol (i.e., be an MCP client). That said, platforms like CrewAI abstract this away, so you don’t always have to set it up manually; some platforms, however, might not have this capability.&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP Deployment Types
&lt;/h3&gt;

&lt;p&gt;MCP servers can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local&lt;/strong&gt; (running on your machine)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote&lt;/strong&gt; (hosted elsewhere)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When MCP first came out, it was heavily tied to Claude and required you to build your own server. Now, there are tons of ready-to-use servers.&lt;/p&gt;

&lt;p&gt;Check some here: &lt;a href="https://platform.claude.com/docs/en/agents-and-tools/remote-mcp-servers" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/agents-and-tools/remote-mcp-servers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most platforms now just ask for an MCP URL, and you’re good to go.&lt;/p&gt;

&lt;p&gt;Buuuuut, you can build your own. You don’t have to rely on existing servers. If you need a very specific capability, you can build your own MCP server tailored to your use case.&lt;/p&gt;

&lt;h3&gt;
  
  
  When It Makes Sense to Use MCP
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You’re a large company and want a &lt;strong&gt;standardized way to expose platform capabilities&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You utilise a &lt;strong&gt;lot of tools&lt;/strong&gt; (this is where MCP really shines)
&lt;/li&gt;
&lt;li&gt;You want &lt;strong&gt;third-party agents&lt;/strong&gt; to use your tools
&lt;/li&gt;
&lt;li&gt;You’re building &lt;strong&gt;reusable capabilities&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You have an &lt;strong&gt;ecosystem of agents&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You don’t want your tooling layer locked into one platform (👀 OpenAI)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Take your pick 😁.&lt;/p&gt;

&lt;h3&gt;
  
  
  When It Does NOT Make Sense 🚫
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Because it’s “trending.”
&lt;/li&gt;
&lt;li&gt;You have one agent calling one tool. Please just write your tool yourself. 😭&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MCP is for &lt;strong&gt;robust systems&lt;/strong&gt;, not overengineering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where MCP Does NOT Help ❌
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. It does NOT standardize agent logic&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you switch platforms, you will still rewrite your agents. That said, with modern AI coding tools, it’s like an hour of work, max.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. It does NOT reduce complexity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You now have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an MCP server
&lt;/li&gt;
&lt;li&gt;a protocol layer
&lt;/li&gt;
&lt;li&gt;more moving parts
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For small projects, this is overkill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Latency tradeoff&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before:&lt;br&gt;
Agent → Tool&lt;/p&gt;

&lt;p&gt;Now:&lt;br&gt;
Agent → MCP → Tool&lt;/p&gt;

&lt;p&gt;Congratulations! You just introduced an extra hop. 👏&lt;/p&gt;

&lt;h3&gt;
  
  
  My Final Thoughts
&lt;/h3&gt;

&lt;p&gt;MCP is not about making agents easier or doing fancy things with LLMs. It’s about making &lt;strong&gt;systems reusable, scalable, and interoperable&lt;/strong&gt;. It's also about giving agents all the tools (pun intended) they need to succeed. You can do some serious analysis by plugging an LLM into internal data.&lt;/p&gt;

&lt;p&gt;Now, we wait for Anthropic to release a protocol that standardizes agents too 💀&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>mcp</category>
      <category>rag</category>
    </item>
    <item>
      <title>Motion Detection In OpenCV Explained In-Depth</title>
      <dc:creator>Esther </dc:creator>
      <pubDate>Wed, 01 Jan 2025 16:01:15 +0000</pubDate>
      <link>https://forem.com/catheryn/motion-detection-in-opencv-explained-in-depth-di6</link>
      <guid>https://forem.com/catheryn/motion-detection-in-opencv-explained-in-depth-di6</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;TLDR: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Background are the pixels in a frame that remain static over time.&lt;/li&gt;
&lt;li&gt;Foreground are the pixels in a frame that keep changing.&lt;/li&gt;
&lt;li&gt;For each frame, a log of the pixels are kept.&lt;/li&gt;
&lt;li&gt;To detect motion, we compare the current pixels in the current frame with their history.&lt;/li&gt;
&lt;li&gt;If there is a massive change in intensity, we can safely call it motion detection.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Background subtraction is a technique used in computer vision to identify moving objects in a video by separating them from the background, basically subtracting an object from the background so they can be tracked independently. &lt;/p&gt;

&lt;p&gt;As you probably know, frames are individual pictures or images in a video. A video is made up of many frames shown quickly, one after the other, to create the illusion of movement. Think of frames like pages in a flipbook. When you flip through them fast, they make an animated story.&lt;/p&gt;

&lt;p&gt;In background subtraction, each frame of the video is compared to a background model (a static reference image of the scene created at different points in time). Any significant difference between the current frame being shown and the background model is considered as &lt;strong&gt;foreground&lt;/strong&gt;, thus indicating motion or change. &lt;/p&gt;

&lt;p&gt;One subtraction approach available in OpenCV is K-nearest neighbors (KNN). This approach classifies each pixel as background or foreground by looking at the color values of its K nearest neighbors within a certain time window (the history). If the nearest neighbours are within a certain threshold (which represents the "closeness" you accept), the pixel is considered similar to its historical values and is classified as background.&lt;br&gt;
If the distance is large, the pixel is classified as foreground.&lt;/p&gt;

&lt;p&gt;For example, if a pixel has been black (0) for the last 399 frames and suddenly turns white (255) in the current frame:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The algorithm checks the nearest neighbors (the number of nearest neighbors is decided internally by the algorithm) from the 400-frame history.&lt;/li&gt;
&lt;li&gt;If all the nearest neighbors are black, the current white pixel will likely be classified as foreground because it's too different from the background model and thus motion is detected through a change in the pixels.&lt;/li&gt;
&lt;/ol&gt;
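&lt;p&gt;The same idea, reduced to a toy plain-Python sketch for a single pixel (an illustration of the concept, not OpenCV's actual implementation; the parameter values are made up):&lt;/p&gt;

```python
def classify_pixel(current, history, k=5, dist_threshold=20, min_matches=3):
    # Distance from the current intensity to every stored historical value.
    distances = sorted(abs(current - past) for past in history)
    # Count how many of the k nearest historical values are "close enough".
    matches = sum(1 for d in distances[:k] if dist_threshold >= d)
    return "background" if matches >= min_matches else "foreground"

history = [0] * 399                   # pixel has been black for 399 frames
print(classify_pixel(0, history))     # background
print(classify_pixel(255, history))   # foreground: too far from its history
```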

&lt;p&gt;The OpenCV function looks like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;retval = cv2.createBackgroundSubtractorKNN([history[, dist2Threshold[, detectShadows]]])&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How KNN Works for Background Subtraction:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keeping a history:&lt;/strong&gt; For every single pixel in the frame, KNN maintains a history of its previous pixel values. Imagine every single pixel has an array of historical values from past frames. This history acts as a model of what that pixel's intensity should look like if it were part of the background. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Comparing current frame with history:&lt;/strong&gt; When a new frame is captured, the KNN algorithm checks the intensity value of each pixel and compares it with the stored history of that pixel.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, background subtraction is a simple and effective way to identify moving objects in a scene by comparing each frame to a model of the static background.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Canny Edge Detection</title>
      <dc:creator>Esther </dc:creator>
      <pubDate>Tue, 19 Nov 2024 02:40:05 +0000</pubDate>
      <link>https://forem.com/catheryn/canny-edge-detection-3h4l</link>
      <guid>https://forem.com/catheryn/canny-edge-detection-3h4l</guid>
      <description>&lt;p&gt;Edge detection is an image processing technique in computer vision that involves identifying the outline of objects in an image. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqhh392prmwzeppi5iiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqhh392prmwzeppi5iiw.png" alt="A picture depicting edge detection in OpenCV" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Canny edge detection is one of the best techniques for edge detection. It’s designed to detect clean, well-defined edges while reducing noise and avoiding false edges. It uses a double thresholding method to detect edges in an image: a high and a low threshold.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Canny&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;photo.jpg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;img_edges&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Canny&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// 100 is the low threshold&lt;/span&gt;
&lt;span class="c1"&gt;// 200 is the high threshold&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The thresholds decide what becomes an edge and what doesn't. To make this decision, we use gradient values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If a gradient value is above the high threshold, it’s considered a strong edge and added to the edge map. (strong edge)&lt;/li&gt;
&lt;li&gt;If it’s below the low threshold, it’s ignored. (non edge)&lt;/li&gt;
&lt;li&gt;If it is between the high and low threshold, it is only kept if it is connected to a strong edge. (potential edge)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are gradient values?
&lt;/h2&gt;

&lt;p&gt;Gradient values are not the raw image values. They are computed numbers derived from the raw image by checking how much the pixel intensity changes in an image. We use gradient values because the raw image values don’t directly tell us where the edges are. &lt;/p&gt;

&lt;p&gt;A simple example to illustrate changes in pixel intensity: if two neighboring pixels have very different values (e.g. 50 and 200 and the gradient value is 150), there’s a big change — it might be an edge. But if two neighboring pixels have similar values (e.g. 50 and 52 and the gradient value is 2), there’s little change &amp;amp; very little possibility of being an edge. &lt;/p&gt;
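&lt;p&gt;That neighbour-difference intuition fits in a couple of lines of Python (a toy illustration; real detectors compute gradients with filters such as Sobel, and in both directions):&lt;/p&gt;

```python
# Absolute differences between neighbouring pixel intensities along one row:
# a big jump suggests an edge, a tiny one does not.
row = [50, 52, 50, 200, 198]
gradients = [abs(b - a) for a, b in zip(row, row[1:])]
print(gradients)  # [2, 2, 150, 2]
```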

&lt;p&gt;After the gradient values are computed, they are then compared against the thresholds to decide what qualifies as a strong edge, a potential edge or a non edge.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do we know values in between thresholds are connected to a strong edge?
&lt;/h3&gt;

&lt;p&gt;By using a method called edge tracking by hysteresis, which decides which potential edges are connected to strong edges and should be kept versus discarded. The algorithm works by looking at the 8 neighbors (directly adjacent pixels: top, bottom, left, right, and diagonals) of each potential edge pixel. Any pixel directly or indirectly connected to a strong edge is included in the final result.&lt;/p&gt;

&lt;h2&gt;
  
  
  How edge tracking works:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   50   80  110   90
   70  250  190  120
   60  180  150   70
   40   60   80   50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Consider the gradient map above:&lt;/p&gt;

&lt;p&gt;After applying thresholds (low = 100, high = 200), the strong edge pixels ( &amp;gt; 200) are immediately kept as edges. Here, only the pixel 250 is marked as a strong edge.&lt;/p&gt;

&lt;p&gt;The potential edge pixels (100–200) are 110, 190, 120, 180 and 150. Now that we have a pool of potential edges, we perform edge tracking to decide what gets to stay &amp;amp; what is discarded. The algorithm checks if any of the potential edges are directly or indirectly connected to the strong edge (250).&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;190 is a neighbor of 250, it is directly connected to a strong edge so it's kept.&lt;/li&gt;
&lt;li&gt;150 is a neighbor of 190, it is indirectly connected to a strong edge so it’s also kept.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Weak edge pixels (&amp;lt; 100) like 80, 90 and the rest are completely ignored, as they are considered noise. They will not be part of the final image.&lt;/p&gt;
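&lt;p&gt;The walkthrough above can be sketched as a small flood fill over the same gradient map (illustrative plain Python, not OpenCV's implementation):&lt;/p&gt;

```python
from collections import deque

grad = [
    [50, 80, 110, 90],
    [70, 250, 190, 120],
    [60, 180, 150, 70],
    [40, 60, 80, 50],
]
low, high = 100, 200
rows, cols = len(grad), len(grad[0])

# Strong edges (above the high threshold) are kept unconditionally.
strong = [(r, c) for r in range(rows) for c in range(cols) if grad[r][c] > high]
kept = set(strong)

# Flood-fill outward through 8-connected potential edges (at or above low).
queue = deque(strong)
while queue:
    r, c = queue.popleft()
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            inside = rows > nr >= 0 and cols > nc >= 0
            if inside and (nr, nc) not in kept and grad[nr][nc] >= low:
                kept.add((nr, nc))
                queue.append((nr, nc))

print(sorted(grad[r][c] for r, c in kept))  # [110, 120, 150, 180, 190, 250]
```

&lt;p&gt;Note that 110 and 120 also survive here: 110 touches the strong pixel directly, and 120 touches 190, which is connected to it.&lt;/p&gt;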

</description>
      <category>computervision</category>
      <category>ai</category>
      <category>python</category>
    </item>
    <item>
      <title>How to Build a Telegram Bot in 5 Simple Steps</title>
      <dc:creator>Esther </dc:creator>
      <pubDate>Mon, 14 Oct 2024 00:48:49 +0000</pubDate>
      <link>https://forem.com/catheryn/how-to-build-a-telegram-bot-in-5-simple-steps-4964</link>
      <guid>https://forem.com/catheryn/how-to-build-a-telegram-bot-in-5-simple-steps-4964</guid>
      <description>&lt;p&gt;Building a Telegram bot might seem difficult, but it’s easier than you think! Whether you want to create a fun chatbot, an information service, or something unique, Telegram’s API provides a flexible framework for developers of all skill levels. In this guide, we’ll walk through the process step-by-step, so by the end, you'll have a fully functioning bot ready to interact with users.&lt;/p&gt;

&lt;p&gt;Before we get started, a little introduction on how it works. Telegram bots are powered by the Telegram Bot API: &lt;code&gt;https://api.telegram.org/bot&amp;lt;YOUR_BOT_TOKEN&amp;gt;&lt;/code&gt;. The bot is a script that queries this API over HTTPS. While you can interact with the API directly, libraries make it easier; this guide is based on a Python library. You can find the API documentation &lt;a href="https://core.telegram.org/bots/api" rel="noopener noreferrer"&gt;here&lt;/a&gt;. You can also find the finished scripts here: &lt;a href="https://github.com/Queen-esther01/Bots/tree/main/node" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;, &lt;a href="https://github.com/Queen-esther01/Bots/tree/main/python" rel="noopener noreferrer"&gt;Python&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's start this exciting journey of bringing your bot idea to life in just five simple steps!&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Create a New Bot &amp;amp; Generate Bot Token&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The first step in building a Telegram bot is to create one via the Telegram platform. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Telegram and search for &lt;strong&gt;BotFather&lt;/strong&gt;, the official Telegram bot for creating and managing bots.&lt;/li&gt;
&lt;li&gt;Start a chat with the BotFather and use the command &lt;code&gt;/newbot&lt;/code&gt;. It will ask you to choose a name for your bot and provide a unique username.&lt;/li&gt;
&lt;li&gt;Once you’ve successfully created your bot, BotFather will provide a &lt;strong&gt;Bot Token&lt;/strong&gt;— this will serve as the authentication key for your bot. &lt;strong&gt;Make sure to store this token securely&lt;/strong&gt; because you’ll need it later to communicate with Telegram's API.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Install Dependencies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that you have your bot ready, it's time to set up your development environment. Before you install dependencies, you will need to create a virtual environment where the installed dependencies will reside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   python &lt;span class="nt"&gt;-m&lt;/span&gt; venv &amp;lt;YOUR-VIRTUAL_ENVIRONMENT-NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
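&lt;p&gt;The environment then has to be activated before installing anything into it. For example (self-contained, assuming the name &lt;code&gt;venv&lt;/code&gt;):&lt;/p&gt;

```shell
# Create and activate a virtual environment named "venv"
python3 -m venv venv
. venv/bin/activate              # macOS / Linux
# On Windows, run instead:  venv\Scripts\activate
python -c "import sys; print(sys.prefix)"   # points inside venv when active
```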



&lt;p&gt;Next, you’ll need the Python telegram library installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Install &lt;strong&gt;python-telegram-bot&lt;/strong&gt; by running this in your terminal:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; pip &lt;span class="nb"&gt;install &lt;/span&gt;python-telegram-bot
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This library provides an easy way to interact with the Telegram Bot API.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Write the Code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;With your environment set up, it’s time to write the code for your bot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;   &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;telegram&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Update&lt;/span&gt;
   &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;telegram.ext&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ApplicationBuilder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CommandHandler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ContextTypes&lt;/span&gt;

   &lt;span class="c1"&gt;# Function that handles /start command
&lt;/span&gt;   &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Update&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ContextTypes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DEFAULT_TYPE&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
       &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Hello! I am your bot. How can I help you today?&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

   &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
       &lt;span class="c1"&gt;# Use your bot token here
&lt;/span&gt;       &lt;span class="n"&gt;application&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ApplicationBuilder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;token&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

       &lt;span class="c1"&gt;# Register the /start command
&lt;/span&gt;       &lt;span class="n"&gt;application&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;CommandHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;start&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

       &lt;span class="c1"&gt;# Run the bot until you send a signal to stop
&lt;/span&gt;       &lt;span class="n"&gt;application&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_polling&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
       &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Set the &lt;code&gt;token&lt;/code&gt; variable to the bot token you got from BotFather.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;CommandHandler&lt;/code&gt; registers the &lt;code&gt;/start&lt;/code&gt; command and runs the &lt;code&gt;start&lt;/code&gt; function each time the command is received.&lt;/li&gt;
&lt;li&gt;In this simple example, when a user sends the &lt;code&gt;/start&lt;/code&gt; command, the bot replies with a greeting.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Test the Bot&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that the code is ready, it’s time to test your bot. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Run your bot script in your terminal:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; python bot.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open Telegram, search for your bot using the unique username you created, and send the &lt;code&gt;/start&lt;/code&gt; command. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The bot should reply with your predefined message: &lt;code&gt;Hello! I am your bot. How can I help you today?&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. &lt;strong&gt;Deploy the Bot Code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now you have a functional bot 🥳. You can build upon this code to add more commands and features depending on the purpose of your bot. Once you’ve tested the bot and are happy with its functionality, it’s time to deploy it so it can run 24/7. Common hosting options include Heroku, AWS, and &lt;a href="https://learn.microsoft.com/en-us/azure/app-service/quickstart-python?tabs=flask%2Cwindows%2Cazure-portal%2Cazure-cli-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli" rel="noopener noreferrer"&gt;Azure&lt;/a&gt;, which has a free app service plan.&lt;/p&gt;

&lt;p&gt;Ensure the bot is running continuously and can restart automatically if there are errors.&lt;/p&gt;

&lt;p&gt;Some Extra Updates That Can Be Made:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perhaps you'd like to show that the bot is typing, just like a regular user. Add the line below to the &lt;code&gt;start&lt;/code&gt; function:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Send typing action to user - to show that the bot is typing
&lt;/span&gt;    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bot&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_chat_action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chat_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;effective_chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ChatAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TYPING&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the previous example, we saw how to respond to commands (words that start with a /, like /start). To respond to normal text messages, we need a message handler:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;telegram.ext&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MessageHandler&lt;/span&gt;
&lt;span class="n"&gt;application&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;MessageHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TEXT&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COMMAND&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;handle_message&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
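&lt;p&gt;The &lt;code&gt;handle_message&lt;/code&gt; function registered above still has to be defined. A minimal sketch (assuming python-telegram-bot v20+ style async handlers; the echo behaviour is purely illustrative):&lt;/p&gt;

```python
# Sketch of a plain-text message handler. python-telegram-bot v20+
# calls async handlers with (update, context); no type annotations
# are needed for this to work.
async def handle_message(update, context):
    # Echo the user's text back so we can see the handler firing.
    await update.message.reply_text(f"You said: {update.message.text}")
```

&lt;p&gt;Any async function with this signature can be passed to &lt;code&gt;MessageHandler&lt;/code&gt;.&lt;/p&gt;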



&lt;p&gt;Have fun building!&lt;/p&gt;

</description>
      <category>telegramapi</category>
      <category>python</category>
      <category>node</category>
    </item>
    <item>
      <title>Calculating Adaptive Threshold in OpenCV</title>
      <dc:creator>Esther </dc:creator>
      <pubDate>Fri, 12 Jul 2024 02:59:27 +0000</pubDate>
      <link>https://forem.com/catheryn/calculating-adaptive-threshold-in-opencv-1hh3</link>
      <guid>https://forem.com/catheryn/calculating-adaptive-threshold-in-opencv-1hh3</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/catheryn/binary-images-image-thresholding-282"&gt;Read my article on Thresholding and Binary Images for a better background&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Adaptive thresholding is a technique used to convert a grayscale image to a binary image (black and white). The threshold value is calculated for smaller regions (blocks) of the image rather than using a single global threshold value for the entire image.&lt;/p&gt;

&lt;p&gt;We can perform adaptive thresholding in OpenCV using this method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;img = cv2.adaptiveThreshold(src, maxValue, adaptiveMethod, thresholdType, blockSize, C[, dst])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An explanation of the arguments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;src:&lt;/strong&gt; The image to be worked on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;maxValue:&lt;/strong&gt; The maximum value to use with the thresholding type.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;adaptiveMethod:&lt;/strong&gt; The adaptive thresholding method to use. The options are &lt;a href="https://docs.opencv.org/4.x/d7/d1b/group__imgproc__misc.html" rel="noopener noreferrer"&gt;ADAPTIVE_THRESH_MEAN_C and ADAPTIVE_THRESH_GAUSSIAN_C&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;thresholdType:&lt;/strong&gt; The type of thresholding to apply. In this article we use THRESH_BINARY. Read more about the &lt;a href="https://docs.opencv.org/4.x/d7/d1b/group__imgproc__misc.html#gaa9e58d2860d4afa658ef70a9b1115576" rel="noopener noreferrer"&gt;different threshold types&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;blockSize:&lt;/strong&gt; The size of the block to calculate the threshold for.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;C:&lt;/strong&gt; A constant subtracted from the calculated mean. This constant fine-tunes the thresholding.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Mean Calculation
&lt;/h2&gt;

&lt;p&gt;First, we read the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read the original image.
img = cv2.imread('test_image.png', cv2.IMREAD_GRAYSCALE)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us assume the image translates to these numbers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[218 217 216 221 220 220]
 [211 210 210 215 216 216]
 [212 211 211 214 216 216]
 [139 138 137 103 105 105]
 [190 190 190 170 170 170]
 [255 255 255 255 255 255]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we specify our adaptive thresholding method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;img_thresh_adp = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, 7)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;img:&lt;/strong&gt; This is the image we have translated into numbers using the cv2.imread method.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;255:&lt;/strong&gt; This is the maximum value to use after calculations. Pixels above our calculated result will be set to 255 (white), while pixels below will be set to 0 (black).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cv2.ADAPTIVE_THRESH_MEAN_C:&lt;/strong&gt; This is the adaptive thresholding algorithm. It calculates the threshold for a pixel based on the mean of the pixels around it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cv2.THRESH_BINARY:&lt;/strong&gt; THRESH_BINARY means that pixels above the threshold value will be set to the maximum value (255), and pixels below the threshold value will be set to 0.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3:&lt;/strong&gt; This means a 3 x 3 pixel area around each pixel is considered.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;7:&lt;/strong&gt; This means after calculating the mean, we subtract 7 from it.&lt;/li&gt;
&lt;/ul&gt;
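&lt;p&gt;We can reproduce the mean calculation for one 3 x 3 block with plain NumPy. This is only a sketch of what ADAPTIVE_THRESH_MEAN_C does for a single block, not a call to OpenCV itself:&lt;/p&gt;

```python
import numpy as np

# The top-left 3 x 3 block from the example image.
block = np.array([[218, 217, 216],
                  [211, 210, 210],
                  [212, 211, 211]])

mean = block.mean()        # 1916 / 9 = 212.888...
threshold = mean - 7       # subtract the constant C = 7

# Pixels greater than the threshold become 255, the rest become 0.
# np.greater is the functional form of the greater-than comparison.
result = np.where(np.greater(block, threshold), 255, 0)
print(result)
```

&lt;p&gt;Every value in this block exceeds the threshold, so the whole result is 255.&lt;/p&gt;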

&lt;h2&gt;
  
  
  Step-by-Step Calculation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Block 1 (Top-left corner):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consider the 3x3 block starting at the top-left corner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[218 217 216]
 [211 210 210]
 [212 211 211]]

Mean: (218 + 217 + 216 + 211 + 210 + 210 + 212 + 211 + 211) / 9 = 1916 / 9 = 212.9
Subtract constant from result: 212.9 - 7 = 205.9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since every single number in block 1 is greater than 205.9, the numbers are all swapped for 255. Therefore the top left corner becomes this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[255 255 255]
 [255 255 255]
 [255 255 255]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Block 2:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consider the next 3x3 block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[217 216 221]
 [210 210 215]
 [211 211 214]]

Mean: 1925 / 9 = 213.9
Subtract constant: 213.9 - 7 = 206.9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every single number in this block is greater than 206.9, so the numbers are all swapped for 255. Therefore the block becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[255 255 255]
 [255 255 255]
 [255 255 255]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These calculations are done for every 3 x 3 window, moving across the image row by row, until we have a result for each section (in practice, OpenCV computes the threshold for the pixel at the centre of each window). If the current numbers are less than or equal to the result, we use 0; otherwise we use 255. Also, note that the numbers are swapped for 255 only because that is what was specified as the maximum.&lt;/p&gt;

&lt;p&gt;For example, below we have specified a maximum value of 200:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;img_thresh_adp = cv2.adaptiveThreshold(img, 200, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, 7)

# Calculating the top left corner block
[[218 217 216]
 [211 210 210]
 [212 211 211]]

# Calculating the mean
Mean: 1916 / 9 = 212.9
Subtract constant: 212.9 - 7 = 205.9

# The result
[[200 200 200]
 [200 200 200]
 [200 200 200]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The resulting image will only ever contain two values: 0 and the maxValue specified.&lt;/p&gt;

&lt;p&gt;I hope this clarifies adaptiveThresholding for someone out there!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Binary Images &amp; Image Thresholding</title>
      <dc:creator>Esther </dc:creator>
      <pubDate>Fri, 12 Jul 2024 02:52:04 +0000</pubDate>
      <link>https://forem.com/catheryn/binary-images-image-thresholding-282</link>
      <guid>https://forem.com/catheryn/binary-images-image-thresholding-282</guid>
      <description>&lt;h2&gt;
  
  
  Thresholding
&lt;/h2&gt;

&lt;p&gt;Thresholding is a simple yet effective technique used in image processing to convert a grayscale image into a binary image. The core idea is to segment the image into two parts (usually 0 - black and 255 - white) based on a specific threshold value. Think of the threshold value as a certain number that must be exceeded (or not) for a certain result.&lt;/p&gt;

&lt;p&gt;Thresholding involves setting a threshold value that separates the pixel values of the image into two distinct groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pixels above the threshold:&lt;/strong&gt; These pixels are usually set to the maximum value (often 255 for white in binary images).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pixels below or equal to the threshold:&lt;/strong&gt; These pixels are usually set to the minimum value (often 0 for black in binary images).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Types of Thresholding
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Global Thresholding:&lt;/strong&gt; A single global threshold value is applied to the entire image.&lt;/p&gt;

&lt;p&gt;Example: Setting all pixel values above 165 to 255 (white) and those below or equal to 165 to 0 (black).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cv2.threshold(img, 165, 255, cv2.THRESH_BINARY)

An image with the following values:
[[103 105 105]
 [211 210 210]
 [212 211 211]
 [139 138 137]]

Would become:
[[0 0 0]
 [255 255 255]
 [255 255 255]
 [0 0 0]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
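&lt;p&gt;The same global rule can be sketched with plain NumPy, mimicking THRESH_BINARY by hand rather than calling OpenCV:&lt;/p&gt;

```python
import numpy as np

# The example image values from above.
img = np.array([[103, 105, 105],
                [211, 210, 210],
                [212, 211, 211],
                [139, 138, 137]])

# Global threshold of 165: values above it become 255, the rest 0.
# np.greater is the functional form of the greater-than comparison.
binary = np.where(np.greater(img, 165), 255, 0)
print(binary)
```

&lt;p&gt;This produces exactly the binary matrix shown above.&lt;/p&gt;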



&lt;p&gt;&lt;strong&gt;2. Adaptive Thresholding:&lt;/strong&gt; The threshold value is determined for smaller regions of the image, allowing for different threshold values in different parts of the image.&lt;/p&gt;

&lt;p&gt;Example: Setting all pixel values above the locally calculated threshold (the mean minus a constant) to 255 (white) and those below or equal to it to 0 (black).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, 7)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://forem.com/catheryn/calculating-adaptive-threshold-in-opencv-1hh3"&gt;Read more about how Adaptive Thresholding is calculated&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Adaptive thresholding is useful for images with varying lighting conditions. For each pixel, the best possible value is used which results in a clearer image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Binary Image
&lt;/h2&gt;

&lt;p&gt;A binary image is a type of image that has only two possible pixel values: 0 and 255. These values represent black and white, respectively. Binary images are used to simplify the analysis of images by reducing the complexity of the data. We use thresholding algorithms to achieve binary images.&lt;/p&gt;

&lt;h2&gt;
  
  
  Significance of Binary Images
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplification:&lt;/strong&gt; It reduces the complexity of an image by converting it to two colors, making it easier to analyze.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Segmentation:&lt;/strong&gt; In object detection applications, it can help isolate objects from the background.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Extraction:&lt;/strong&gt; It is useful for identifying and extracting specific features from an image, such as shapes or edges.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opencv</category>
      <category>imagemanipulation</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Machine Learning For Beginners</title>
      <dc:creator>Esther </dc:creator>
      <pubDate>Wed, 15 May 2024 00:08:27 +0000</pubDate>
      <link>https://forem.com/catheryn/machine-learning-for-newbies-98h</link>
      <guid>https://forem.com/catheryn/machine-learning-for-newbies-98h</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;What is Machine Learning?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Machine learning is a field in AI that revolves around training software (using large amounts of data) to act, think, and predict information the way humans do. This is why it's called &lt;a href="https://www.freecodecamp.org/news/what-is-machine-learning-for-beginners/" rel="noopener noreferrer"&gt;machine learning&lt;/a&gt;. The software being trained to make predictions is called a model.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is a Model?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A machine learning model is a software program that can predict new information based on the data it was trained on. The model consists of an algorithm that helps it make predictions. For a model to start making predictions, it has to be trained. The process of passing a lot of data to the model is called training. After a lot of training, it can begin to predict nearly accurate values. This prediction is called ‘inferencing’.&lt;/p&gt;

&lt;p&gt;The training data usually consists of two things: features &amp;amp; labels. The features are the characteristics of the data while the label is the value you want to train the model to be able to predict. In Machine Learning, it is visualized as x and y: features(x), label(y).&lt;/p&gt;

&lt;p&gt;Here's a simple example: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To train a model that can predict the number of jacket sales based on weather, we would give the model information such as - The weather measurements for the day e.g temperature, these are the features (&lt;strong&gt;x&lt;/strong&gt;), and the number of jackets sold on each day, these are the labels (&lt;strong&gt;y&lt;/strong&gt;).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Implemented in code, it would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;//Javascript&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;//feature&lt;/span&gt;
    &lt;span class="nx"&gt;jackets&lt;/span&gt; &lt;span class="na"&gt;sold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;//label&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;//feature&lt;/span&gt;
    &lt;span class="nx"&gt;jackets&lt;/span&gt; &lt;span class="na"&gt;sold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;//label&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;.......&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This data is sent to the model, it runs an algorithm and studies the data. To validate what the model has learned, we send more data to evaluate its learning. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Do We Actually Validate That The Model is Learning &amp;amp; What Data Do We Use?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When training the model, we can send 80% of the training data and keep 20% back for evaluation. After training, we send the evaluation data; the model makes predictions, and how far the predictions are from the correct values determines whether or not the model has learned enough and is ready to make predictions, kind of like a test run. The data does not have to be split 80/20; you can also train with the entire dataset and test with completely fresh data.&lt;/p&gt;

&lt;p&gt;Using the previous jacket example, this is what the process would look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fprr7gqs5asapjmw3uc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fprr7gqs5asapjmw3uc.png" alt="Machine learning cycle" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input training data on jacket sales 
-&amp;gt; model runs algorithm 
-&amp;gt; algorithm studies relationship between temperature and jacket sales
-&amp;gt; input more jacket data to model to evaluate learning 
-&amp;gt; model predicts jacket sales
-&amp;gt; compare predictions with the actual validation data 
-&amp;gt; evaluate false prediction rates using special calculations 
-&amp;gt; start over.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;This is an example of the cycle an algorithm runs on the jacket sales training data.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The model is usually depicted as a function that takes in an input and returns an output: y = f(x)&lt;/p&gt;
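&lt;p&gt;The train-and-evaluate cycle above can be sketched in a few lines of Python. The jacket numbers below are made up for illustration, and a least-squares line fit stands in for the training algorithm:&lt;/p&gt;

```python
import numpy as np

# Made-up jacket-sales data: jackets sold falls as temperature rises.
temps   = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 36.0, 40.0])
jackets = np.array([50.0, 45.0, 40.0, 35.0, 30.0, 25.0, 20.0, 15.0, 10.0, 5.0])

split = int(0.8 * len(temps))          # 80% train, 20% evaluation
X_train, y_train = temps[:split], jackets[:split]
X_test,  y_test  = temps[split:], jackets[split:]

# Least-squares fit of jackets = a * temperature + b:
# the "algorithm" studying the relationship y = f(x).
A = np.vstack([X_train, np.ones_like(X_train)]).T
(a, b), *_ = np.linalg.lstsq(A, y_train, rcond=None)

y_hat = a * X_test + b                 # the model's predictions (y-hat)
error = np.abs(y_hat - y_test).mean()  # how far off the model is
print(round(error, 2))
```

&lt;p&gt;The mean error on the held-out 20% tells us how well the "model" generalises, which is exactly the evaluation step of the cycle.&lt;/p&gt;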

&lt;p&gt;Training a model means repeating this cycle with large amounts of data (possibly with different algorithms). While a model is being trained, it runs a specific algorithm on the data to establish patterns and relationships; after training is over, the model contains the learned patterns and relationships as well as the algorithm configuration. Upon deployment, the model becomes a self-contained system that can process input data on its own without re-running the training algorithm each time.&lt;/p&gt;

&lt;p&gt;As you probably guessed, training a model on bad data will yield incorrect predictions which can be very costly. Data must be clean and pass several checks to produce the desired results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of machine learning
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Supervised Machine Learning&lt;/strong&gt;: This is a form of machine learning where we provide training data consisting of both the features &amp;amp; labels. In this method, we carefully curate the data we provide the model, giving it both attributes and labels to enable it to make predictions of its own.&lt;/p&gt;
&lt;h3&gt;
  
  
  Types of Supervised Learning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Regression&lt;/strong&gt;: This is a form of supervised learning where the model is trained to predict a numeric value, e.g. the number of ice creams sold on a given day (label - y) based on the weather (feature - x), or the number of shirts sold (label - y) based on the salary of the buyer (feature - x). This data is passed through an algorithm such as linear regression, and the predicted values (called y-hat in math speak, written &lt;strong&gt;&lt;em&gt;ŷ&lt;/em&gt;&lt;/strong&gt;) are compared with the actual values; based on that we understand how far off the model is in its predictions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Classification&lt;/strong&gt;: This is another form of supervised learning where the model is trained to assign the predicted label to a class. The "class" here is terminology for a predefined grouping, e.g. if we wanted to predict the species of animals based on body structure, the training data passed to the model would specify animal features and their species. Based on this, after training, the model can assign new data to a species and predict it as such.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Binary Classification&lt;/strong&gt;: In this type of classification, the model is trained to predict only two classes - true or false, e.g. a model that predicts whether people can afford a certain school (label) based on age, salary, inheritance, and parents' careers (features). When given data, this model will respond with either true or false. We can imagine such a model being trained on the data of students already in the school. We can use algorithms such as logistic regression to train for this outcome.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiclass Classification&lt;/strong&gt;: In this type of classification, the model predicts which of three or more classes the input belongs to, e.g. a model that can predict the genre of books. (When a book can carry several genres at once, that is the closely related multilabel setting.) We can use algorithms such as One-vs-Rest (OVR) and multinomial algorithms to achieve this.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Unsupervised Machine Learning&lt;/strong&gt;: In unsupervised learning, we provide training data consisting of only features without any labels. In this method, the model itself will start to determine relationships between the features and come up with labels itself. We basically tell the model "figure it out".&lt;/p&gt;
&lt;h3&gt;
  
  
  Types of Unsupervised Learning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clustering&lt;/strong&gt;: In clustering, the model identifies similarities in the data based on features and groups the data accordingly. It is similar to multiclass classification in that it groups data, but in multiclass classification we already know what the classes are, while in clustering we don't. We give the model a bunch of data and ask it to figure things out and learn on its own, e.g. we can provide a model with student data and it will group the data by age, gender, class, tuition, address, etc. We can use algorithms such as K-Means for clustering.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
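&lt;p&gt;Clustering can be sketched in pure NumPy with a toy k-means (k = 2) on made-up student ages. This is only an illustration of the idea; real projects would use a library implementation:&lt;/p&gt;

```python
import numpy as np

# Unlabeled "student age" data: no labels, just one feature.
ages = np.array([18.0, 19.0, 18.5, 40.0, 42.0, 41.0])
centroids = np.array([18.0, 40.0])  # initial guesses for 2 cluster centres

for _ in range(10):
    # Assign each point to its nearest centroid.
    dists = np.abs(ages[:, None] - centroids[None, :])
    labels = dists.argmin(axis=1)
    # Update each centroid to the mean of its assigned points.
    centroids = np.array([ages[labels == k].mean() for k in range(2)])

print(labels)     # first three ages in one cluster, last three in the other
print(centroids)  # the centres the model "figured out" on its own
```

&lt;p&gt;The model was never told which ages belong together; it grouped them purely from the feature values.&lt;/p&gt;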

&lt;p&gt;This is a really simplified background on machine learning to introduce newbies to the field. There's a lot of technical speak surrounding AI today and I hope this article helps a confused person out there. &lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>models</category>
    </item>
  </channel>
</rss>
