<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ali</title>
    <description>The latest articles on Forem by Ali (@jubeiargh).</description>
    <link>https://forem.com/jubeiargh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2987677%2F32106d30-2688-4381-9de6-89bebedf63f5.png</url>
      <title>Forem: Ali</title>
      <link>https://forem.com/jubeiargh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jubeiargh"/>
    <language>en</language>
    <item>
      <title>Integrate Real-Time Financial and Geopolitical News into Make.com Workflows with finlight</title>
      <dc:creator>Ali</dc:creator>
      <pubDate>Tue, 12 Aug 2025 08:00:00 +0000</pubDate>
      <link>https://forem.com/jubeiargh/integrate-real-time-financial-and-geopolitical-news-into-makecom-workflows-with-finlight-2fec</link>
      <guid>https://forem.com/jubeiargh/integrate-real-time-financial-and-geopolitical-news-into-makecom-workflows-with-finlight-2fec</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this post, you’ll learn how to integrate &lt;strong&gt;real-time financial and geopolitical news&lt;/strong&gt; into your Make.com workflows using the &lt;strong&gt;&lt;a href="https://finlight.me/integrations/make" rel="noopener noreferrer"&gt;finlight API&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We’ll cover how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trigger automations instantly when market events happen&lt;/li&gt;
&lt;li&gt;Run scheduled news searches for reporting or research&lt;/li&gt;
&lt;li&gt;Filter results at the source so only relevant data reaches your workflows&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Until now, Make.com scenarios had no direct, reliable feed of structured financial or geopolitical events.&lt;/p&gt;

&lt;p&gt;If you wanted an alert for a corporate earnings report or a major political development, you had to use RSS feeds, scrapers, or delayed APIs — messy and often too slow.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Solution: finlight + Make.com
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;&lt;a href="https://finlight.me/integrations/make" rel="noopener noreferrer"&gt;finlight Make.com integration&lt;/a&gt;&lt;/strong&gt; adds two key modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Webhook Trigger&lt;/strong&gt; &lt;em&gt;(Pro tier)&lt;/em&gt; – Fires instantly when a new article matches your query.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search Module&lt;/strong&gt; &lt;em&gt;(All tiers)&lt;/em&gt; – Retrieves targeted news articles on demand or on a schedule.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can filter by &lt;code&gt;ticker&lt;/code&gt;, &lt;code&gt;exchange&lt;/code&gt;, &lt;code&gt;source&lt;/code&gt;, keywords, and more, all server-side, so your Make.com scenario only processes what’s relevant.&lt;/p&gt;
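&lt;p&gt;For example, a single server-side query (the ticker and keywords here are illustrative) is enough to deliver only Nvidia earnings coverage to your scenario:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+ticker:NVDA AND ("earnings" OR "guidance") AND NOT rumor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;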

&lt;p&gt;📄 Full API and query documentation: &lt;a href="https://docs.finlight.me" rel="noopener noreferrer"&gt;docs.finlight.me&lt;/a&gt;&lt;br&gt;
🔗 Official Make.com integration page: &lt;a href="https://make.com/integrations/finlight" rel="noopener noreferrer"&gt;make.com/integrations/finlight&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Example 1: Real-Time Tesla and Apple Breaking News Alert
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Trigger:&lt;/strong&gt; Webhook with advanced query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(+ticker:TSLA OR +ticker:AAPL) AND ("earnings" OR "quarterly results") AND NOT crypto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Actions:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Send a formatted message to Slack.&lt;/li&gt;
&lt;li&gt;Log the event in Google Sheets.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor portfolio companies in real time&lt;/li&gt;
&lt;li&gt;Alert marketing or sales teams about client news&lt;/li&gt;
&lt;li&gt;Track competitor announcements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Screenshots:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;finlight node configuration&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dh29ckqap3x4p65humz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dh29ckqap3x4p65humz.png" alt="Finlight node in Make.com for Tesla and Apple breaking news alert" width="800" height="824"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Full Make.com scenario view&lt;/em&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F724cckqhy98wpxif5p0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F724cckqhy98wpxif5p0p.png" alt="Complete Make.com scenario for Tesla and Apple breaking news alert using finlight" width="800" height="740"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Example 2: Weekly Market Trends Report
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Trigger:&lt;/strong&gt; Schedule every Monday at 9 AM&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Search Query:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;("mergers and acquisitions" OR "market trends" OR "economic outlook") AND (exchange:NASDAQ OR exchange:NYSE)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Actions:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Aggregate articles from the past week and sort by relevance.&lt;/li&gt;
&lt;li&gt;Create a Google Doc or PDF with headlines and summaries.&lt;/li&gt;
&lt;li&gt;Email the document to marketing, research, or strategy teams.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Weekly market briefings&lt;/li&gt;
&lt;li&gt;Research content for blogs or newsletters&lt;/li&gt;
&lt;li&gt;Automated client update reports&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Screenshots:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;finlight node configuration&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sbuldf55axjxrmnr88f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sbuldf55axjxrmnr88f.png" alt="Finlight node in Make.com for weekly market trends report" width="800" height="884"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Full Make.com scenario view&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9lznqtf4zjvlu0u9vul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9lznqtf4zjvlu0u9vul.png" alt="Complete Make.com scenario for weekly market trends report using finlight" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How to Try It
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create a finlight account and check the &lt;a href="https://docs.finlight.me" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Connect finlight to Make.com via the &lt;a href="https://make.com/integrations/finlight" rel="noopener noreferrer"&gt;integration page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Build your first scenario using either the Webhook trigger (Pro tier) or Search module (all tiers).&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;💬 Have you built something cool with this? Share your scenarios in the comments — I’d love to see how you’re using real-time market and geopolitical data in your workflows.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>nocode</category>
      <category>makecom</category>
      <category>news</category>
    </item>
    <item>
      <title>Streaming Financial Data in Real-Time with Finlight’s WebSocket API</title>
      <dc:creator>Ali</dc:creator>
      <pubDate>Wed, 30 Apr 2025 10:02:21 +0000</pubDate>
      <link>https://forem.com/jubeiargh/streaming-financial-data-in-real-time-with-finlights-websocket-api-ae6</link>
      <guid>https://forem.com/jubeiargh/streaming-financial-data-in-real-time-with-finlights-websocket-api-ae6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ghhgkgzky58mbuybjkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ghhgkgzky58mbuybjkq.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking to supercharge your financial applications with &lt;strong&gt;real-time news updates&lt;/strong&gt;? Whether you’re building &lt;strong&gt;trading dashboards&lt;/strong&gt;, &lt;strong&gt;market monitoring tools&lt;/strong&gt;, or &lt;strong&gt;alert systems&lt;/strong&gt;, Finlight’s WebSocket API has you covered.&lt;/p&gt;

&lt;p&gt;Finlight is a powerful API platform for delivering &lt;strong&gt;real-time financial, market, and geopolitical news&lt;/strong&gt;, designed for developers who want &lt;strong&gt;fast, filtered, and flexible access&lt;/strong&gt; to the stories that move markets.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✨ What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;In this guide, you’ll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to Finlight’s WebSocket API&lt;/li&gt;
&lt;li&gt;Authenticate securely&lt;/li&gt;
&lt;li&gt;Subscribe to &lt;strong&gt;live news streams&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Keep your connection healthy with &lt;strong&gt;ping-pong&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Understand how it compares to REST&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end, you’ll have a working real-time stream of relevant financial news — perfect for trading, analytics, or alerts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup &amp;amp; Requirements
&lt;/h3&gt;

&lt;p&gt;Before we dive in, here’s what you need:&lt;/p&gt;

&lt;p&gt;✅ A &lt;strong&gt;Finlight API Key&lt;/strong&gt; — grab one from your dashboard at &lt;a href="https://app.finlight.me" rel="noopener noreferrer"&gt;app.finlight.me&lt;/a&gt;&lt;br&gt;&lt;br&gt;
✅ Basic JavaScript or WebSocket knowledge&lt;br&gt;&lt;br&gt;
✅ Node.js (if using the SDK) or your preferred WebSocket-compatible language&lt;/p&gt;

&lt;p&gt;You can use our &lt;strong&gt;official SDKs&lt;/strong&gt; or roll your own integration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;npm&lt;/strong&gt;: &lt;a href="https://www.npmjs.com/package/finlight-client" rel="noopener noreferrer"&gt;finlight-client&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt;: &lt;a href="https://www.piwheels.org/project/finlight-client/" rel="noopener noreferrer"&gt;finlight-client&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  🔍 REST API vs WebSocket
&lt;/h3&gt;

&lt;p&gt;Here’s how the two options compare — choose the one that fits your real-time needs best:&lt;/p&gt;
&lt;h4&gt;
  
  
  REST API
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Access&lt;/strong&gt;: Fetch articles on demand via HTTP requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sorting&lt;/strong&gt;: Articles are returned sorted by their &lt;code&gt;publishDate&lt;/code&gt; (i.e., when the news provider originally published them)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delayed Articles&lt;/strong&gt;: Late-arriving stories (e.g., from aggregators like Yahoo) appear in the past based on their original publish date&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Capable?&lt;/strong&gt; Yes — &lt;em&gt;if polled frequently&lt;/em&gt; and tracked carefully&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best For&lt;/strong&gt;: Custom polling systems, historical queries, or hybrid real-time setups&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  WebSocket API
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Access&lt;/strong&gt;: Stream new articles in real time as they are ingested into Finlight&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sorting&lt;/strong&gt;: No manual sorting needed — articles are pushed in the order they enter the system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delayed Articles&lt;/strong&gt;: Even late stories appear immediately upon arrival (regardless of publish date)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Capable?&lt;/strong&gt; Yes — native real-time by design, no polling required&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best For&lt;/strong&gt;: Dashboards, trading bots, alert systems, or anything needing instant updates&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;&lt;em&gt;Key Insight&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;: REST is great if you need on-demand control or history.&lt;br&gt;&lt;br&gt;
But if you want true push-based real-time without manual effort, WebSocket is the way to go.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  🔐 Authentication
&lt;/h3&gt;

&lt;p&gt;Every WebSocket connection must include your &lt;strong&gt;API key&lt;/strong&gt; in the header:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;headers: {
  'x-api-key': '&amp;lt;your-api-key&amp;gt;',
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 You can find your API key at &lt;a href="https://app.finlight.me/" rel="noopener noreferrer"&gt;https://app.finlight.me&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  WebSocket Endpoint
&lt;/h3&gt;

&lt;p&gt;Connect to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wss://wss.finlight.me
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Real-Time News in Action (Node.js Example)
&lt;/h3&gt;

&lt;p&gt;Here’s how easy it is to stream live financial news using our npm client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { FinlightApi } from "finlight-client";

const client = new FinlightApi({
  apiKey: "&amp;lt;your-api-key&amp;gt;",
});
client.websocket.connect(
  {
    // 'query' is optional – if omitted, you receive all incoming articles
    query: "Tesla",
    language: "en",
    extended: true,
  },
  article =&amp;gt; {
    console.log(article);
  }
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Available Query Parameters
&lt;/h3&gt;

&lt;p&gt;You can filter the incoming stream by these optional parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  query?: string; // e.g., "interest rates" – omit to receive all articles
  sources?: string[]; // e.g., ["www.reuters.com", "www.ft.com"]
  language?: string; // default: "en"
  extended?: boolean; // full article text if true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;query&lt;/code&gt; is optional — leave it out to stream &lt;strong&gt;everything&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sources&lt;/code&gt; supports multiple domains&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;extended: true&lt;/code&gt; gives you the full article text (if available); summaries are provided in both modes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Keeping the Connection Alive (Ping &amp;amp; Reconnect)
&lt;/h3&gt;

&lt;p&gt;Finlight’s WebSocket API is powered by &lt;strong&gt;AWS API Gateway WebSockets&lt;/strong&gt; , which means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connections are &lt;strong&gt;automatically closed after 2 hours&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The server will disconnect you after &lt;strong&gt;10 minutes of inactivity&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Send Pings Every ~8 Minutes
&lt;/h4&gt;

&lt;p&gt;Send this JSON message to keep your connection alive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "action": "ping" }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll receive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "action": "pong" }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;setInterval(() =&amp;gt; {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ action: "ping" }));
  }
}, 8 * 60 * 1000);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Reconnect Gracefully
&lt;/h3&gt;

&lt;p&gt;If the server closes the connection (after 2h or inactivity), reconnect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;socket.onclose = () =&amp;gt; {
  console.log("Connection closed. Reconnecting...");
  setTimeout(() =&amp;gt; {
    connectWebSocket(); // your connection logic here
  }, 1000);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to restart your ping interval after reconnecting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example Article Response
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "title": "Trump tariffs could lead to a summer drop-off...",
  "source": "www.cnbc.com",
  "publishDate": "2025-04-20T18:02:08.000Z",
  "language": "en",
  "confidence": 0.9999,
  "sentiment": "negative",
  "summary": "...",
  "content": "Full article content if `extended` is true..."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
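

&lt;p&gt;Fields like &lt;code&gt;confidence&lt;/code&gt; and &lt;code&gt;sentiment&lt;/code&gt; make it easy to gate alerts on the client side. A minimal sketch (the 0.95 threshold and the rule itself are arbitrary examples, not finlight recommendations):&lt;br&gt;
&lt;/p&gt;

```javascript
// Decide whether an incoming article is worth alerting on, based on
// the confidence and sentiment fields shown in the payload above.
// The 0.95 cutoff is an arbitrary example value.
function isActionable(article, minConfidence) {
  if (article.confidence >= minConfidence) {
    // only alert on articles with a clear positive or negative tone
    return article.sentiment !== "neutral";
  }
  return false;
}

const sample = { confidence: 0.9999, sentiment: "negative" };
console.log(isActionable(sample, 0.95)); // true
```

&lt;p&gt;Plugging a check like this into the &lt;code&gt;connect&lt;/code&gt; callback keeps noisy articles out of your alert channel.&lt;/p&gt;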



&lt;h3&gt;
  
  
  💳 WebSocket Pricing &amp;amp; Connection Limits
&lt;/h3&gt;

&lt;p&gt;Your Finlight subscription tier determines how many simultaneous WebSocket connections you can use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pro Standard&lt;/strong&gt; — Includes &lt;strong&gt;1 WebSocket connection&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro Scale&lt;/strong&gt; — Includes &lt;strong&gt;3 WebSocket connections&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Tier&lt;/strong&gt; — Supports &lt;strong&gt;custom limits&lt;/strong&gt;, ideal for enterprise or high-volume setups&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final Thoughts &amp;amp; Use Cases
&lt;/h3&gt;

&lt;p&gt;With Finlight’s WebSocket API, you can build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📊 &lt;strong&gt;Live trading dashboards&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;🚨 &lt;strong&gt;Real-time alert systems&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;🧠 &lt;strong&gt;News sentiment analytics&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;🤖 &lt;strong&gt;Automated trading or LLM agents&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Explore more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://finlight.me/" rel="noopener noreferrer"&gt;finlight.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/finlight-client" rel="noopener noreferrer"&gt;npm: finlight-client&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.piwheels.org/project/finlight-client/" rel="noopener noreferrer"&gt;Python on piwheels.org&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Got questions? Curious to see more examples? Want to share what you’re building?&lt;/p&gt;

&lt;p&gt;Tell me here or at our &lt;a href="https://discord.gg/XUs9JYZd24" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>api</category>
      <category>newsaggregation</category>
      <category>finance</category>
      <category>realtimedata</category>
    </item>
    <item>
      <title>Scaling Search at finlight.me: From Postgres Full-Text to Real-Time OpenSearch</title>
      <dc:creator>Ali</dc:creator>
      <pubDate>Wed, 30 Apr 2025 10:00:00 +0000</pubDate>
      <link>https://forem.com/jubeiargh/scaling-search-at-finlightme-from-postgres-full-text-to-real-time-opensearch-33op</link>
      <guid>https://forem.com/jubeiargh/scaling-search-at-finlightme-from-postgres-full-text-to-real-time-opensearch-33op</guid>
      <description>&lt;p&gt;&lt;em&gt;Scaling search isn't just about adding bigger servers — sometimes you need the right tools.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When we first launched &lt;a href="https://finlight.me" rel="noopener noreferrer"&gt;finlight.me&lt;/a&gt;, our real-time financial news API, Postgres full-text search was more than enough. It was fast, easy to set up, and fit perfectly into our simple early architecture. But as the number of articles grew and search demands became more complex, cracks started to appear. In this article, I'll share how we moved from Postgres to OpenSearch, the challenges we faced along the way, and why keeping Postgres as our source of truth turned out to be one of our best decisions.&lt;/p&gt;

&lt;p&gt;It all started with a simple full-text search setup inside Postgres that worked surprisingly well — until it didn’t.&lt;/p&gt;

&lt;h2&gt;
  
  
  First Architecture: Postgres Full-Text Search
&lt;/h2&gt;

&lt;p&gt;In the early days, we used Postgres’ built-in full-text search to power article queries. Titles and content were combined into a single &lt;code&gt;tsvector&lt;/code&gt; field, allowing us to search efficiently without worrying about casing, suffixes, or keyword order — limitations that basic &lt;code&gt;LIKE '%query%'&lt;/code&gt; searches would have struggled with. Incoming search queries were also converted into &lt;code&gt;tsquery&lt;/code&gt; values, and Postgres did a solid job of ranking and returning relevant results. For a while, this setup handled our needs with fast response times and minimal overhead. It was simple, integrated, and worked well alongside the rest of our ingestion and storage system.&lt;/p&gt;
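
&lt;p&gt;A rough sketch of that kind of setup (table and column names are illustrative, not finlight’s actual schema):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- combine title and content into one searchable vector
ALTER TABLE articles ADD COLUMN search_vector tsvector
  GENERATED ALWAYS AS (
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''))
  ) STORED;

-- GIN index to accelerate full-text matches
CREATE INDEX articles_search_idx ON articles USING GIN (search_vector);

-- incoming queries are converted to a tsquery and matched with @@
SELECT title FROM articles
WHERE search_vector @@ plainto_tsquery('english', 'interest rates');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;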

&lt;p&gt;But as our article volume started to grow and user queries became more complex, we began to notice cracks forming beneath the surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pain Points with Scaling Postgres
&lt;/h2&gt;

&lt;p&gt;At first, Postgres full-text search handled our growing dataset reasonably well. But as article counts climbed into the hundreds of thousands, search performance started to noticeably degrade. The major issue wasn’t just the full-text search itself — it was how users combined free-text queries with additional filters like publish date ranges, specific sources, or metadata fields. Postgres was strong at indexing individual fields, and its &lt;code&gt;GIN&lt;/code&gt; index accelerated full-text search. However, we quickly ran into a hard limitation: Postgres doesn’t allow combining a &lt;code&gt;GIN&lt;/code&gt; index with a regular &lt;code&gt;B-Tree&lt;/code&gt; index in a composite index. This meant we couldn’t optimize both kinds of queries at the same time, forcing the database to either pick a suboptimal plan or fall back to sequential scans — both of which became painfully slow as the dataset grew.&lt;/p&gt;

&lt;h2&gt;
  
  
  Index Management Nightmare: Optional Parameters and Growing Complexity
&lt;/h2&gt;

&lt;p&gt;The flexibility of our API — allowing users to combine any subset of filters like publish date, source, free-text search and more — introduced another layer of scaling challenges. Since every search parameter was optional, we faced a combinatorial explosion of possible query patterns. To maintain acceptable performance, we had to create different indexes to support the most common combinations of parameters. Each time we added a new searchable field, it required designing new indexes, analyzing query plans with &lt;code&gt;EXPLAIN ANALYZE&lt;/code&gt;, and validating performance manually. This constant index tuning became tedious and unsustainable. Worse, despite all the effort, the core limitation remained: we still couldn’t efficiently optimize full-text search combined with metadata filters in a single query.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pagination Collapse and User-Visible Slowness
&lt;/h2&gt;

&lt;p&gt;As article volumes continued to grow, another performance bottleneck surfaced: pagination. Our API exposed a &lt;code&gt;page&lt;/code&gt; parameter that mapped directly to SQL &lt;code&gt;OFFSET&lt;/code&gt; behavior behind the scenes. While this worked fine at low offsets, performance deteriorated rapidly as users requested deeper pages. Ironically, the very feature that should have made searches faster — returning just a small slice of results — ended up making things slower. Each paginated request forced Postgres to scan, count, and skip thousands of rows before it could even start returning results, recalculating large parts of the query plan every time. Queries that once took under a second ballooned to tens of seconds. At that point, it was no longer just an infrastructure problem — it became a user experience failure. We realized that even a well-tuned Postgres setup wouldn't be enough to support fast, flexible search at the scale we were growing toward.&lt;/p&gt;
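
&lt;p&gt;The failure mode is easy to see in the SQL shape behind a &lt;code&gt;page&lt;/code&gt; parameter (names illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- "page 501" at 20 results per page: Postgres must produce and
-- discard 10,000 matching rows before returning the first one
SELECT title, publish_date FROM articles
WHERE search_vector @@ plainto_tsquery('english', 'earnings')
ORDER BY publish_date DESC
LIMIT 20 OFFSET 10000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;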

&lt;h2&gt;
  
  
  Why We Chose OpenSearch
&lt;/h2&gt;

&lt;p&gt;It was clear that we needed a system built specifically for search — something optimized for free-text queries, filtering, and fast pagination at scale. Having worked with Elasticsearch during previous freelance projects, we were already familiar with the strengths of dedicated search engines: inverted indexes, efficient scoring algorithms, and powerful query flexibility. We decided to adopt OpenSearch, the community-driven fork of Elasticsearch, both for its strong technical capabilities and its more favorable licensing model.&lt;/p&gt;

&lt;p&gt;At the same time, we made an important architectural decision: to separate the write path from the read path. Postgres would remain our single source of truth for ingested and processed articles, ensuring data integrity and consistency. OpenSearch would serve as the read-optimized layer, delivering fast and flexible search without overloading our ingestion pipeline. This allowed us to use the best tool for each requirement — a reliable, relational, normalized database for storage and ingestion, and a high-performance search engine for querying — instead of trying to force one system to do everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Phase: Starting Small and Learning Fast
&lt;/h2&gt;

&lt;p&gt;Before fully committing to production, we rolled out OpenSearch in a minimal-resource testing setup: a single-node cluster with limited RAM, intended purely for evaluation and tuning. In this environment, we quickly encountered behaviors that hinted at the system's scaling needs. Over time, we noticed missing indexes and degraded search performance — symptoms likely caused by memory pressure and resource eviction events on the hosting side. Far from being a setback, these early tests validated an important lesson: while OpenSearch could deliver the performance we needed, it demanded production-grade resources to do so reliably. Testing lean allowed us to tune index mappings, validate query performance, and plan capacity based on real behavior. It also reinforced our architectural choice to keep Postgres as the source of truth, ensuring that even if the search layer needed recovery or rebuilding, the core data remained safe and consistent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling OpenSearch for Production
&lt;/h2&gt;

&lt;p&gt;Armed with insights from our testing phase, we moved to a production-grade OpenSearch deployment with the resources needed to match our growth. We added multiple nodes to the cluster, allocated sufficient RAM, and tuned index mappings to optimize both write and query performance. With the new setup, search response times dropped dramatically — even complex queries with deep pagination returned results in milliseconds instead of seconds.&lt;/p&gt;

&lt;p&gt;The overall data flow evolved as well: after articles pass through our real-time article processing pipeline — where they are collected, enriched, and analyzed — they are immediately fed into OpenSearch for fast retrieval. Postgres remains the single source of truth, storing all raw and processed data reliably, while OpenSearch acts as the read-optimized layer tuned for search performance.&lt;/p&gt;

&lt;p&gt;We also introduced regular snapshotting of the OpenSearch indices, ensuring that even as the article base grew, we could recover quickly from failures or rebuild indexes without downtime. Treating OpenSearch as an advanced cache rather than the primary database gave us flexibility: we could evolve search schemas, rebuild indexes, or adjust mappings without putting core data integrity at risk. Over time, as traffic increased and our dataset expanded, the new architecture continued to perform reliably under load.&lt;/p&gt;

&lt;h2&gt;
  
  
  Today’s Architecture: Resilient, Real-Time Search
&lt;/h2&gt;

&lt;p&gt;Today, our system cleanly separates responsibilities between ingestion, storage, and retrieval. Postgres continues to act as the single source of truth, reliably storing all raw and processed article data in a normalized relational structure. Articles flow through our real-time article processing pipeline — where they are scraped, enriched, and analyzed — before being fed into OpenSearch for optimized search performance. OpenSearch handles all user-facing search queries, allowing us to deliver fast, flexible results even under high load. Regular snapshotting, thoughtful index management, and a multi-node deployment ensure that our search infrastructure remains resilient and scalable as our data set grows. By decoupling the write and read sides of the architecture and choosing the best tool for each need, we've built a system that is fast, reliable, and ready for continued growth.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned: Advice for Builders
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start simple, but design with scale in mind.&lt;/strong&gt; Postgres full-text search served us well early on — but flexibility in design made migration possible later without major pain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separate read and write paths as early as practical.&lt;/strong&gt; Trying to make a single database handle everything becomes exponentially harder as complexity grows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use the right tool for the job.&lt;/strong&gt; A relational database excels at storage and consistency; a search engine excels at flexible retrieval and ranking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don’t underestimate optional query complexity.&lt;/strong&gt; Supporting flexible API filters sounds simple until you have to index every possible combination.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test lean, scale smart.&lt;/strong&gt; Early testing with minimal resources taught us what production-grade OpenSearch really needed — and avoided costly surprises.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep a reliable source of truth.&lt;/strong&gt; Having Postgres behind OpenSearch allowed us to rebuild, heal, and extend our search infrastructure without risking core data integrity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building scalable APIs or working with large search datasets, I'd love to hear how you're approaching similar challenges. Feel free to share your thoughts or experiences in the comments!&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>postgres</category>
      <category>opensearch</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>How I Built a Multi-Agent AI Analyst Bot Using GPT, LangGraph &amp; Market News APIs</title>
      <dc:creator>Ali</dc:creator>
      <pubDate>Wed, 23 Apr 2025 10:01:28 +0000</pubDate>
      <link>https://forem.com/jubeiargh/how-i-built-a-multi-agent-ai-analyst-bot-using-gpt-langgraph-market-news-apis-4gmm</link>
      <guid>https://forem.com/jubeiargh/how-i-built-a-multi-agent-ai-analyst-bot-using-gpt-langgraph-market-news-apis-4gmm</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Yes, I know — this title sounds like one of those overly long anime names. “That Time I Built a GPT Bot That Read the Financial News So I Didn’t Have To” 🧙‍♂️📉&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  1. Intro: The Pain
&lt;/h3&gt;

&lt;p&gt;I used to wake up and check four different sites, scroll Twitter, and still feel behind on what actually happened in the markets.&lt;/p&gt;

&lt;p&gt;Now, I get one clean email every morning with exactly what I need: key headlines, a few bullet points of summary, and a tone check on the overall market mood — powered by GPT and a financial news API.&lt;/p&gt;

&lt;p&gt;Here’s how I built it in under an hour — and how you can too.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The System
&lt;/h3&gt;

&lt;p&gt;🛠 &lt;strong&gt;Stack:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Finlight.me&lt;/strong&gt; — Financial news API for clean, market-relevant headlines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LangGraph&lt;/strong&gt; — Multi-agent GPT orchestration framework (built on LangChain)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt; — With a cron job or AWS Lambda for scheduling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SMTP&lt;/strong&gt; — To deliver the final email briefing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 &lt;strong&gt;Agents:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Analyst Agent&lt;/strong&gt;: pulls news for a given subject → summarizes → classifies tone (bullish, bearish, neutral)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composer Agent&lt;/strong&gt;: takes all outputs from the Analyst and assembles the final morning email&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔄 &lt;strong&gt;Flow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;⏰ Script runs daily at &lt;strong&gt;6:45am&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;For each subject (e.g., &lt;em&gt;Inflation&lt;/em&gt;, &lt;em&gt;AI Stocks&lt;/em&gt;), the &lt;strong&gt;Analyst Agent&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Queries relevant news via the API&lt;/li&gt;
&lt;li&gt;Summarizes key points&lt;/li&gt;
&lt;li&gt;Assesses sentiment (bullish/bearish/neutral)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;After all subjects are processed, the &lt;strong&gt;Composer Agent&lt;/strong&gt; creates the full morning briefing&lt;/li&gt;
&lt;li&gt;📬 Email lands in your inbox before &lt;strong&gt;7:00am&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  3. What the resulting email looks like
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🗓 April 13, 2025 – Market Briefing

• Trump Tariffs Escalate 🇺🇸🇨🇳
   - The U.S. imposed a 145% tariff on Chinese imports, while China retaliated with a 125% tariff, urging a complete removal of these measures.
   - U.S. Commerce Secretary hinted at upcoming tariffs on semiconductors to promote domestic production.
   - Despite concerns of inflation, layoffs, and supply chain disruptions, exemptions on smartphones and laptops provided a temporary boost for tech giants like Apple and Nvidia.
   - Market sentiment remains highly volatile with fears of a U.S. recession and worsening global trade dynamics.

• China Tightens Its Stance 🇨🇳
   - China halted critical rare earth exports, exacerbating the trade war and affecting key industries like semiconductors and aerospace.
   - Beijing dismissed U.S. tariff exemptions as insufficient and continued to push for cancellation of all reciprocal tariffs.
   - On the global stage, India’s and Brazil’s industries are reaping benefits as alternatives in electronics and agriculture.
   - Allegations of Chinese cyberattacks targeting U.S. infrastructure have further strained relations.
   - Domestically, Hong Kong’s last major opposition party moved towards disbandment, while Britain rejected Chinese involvement in its steel sector, fuelling political tensions.

📊 Sentiment: Uncertain and risk-averse, with increasing global market volatility. Investors are eyeing alternative regions like India for stability.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what lands in my inbox every day. 30 seconds to understand the market mood.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Inside the System: The Multi-Agent Morning Briefing
&lt;/h3&gt;

&lt;p&gt;Now that you’ve seen the output, let’s unpack &lt;em&gt;how&lt;/em&gt; it works under the hood.&lt;/p&gt;

&lt;p&gt;At a high level, this is a &lt;strong&gt;multi-agent workflow&lt;/strong&gt; that runs on a schedule. Each agent handles a specific task — fetching news, summarizing, analyzing tone, composing a message — and they all pass info via shared state.&lt;/p&gt;

&lt;p&gt;Everything is built in Python using &lt;a href="https://github.com/langchain-ai/langgraph" rel="noopener noreferrer"&gt;LangGraph&lt;/a&gt;, a framework for agent orchestration.&lt;/p&gt;

&lt;h4&gt;
  
  
  🔌 4.1: Pulling News with the Finlight API
&lt;/h4&gt;

&lt;p&gt;We use a small wrapper around a financial news API — &lt;a href="https://finlight.me/" rel="noopener noreferrer"&gt;Finlight.me&lt;/a&gt; — to get clean, focused headlines for a given subject.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Why Finlight? Because it gives you way less noise than other general news APIs. And also… because I built it. 😎&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s the tool I use inside the system — it wraps the Finlight SDK as a LangChain-compatible StructuredTool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# tools/finlight.py
from langchain.tools import StructuredTool
from finlight_client import FinlightApi
from finlight_client.models import BasicArticleResponse, ApiConfig, GetArticlesParams
from pydantic import BaseModel, Field, field_validator

from market_briefing.config import FINLIGHT_API_KEY

class GetBasicArticle(BaseModel):
    query: str = Field(..., description="Search term, e.g., 'Nvidia'")
    from_: str = Field(..., alias="from", description="ISO 8601 start time")
    to: str = Field(..., description="ISO 8601 end time")
    pageSize: int = Field(100, description="Number of articles per page")
    page: int = Field(1, description="Page number")

    model_config = {"populate_by_name": True}

def search_finlight_articles(params: GetBasicArticle) -&amp;gt; str:
    client = FinlightApi(config=ApiConfig(api_key=FINLIGHT_API_KEY))

    api_response: BasicArticleResponse = client.articles.get_basic_articles(
        params=params
    )

    if not api_response:
        return "No articles found."

    return "\n".join(
        f"Title: {a['title']}\nDate: {a['publishDate']}\nSummary: {a['summary'] or 'No summary.'}\n{'-'*30}"
        for a in api_response["articles"]
    )

search_tool = StructuredTool.from_function(
    func=search_finlight_articles,
    name="search_finlight_articles",
    description="Search Finlight for news by keyword and date range",
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows agents in your system to call Finlight dynamically using structured input and output — with proper validation via Pydantic, and easy chaining through LangChain or LangGraph.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;🧠&lt;/em&gt; Note: &lt;em&gt;This input will later be fed into GPT for summarization and sentiment.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
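&lt;p&gt;One detail worth calling out: from is a Python keyword, which is why the model names the field from_ and maps it with an alias. A quick standalone check of that behavior (the model is restated here so the snippet runs without the finlight SDK):&lt;/p&gt;

```python
# Standalone restatement of GetBasicArticle to demonstrate the `from` alias;
# no finlight SDK or network access required (Pydantic v2).
from pydantic import BaseModel, Field


class GetBasicArticle(BaseModel):
    query: str = Field(..., description="Search term, e.g., 'Nvidia'")
    from_: str = Field(..., alias="from", description="ISO 8601 start time")
    to: str = Field(..., description="ISO 8601 end time")
    pageSize: int = Field(100, description="Number of articles per page")
    page: int = Field(1, description="Page number")

    model_config = {"populate_by_name": True}


# The raw payload uses "from", exactly as the HTTP API expects...
params = GetBasicArticle.model_validate(
    {"query": "Nvidia", "from": "2025-04-13T00:00:00Z", "to": "2025-04-14T00:00:00Z"}
)

# ...while Python code reads it through the safe attribute name.
print(params.from_)  # → 2025-04-13T00:00:00Z

# Dumping by alias restores the wire format for the outgoing request.
print(params.model_dump(by_alias=True)["from"])  # → 2025-04-13T00:00:00Z
```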

&lt;h4&gt;
  
  
  4.2: The Shared State
&lt;/h4&gt;

&lt;p&gt;All agents share one data structure to read/write context. It’s a simple Python TypedDict, but it powers the whole pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from typing import List, Dict, TypedDict

class BriefingState(TypedDict):
    subjects: List[str]
    current_index: int
    analyst_outputs: Dict[str, str]
    briefing: str
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;subjects: list of user-defined topics (like ["inflation", "oil", "semiconductors"])&lt;/li&gt;
&lt;li&gt;current_index: used for looping through one subject at a time&lt;/li&gt;
&lt;li&gt;analyst_outputs: filled in by the Analyst Agent and consumed by the Composer&lt;/li&gt;
&lt;li&gt;briefing: filled in by the Composer Agent with the final result&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🧠 Why this matters: &lt;em&gt;it allows us to loop over multiple topics with a single reusable agent, and conditionally switch to the next phase when done.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
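&lt;p&gt;To make the loop concrete, here is a tiny dry run of how that state evolves, with stubbed summaries standing in for the real GPT calls:&lt;/p&gt;

```python
# Dry run of the shared state with stubbed agent outputs (no LLM calls).
from typing import Dict, List, TypedDict


class BriefingState(TypedDict):
    subjects: List[str]
    current_index: int
    analyst_outputs: Dict[str, str]
    briefing: str


state: BriefingState = {
    "subjects": ["inflation", "oil"],
    "current_index": 0,
    "analyst_outputs": {},
    "briefing": "",
}

# Analyst phase: one subject per iteration, mirroring the LangGraph loop.
for i, subject in enumerate(state["subjects"]):
    state["current_index"] = i
    state["analyst_outputs"][subject] = f"stub summary for {subject}"

# Composer phase: consume everything the Analyst wrote.
state["briefing"] = "\n".join(
    f"• {subj}: {summary}" for subj, summary in state["analyst_outputs"].items()
)
print(state["briefing"])
```

&lt;p&gt;In the real graph the loop is driven by a conditional edge plus an index-increment node rather than a Python for loop, but the state transitions are the same.&lt;/p&gt;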

&lt;h4&gt;
  
  
  4.3: The LangGraph Logic
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwljae5rgdhe4c0irs3mz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwljae5rgdhe4c0irs3mz.png" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram shows the core agent flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Analyst Agent&lt;/strong&gt; is the workhorse. It loops through each subject (like “Inflation” or “AI Stocks”), pulling and analyzing news one at a time.&lt;/li&gt;
&lt;li&gt;After each subject, it either:
&lt;ul&gt;
&lt;li&gt;🔁 loops back to process the next subject, or&lt;/li&gt;
&lt;li&gt;➡️ passes control to the &lt;strong&gt;Composer Agent&lt;/strong&gt; once all subjects are processed&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composer Agent&lt;/strong&gt; then compiles the final email briefing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This loop-until-done + handoff pattern is a core use case for LangGraph: small, focused agents collaborating via shared state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langgraph.graph import StateGraph, END
from market_briefing.workflow.state import BriefingState
from market_briefing.agents.analyst import analyst_agent_node
from market_briefing.agents.composer import compose_briefing_node

def increment_index_node(state: BriefingState) -&amp;gt; dict:
    return {"current_index": state["current_index"] + 1}

def should_continue(state: BriefingState) -&amp;gt; str:
    return (
        "continue" if state["current_index"] + 1 &amp;lt; len(state["subjects"]) else "format"
    )

def build_graph():
    graph = StateGraph(BriefingState)

    graph.add_node("analyst", analyst_agent_node)
    graph.add_node("increment_index", increment_index_node)
    graph.add_node("composer", compose_briefing_node)

    graph.add_conditional_edges(
        "analyst",
        should_continue,
        {"continue": "increment_index", "format": "composer"},
    )

    graph.add_edge("increment_index", "analyst")
    graph.set_entry_point("analyst")
    graph.add_edge("composer", END)

    return graph.compile()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This defines a clean LangGraph flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Start with the Analyst Agent&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;If there are more topics, &lt;strong&gt;increment index and repeat&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;When all topics are done, &lt;strong&gt;switch to Composer Agent&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Finish at END&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;🧠&lt;/em&gt; Modular and readable. Each agent focuses on one job — no giant monolith functions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  4.4: The Analyst Agent
&lt;/h4&gt;

&lt;p&gt;This is the workhorse. For each subject, it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pulls relevant news from Finlight&lt;/li&gt;
&lt;li&gt;Passes it to GPT for summarization&lt;/li&gt;
&lt;li&gt;Extracts sentiment&lt;/li&gt;
&lt;li&gt;Stores the result in the state&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s the full code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from datetime import datetime, timedelta, timezone
from langchain_core.messages import HumanMessage
from market_briefing.llm.executor import agent_executor
from market_briefing.workflow.state import BriefingState
import logging

logger = logging.getLogger(__name__)

def analyst_agent_node(state: BriefingState) -&amp;gt; dict:
    subject = state["subjects"][state["current_index"]]

    now = datetime.now(timezone.utc)
    from_time = now.replace(hour=0, minute=0, second=0, microsecond=0)
    to_time = from_time + timedelta(days=1)  # safe across month/year boundaries

    # from_time is timezone-aware, so isoformat() already ends in "+00:00";
    # swap that suffix for "Z" instead of appending a second UTC marker
    iso_from = from_time.isoformat().replace("+00:00", "Z")
    iso_to = to_time.isoformat().replace("+00:00", "Z")

    prompt = f"""
    Summarize financial/political developments on '{subject}' in the last 24h (from {iso_from} to {iso_to}).
    Include what happened, market sentiment, and confidence if available. Keep it short.
    """

    logger.info(f"📤 Prompt to analyst agent:\n{prompt}")

    result = agent_executor.invoke({"messages": [HumanMessage(content=prompt)]})
    outputs = state.get("analyst_outputs", {})
    outputs[subject] = result["messages"][-1].content

    logger.info(f"📝 Analyst output for '{subject}':\n{outputs[subject]}")

    return {"analyst_outputs": outputs}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;🧠&lt;/em&gt; Prompt Design Tip: &lt;em&gt;Keep structure consistent so parsing is easy. You want predictable outputs if you ever want to use this downstream.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  4.5: The Composer Agent
&lt;/h4&gt;

&lt;p&gt;Once the loop is done, the &lt;strong&gt;Composer&lt;/strong&gt; takes over. It assembles the full email — Markdown-style — by summarizing all the analyst outputs into a single readable message.&lt;/p&gt;

&lt;p&gt;Here’s the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from datetime import datetime, timezone
from langchain_core.messages import HumanMessage
from market_briefing.llm.executor import agent_executor
import logging

logger = logging.getLogger(__name__)

def compose_briefing_node(state: dict) -&amp;gt; dict:
    now = datetime.now(timezone.utc).strftime("%B %d, %Y")

    logger.info("🧩 Composing final morning market briefing...")
    logger.info(f"📦 Current subjects and summaries: {state['analyst_outputs']}")

    joined_summaries = "\n\n".join(
        f"🔹 {subj}\n{summary.strip()}"
        for subj, summary in state["analyst_outputs"].items()
        if summary and isinstance(summary, str)
    )

    formatting_prompt = f"""
You are a professional financial news editor. Format the following topic summaries into a clean, well-presented morning market briefing.

------------

✅ Do:
- Use headlines
- Add emojis and good spacing
- Improve clarity where needed
- Skip sections that have no information available
- Don't repeat yourself

🚫 Do not:
- Guess or fill in missing content
- Say anything unrelated to the actual summaries

🗓️ Date: {now}

------------

Use this template:

🗓 &amp;lt;Date as Month Day, Year&amp;gt; – Market Briefing

• &amp;lt;Bullet Point 1&amp;gt;
• &amp;lt;Bullet Point 2&amp;gt;
...
• &amp;lt;Bullet Point n&amp;gt;

📊 Sentiment: &amp;lt;General sentiment&amp;gt;

------------

Here are the summaries:

{joined_summaries}
"""

    logger.info(f"📤 Prompt to composer agent:\n{formatting_prompt}")

    result = agent_executor.invoke(
        {"messages": [HumanMessage(content=formatting_prompt)]}
    )
    final_message = result["messages"][-1].content

    logger.info(f"📝 Final formatted briefing:\n{final_message}")

    return {"briefing": final_message}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;💡&lt;/em&gt; This is where tone and readability matter. &lt;em&gt;The analyst agent gives raw data — the composer agent gives it polish.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  4.6: Running the Whole Thing
&lt;/h4&gt;

&lt;p&gt;You trigger the whole workflow from a handler script. This is where subjects are defined and the LangGraph pipeline is run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from market_briefing.config import SUBJECTS
from market_briefing.utils.html_formatter import markdown_to_html
from market_briefing.sender.email_sender import send_daily_email
from market_briefing.workflow.graph import build_graph
import logging

logging.basicConfig(level=logging.INFO) 

def main(event=None, context=None):
    state = {
        "subjects": SUBJECTS,
        "current_index": 0,
        "analyst_outputs": {},
    }

    graph = build_graph()
    result = graph.invoke(state)

    briefing = result["briefing"]
    print("✅ Morning Briefing Generated:\n", briefing)

    html = markdown_to_html(briefing)
    send_daily_email(html)

    return {"statusCode": 200, "briefing": briefing}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simple, clean, and callable from cron, Lambda, or a notebook.&lt;/p&gt;
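&lt;p&gt;For the cron route, a crontab entry along these lines (the path is illustrative) matches the 6:45am schedule from earlier:&lt;/p&gt;

```plaintext
# m   h   dom mon dow   command
45    6   *   *   *     /usr/bin/python3 -m market_briefing.handler
```

&lt;p&gt;This assumes the package is on the interpreter's path and that the handler module calls main() when executed directly; adjust the invocation to however your checkout is laid out.&lt;/p&gt;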

&lt;h4&gt;
  
  
  4.7: Sending the Email
&lt;/h4&gt;

&lt;p&gt;Once the Composer Agent builds the final message, we need to send it as an email. Here’s the full implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

from market_briefing.config import EMAIL_PASS, EMAIL_SUBJECT, EMAIL_TO, EMAIL_USER

def send_daily_email(html_content: str):
    # Create MIME message
    msg = MIMEMultipart("alternative")
    msg["From"] = EMAIL_USER
    msg["To"] = EMAIL_TO
    msg["Subject"] = EMAIL_SUBJECT

    # Attach HTML content
    msg.attach(MIMEText(html_content, "html"))

    # Send email
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(EMAIL_USER, EMAIL_PASS)
        server.send_message(msg)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This uses standard smtplib + MIMEText to format and send HTML emails via Gmail.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;📫&lt;/em&gt; Pro Tip: &lt;em&gt;You can easily swap in any other provider — Mailgun, SendGrid, Outlook — or even push to Slack or Telegram.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
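&lt;p&gt;As one concrete variation (the webhook URL and helper names below are hypothetical, not part of this repo), pushing the same briefing to Slack is a single HTTP POST to an incoming webhook:&lt;/p&gt;

```python
# Hypothetical Slack delivery via an incoming webhook; needs the `requests` package.
import json


def build_slack_payload(briefing: str) -> dict:
    """Slack incoming webhooks accept a minimal {"text": ...} JSON body."""
    return {"text": briefing}


def send_to_slack(webhook_url: str, briefing: str) -> None:
    import requests  # imported lazily so the payload helper works without requests

    requests.post(
        webhook_url,
        data=json.dumps(build_slack_payload(briefing)),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )


# Usage (placeholder URL):
# send_to_slack("https://hooks.slack.com/services/T000/B000/XXXX", briefing_text)
```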

&lt;h4&gt;
  
  
  4.8: Going Serverless with AWS Lambda
&lt;/h4&gt;

&lt;p&gt;Want your system to run every morning, even while you sleep? Here’s how I deployed it with the &lt;a href="https://www.serverless.com/" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: market-briefing

plugins:
  - serverless-python-requirements

provider:
  name: aws
  runtime: python3.11
  region: us-east-1
  timeout: 60
  memorySize: 512
  environment:
    PYTHONPATH: /var/task/
    OPEN_AI_API_KEY: ${env:OPEN_AI_API_KEY}
    FINLIGHT_API_KEY: ${env:FINLIGHT_API_KEY}
    EMAIL_USER: ${env:EMAIL_USER}
    EMAIL_PASS: ${env:EMAIL_PASS}
    EMAIL_TO: ${env:EMAIL_TO}
    EMAIL_SUBJECT: ${env:EMAIL_SUBJECT}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          Resource: "*"

functions:
  briefer:
    handler: market_briefing/handler.main
    events:
      - schedule:
          rate: cron(0 7 * * ? *) # every day at 07:00 UTC
          enabled: true

package:
  patterns:
    - '!**'
    - market_briefing/**

custom:
  pythonRequirements:
    dockerizePip: true
    slim: false
    strip: false
    fileName: requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy your Python app as a Lambda function&lt;/li&gt;
&lt;li&gt;Schedule it to run daily (cron-style)&lt;/li&gt;
&lt;li&gt;Keep your environment variables safe with .env or AWS Secrets Manager&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;☁️&lt;/em&gt; Tip: &lt;em&gt;If you’re using Ollama or local models, you can skip this and run it via a cron job on your own machine.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Bonus: The Config File
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from dotenv import load_dotenv
import os

load_dotenv()

OPEN_AI_API_KEY = os.environ["OPEN_AI_API_KEY"]
FINLIGHT_API_KEY = os.environ["FINLIGHT_API_KEY"]
EMAIL_USER = os.getenv("EMAIL_USER")
EMAIL_PASS = os.getenv("EMAIL_PASS")
EMAIL_TO = os.getenv("EMAIL_TO")
EMAIL_SUBJECT = os.getenv("EMAIL_SUBJECT", "Daily Email")

SUBJECTS = ["Trump tariffs", "Nvidia", "China"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can easily:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switch out topics&lt;/li&gt;
&lt;li&gt;Plug in new LLM providers&lt;/li&gt;
&lt;li&gt;Redirect output to different email addresses&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Recap: How It All Comes Together
&lt;/h4&gt;

&lt;p&gt;Let’s tie it all together:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You define &lt;strong&gt;what you care about&lt;/strong&gt; (in SUBJECTS)&lt;/li&gt;
&lt;li&gt;Each subject is processed by the &lt;strong&gt;Analyst Agent&lt;/strong&gt;, which pulls news, summarizes it, and scores sentiment&lt;/li&gt;
&lt;li&gt;Results are stored in a shared &lt;strong&gt;state&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;When all subjects are processed, the &lt;strong&gt;Composer Agent&lt;/strong&gt; assembles a clean, skimmable morning email&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Email Sender&lt;/strong&gt; delivers it to your inbox&lt;/li&gt;
&lt;li&gt;All of this runs daily via cron or Lambda&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It’s modular, clean, and surprisingly easy to extend.&lt;/p&gt;

&lt;h3&gt;
  
  
  🚀 What’s Next?
&lt;/h3&gt;

&lt;p&gt;Right now, this works great with a static list of subjects. But there’s so much potential to go further.&lt;/p&gt;

&lt;p&gt;Here’s what I’m exploring next:&lt;/p&gt;

&lt;h4&gt;
  
  
  Smart Topic Detection
&lt;/h4&gt;

&lt;p&gt;Instead of passing predefined subjects, let an &lt;strong&gt;LLM scan the entire news feed&lt;/strong&gt;, detect key themes, and generate briefings dynamically.&lt;/p&gt;

&lt;h4&gt;
  
  
  Portfolio-Aware Briefings
&lt;/h4&gt;

&lt;p&gt;Pull in your portfolio holdings or watchlist, and prioritize news that impacts &lt;em&gt;your&lt;/em&gt; assets.&lt;/p&gt;

&lt;h4&gt;
  
  
  Local Execution
&lt;/h4&gt;

&lt;p&gt;Swap OpenAI for &lt;strong&gt;Mistral&lt;/strong&gt;, &lt;strong&gt;Mixtral&lt;/strong&gt;, or &lt;strong&gt;Gemma&lt;/strong&gt; running locally via &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;, for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full data privacy&lt;/li&gt;
&lt;li&gt;No API costs&lt;/li&gt;
&lt;li&gt;Offline support&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  More Data Sources
&lt;/h4&gt;

&lt;p&gt;Add financial calendars, earnings transcripts, Twitter trends, Reddit sentiment, or macro indicators.&lt;/p&gt;

&lt;h3&gt;
  
  
  💻 &lt;strong&gt;Want the full code?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I’m open-sourcing the entire repo. Just drop a comment if you want early access to the GitHub link. You can run it locally or in the cloud — whatever fits your setup.&lt;/p&gt;

</description>
      <category>news</category>
      <category>finance</category>
      <category>ai</category>
      <category>gpt</category>
    </item>
  </channel>
</rss>
