<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Leanid Herasimau</title>
    <description>The latest articles on Forem by Leanid Herasimau (@herasimau).</description>
    <link>https://forem.com/herasimau</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2476052%2F2a98a1bb-959d-48b5-89e2-f669c77bc2cf.jpg</url>
      <title>Forem: Leanid Herasimau</title>
      <link>https://forem.com/herasimau</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/herasimau"/>
    <language>en</language>
    <item>
      <title>Building Processes in Discovery in the Team</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Wed, 15 Oct 2025 20:49:47 +0000</pubDate>
      <link>https://forem.com/herasimau/building-processes-in-discovery-in-the-team-24ni</link>
      <guid>https://forem.com/herasimau/building-processes-in-discovery-in-the-team-24ni</guid>
      <description>&lt;p&gt;This guide provides a comprehensive checklist for establishing the essential processes for a new Product Discovery team. Discovery and Delivery teams can operate separately, but their workflows must be tightly integrated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7o9hp12d85t4uziw2swe.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7o9hp12d85t4uziw2swe.webp" alt="Image" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A typical Discovery team includes product leads, managers, analysts, and UX designers. For technical tasks, team leads and architects join the effort. The team's primary goal is to validate and refine initiatives before they land in the product backlog for the Delivery team to implement.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Discovery&lt;/strong&gt; answers the questions, 'What should we build?' and 'Should we build it at all?'&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Delivery&lt;/strong&gt; answers, 'How should we build it?'&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Discovery Process Checklist
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Team Setup and Roles
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Appoint a Discovery Master:&lt;/strong&gt; This role, which can be filled by a Product Manager, Analyst, or Designer, leads the discovery process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ensure Agile Proficiency:&lt;/strong&gt; The Discovery Master should be well-versed in Agile principles by reading the &lt;a href="https://scrumguides.org/scrum-guide.html" rel="noopener noreferrer"&gt;Scrum Guide&lt;/a&gt;, completing foundational training, and passing the &lt;a href="https://www.scrum.org/open-assessments/scrum-open" rel="noopener noreferrer"&gt;Scrum Open assessment&lt;/a&gt; with a score of 85% or higher.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Sprint Framework &amp;amp; Ceremonies
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Structure Work in Sprints:&lt;/strong&gt; Organize work into 1- or 2-week sprints to validate and refine initiatives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define a Sprint Goal:&lt;/strong&gt; Start each sprint by defining a clear goal. Prioritize tasks that contribute to this goal first, and don't take on more work than the team's average velocity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dedicate Time for Research:&lt;/strong&gt; Allocate at least 30% of each sprint to clarifying problem and solution hypotheses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run Key Meetings:&lt;/strong&gt; Conduct regular Sprint Planning, Daily Standups, Sprint Reviews (Demos), and Retrospectives to maintain alignment and continuous improvement.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Definitions and Criteria
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Definition of Ready (DoR):&lt;/strong&gt; Establish clear criteria for when a task is ready to be taken into a sprint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Definition of Done (DoD):&lt;/strong&gt; Define and adhere to strict quality criteria for completed initiatives. This DoD must be synchronized with the Delivery team's DoR.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Acceptance Criteria:&lt;/strong&gt; Every task in your project management tool must have specific acceptance criteria or a clear description of the expected outcome.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backlog and Roadmap Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prioritize with a Scoring Model:&lt;/strong&gt; Use a framework like RICE to assign a confidence level to each backlog item, iteratively validating and adjusting its priority.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain a Forward-Looking Roadmap:&lt;/strong&gt; Keep a Discovery roadmap planned for at least two sprints ahead to ensure the Delivery team has a steady stream of validated work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep Organized Backlogs:&lt;/strong&gt; Manage the Discovery backlog (problems and solutions), the sprint backlog, and a separate backlog for retrospective action items in your project management tool.&lt;/li&gt;
&lt;/ul&gt;
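
A hedged illustration of the RICE scoring mentioned above (the exact scales for each factor are whatever your team agrees on): the priority score is simply Reach times Impact times Confidence divided by Effort.

```python
def rice_score(reach, impact, confidence, effort):
    """RICE prioritization score: Reach x Impact x Confidence / Effort.

    reach      - users affected per period (e.g. per quarter)
    impact     - estimated effect per user (e.g. on a 0.25-3 scale)
    confidence - how sure the team is, as a fraction (0.0-1.0)
    effort     - person-months (or sprints) of work
    """
    return reach * impact * confidence / effort

# Example: 1000 users reached, impact 2, 80% confidence, 4 person-months
# of effort gives a score of 400.0.
```

Re-running the score as confidence changes during Discovery is exactly the "iteratively validating and adjusting its priority" step: a validated hypothesis raises confidence, which raises the score.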

&lt;h3&gt;
  
  
  Customer-Centric Approach
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Utilize All Feedback Channels:&lt;/strong&gt; The team must be proficient in gathering insights from all customer channels, including support, sales, app store reviews, and UX research.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralize Insights:&lt;/strong&gt; Create a single repository for storing all hypotheses, problems, and customer requests to identify recurring themes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prioritize the Voice of the Customer:&lt;/strong&gt; This should be a guiding principle both before and after a product launch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assign a VoC Champion:&lt;/strong&gt; Designate a rotating role within the team each sprint to be responsible for analyzing customer feedback.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Understanding Discovery Sprints
&lt;/h2&gt;

&lt;p&gt;Discovery sprints run in parallel with development sprints, aiming to produce well-researched, validated initiatives for the product backlog. They follow standard Scrum events but stay at least two sprints ahead of development to prevent bottlenecks. The process is rooted in Design Thinking: moving from a symptom to a spectrum of problems, and then to a spectrum of solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problems Solved by This Approach
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fills the gap in the Scrum Guide on how validated items get into the product backlog.&lt;/li&gt;
&lt;li&gt;Ensures that only &lt;em&gt;justified&lt;/em&gt; initiatives make it into the backlog.&lt;/li&gt;
&lt;li&gt;Involves developers early, reducing the risk of poorly defined tasks.&lt;/li&gt;
&lt;li&gt;Keeps designers, analysts, and product managers fully engaged in meaningful work.&lt;/li&gt;
&lt;li&gt;Promotes systematic evaluation of alternative solutions instead of settling for the first idea.&lt;/li&gt;
&lt;li&gt;Feeds the backlog with high-priority items regularly, not just what's available.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Potential Side Effects and Solutions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem:&lt;/strong&gt; Developers feel disconnected. &lt;strong&gt;Solution:&lt;/strong&gt; Maintain an open calendar of Discovery events and invite developers to participate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Problem:&lt;/strong&gt; Lack of transparency between teams. &lt;strong&gt;Solution:&lt;/strong&gt; Align schedules. Hold Delivery demos and stand-ups just before the Discovery team's corresponding events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Problem:&lt;/strong&gt; Development is waiting for tasks. &lt;strong&gt;Solution:&lt;/strong&gt; The Discovery team must stay 1-2 sprints ahead of the Delivery team.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Discovery Definition of Done (DoD)
&lt;/h2&gt;

&lt;p&gt;The output of a discovery sprint is either a validated &lt;strong&gt;problem&lt;/strong&gt; (to be analyzed further) or a validated &lt;strong&gt;solution&lt;/strong&gt; (ready for the delivery team). Therefore, the DoD has two parts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhexavjj2bp335y5a9gr3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhexavjj2bp335y5a9gr3.webp" alt="Image" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  DoD for a Problem
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;What problem are we solving?&lt;/li&gt;
&lt;li&gt;What is the root cause of the problem?&lt;/li&gt;
&lt;li&gt;How do we know this problem exists (data, feedback)?&lt;/li&gt;
&lt;li&gt;Who experiences this problem, and how many of them are there?&lt;/li&gt;
&lt;li&gt;How critical is the problem for them (blocker, annoyance)?&lt;/li&gt;
&lt;li&gt;How will we measure success if we solve it?&lt;/li&gt;
&lt;li&gt;Which other teams or product areas are affected?&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  DoD for a Solution
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;What is the proposed solution?&lt;/li&gt;
&lt;li&gt;What alternative solutions have been considered (including doing nothing)?&lt;/li&gt;
&lt;li&gt;Why is this the best solution for the user and the business?&lt;/li&gt;
&lt;li&gt;How will we test that the solution works (e.g., A/B test, user testing)?&lt;/li&gt;
&lt;li&gt;What are the success and failure metrics for the test?&lt;/li&gt;
&lt;li&gt;How does this solution fit into the overall user journey?&lt;/li&gt;
&lt;li&gt;Is this an MVP or the final version? What does the ideal state look like?&lt;/li&gt;
&lt;li&gt;How will we inform users about this change?&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Useful Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.jpattonassociates.com/services/training/product-discovery-immersion/" rel="noopener noreferrer"&gt;Jeff Patton on Product Discovery&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.designcouncil.org.uk/news-opinion/design-process-what-double-diamond" rel="noopener noreferrer"&gt;The Double Diamond Design Process&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://designsprintkit.withgoogle.com/" rel="noopener noreferrer"&gt;The Official Guide to Google Design Sprints&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://scrumguides.org/scrum-guide.html" rel="noopener noreferrer"&gt;The Official Scrum Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.scrum.org/open-assessments/scrum-open" rel="noopener noreferrer"&gt;Scrum Open Assessment&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/agile" rel="noopener noreferrer"&gt;#agile&lt;/a&gt;&lt;a href="https://suddo.io/tag/#scrum" rel="noopener noreferrer"&gt;#scrum&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agile</category>
      <category>management</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Anduril Released the EagleEye Helmet That Lets Soldiers See Through Walls</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Wed, 15 Oct 2025 11:57:22 +0000</pubDate>
      <link>https://forem.com/herasimau/anduril-released-the-eagleeye-helmet-that-allows-seeing-through-walls-2b4b</link>
      <guid>https://forem.com/herasimau/anduril-released-the-eagleeye-helmet-that-allows-seeing-through-walls-2b4b</guid>
      <description>&lt;p&gt;Defense technology company Anduril has &lt;a href="https://www.anduril.com/article/anduril-s-eagleeye-puts-mission-command-and-ai-directly-into-the-warfighter-s-helmet/" rel="noopener noreferrer"&gt;announced&lt;/a&gt; EagleEye, a groundbreaking AI-powered mixed reality helmet. Developed in partnership with Meta, this device is designed to give soldiers tactical advantages, including the ability to see through walls using a network of sensors, drones, and cameras.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyiioaz7fs2w9dzs8tbn.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyiioaz7fs2w9dzs8tbn.webp" alt="Image" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Capabilities
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Heads-up display for mission instructions and data overlays.&lt;/li&gt;
&lt;li&gt;Spatial audio and radio frequency (RF) detection for heightened awareness.&lt;/li&gt;
&lt;li&gt;Seamless control of drones and other robotic military systems.&lt;/li&gt;
&lt;li&gt;Real-time mission rehearsal and coordination in a 3D environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F793264qo6f03hq23psep.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F793264qo6f03hq23psep.webp" alt="Image" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Advanced Battlefield Integration
&lt;/h3&gt;

&lt;p&gt;The helmet features both a transparent day mode and a digital night vision mode, along with precise tracking of teammates' locations. By connecting to Anduril's Lattice sensor network, it aggregates real-time data from across the battlefield, enabling soldiers to detect and track threats even when their direct line of sight is obstructed by terrain or buildings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Protection and Awareness
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Full ballistic protection and blast wave suppression.&lt;/li&gt;
&lt;li&gt;Expanded field of view with integrated rear and side-view sensors.&lt;/li&gt;
&lt;li&gt;Advanced threat detection to alert operators of hidden dangers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8frq1it2e8p1durhasy0.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8frq1it2e8p1durhasy0.webp" alt="Image" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We don't want to give warfighters a new tool—we are giving them a new teammate. The idea of an AI partner integrated into your display has been a concept for decades. EagleEye is the first time it has become a reality.&lt;/p&gt;

&lt;p&gt;Anduril&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  A Strategic Partnership
&lt;/h3&gt;

&lt;p&gt;This collaboration builds on Anduril's existing work, which includes producing border control systems and drones. The company already provides software for the army's current mixed-reality goggles based on Microsoft HoloLens hardware. In February, Anduril and Microsoft announced an expanded partnership, with Anduril taking over the manufacturing and future development of the HoloLens 2-based military program.&lt;/p&gt;

&lt;p&gt;The partnership is further strengthened by Meta's increasing involvement in defense. This summer, Meta CTO Andrew Bosworth was one of four tech executives who became lieutenant colonels in the U.S. Army Reserve, leading a new unit called Detachment 201. Together, Anduril and Meta are now competing for a contract to create the next generation of army displays.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt;&lt;a href="https://suddo.io/tag/#anduril" rel="noopener noreferrer"&gt;#anduril&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Ghost 6.0: Distributed Publishing, Built-in Analytics, and $100 Million for Independent Authors</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Tue, 14 Oct 2025 19:43:35 +0000</pubDate>
      <link>https://forem.com/herasimau/ghost-60-distributed-publishing-built-in-analytics-and-100-million-for-independent-authors-538</link>
      <guid>https://forem.com/herasimau/ghost-60-distributed-publishing-built-in-analytics-and-100-million-for-independent-authors-538</guid>
      <description>&lt;p&gt;👻 &lt;strong&gt;Ghost&lt;/strong&gt; &lt;a href="https://ghost.org/" rel="noopener noreferrer"&gt;is a popular open-source platform for blogging&lt;/a&gt;, newsletters, and media projects, launched in 2013 as an alternative to WordPress. It focuses on &lt;strong&gt;content-centric publishing free from advertising&lt;/strong&gt; , featuring a minimalist editor, subscription support, and paid content options. The platform is fully open-source for self-hosting, while its commercial version, &lt;strong&gt;Ghost(Pro)&lt;/strong&gt;, provides managed cloud hosting and monetization tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkvvg3u7oeo4dtlx5cv9.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkvvg3u7oeo4dtlx5cv9.webp" alt="Image" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Think of Ghost as a CMS with the philosophy of Substack. You're not just building a website; you're creating an independent media outlet where you own everything—from the database to your audience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🚀 What's New in Ghost 6.0
&lt;/h2&gt;

&lt;p&gt;The developers are calling this the "biggest release in the platform's history," introducing major features like &lt;strong&gt;networked publishing&lt;/strong&gt; and &lt;strong&gt;native analytics&lt;/strong&gt;, alongside a modernized infrastructure and numerous enhancements for creators and developers.&lt;/p&gt;




&lt;h3&gt;
  
  
  🌐 Networked Publishing: Joining the Open Social Web
&lt;/h3&gt;

&lt;p&gt;Ghost 6.0 integrates with &lt;strong&gt;decentralized social networks&lt;/strong&gt; using the &lt;strong&gt;ActivityPub&lt;/strong&gt; protocol. This means readers can now &lt;strong&gt;discover, like, comment on, and follow&lt;/strong&gt; your posts directly from platforms like &lt;strong&gt;Mastodon, Threads, WordPress, Flipboard,&lt;/strong&gt; and other compatible services.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📣 Essentially, every Ghost publication now acts as a 'social profile' visible across the entire federated web.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This marks a return to the spirit of the &lt;strong&gt;classic blogosphere&lt;/strong&gt;, but with modern tools. It fosters direct interaction between creators and readers, free from algorithms, censorship, or hidden ranking.&lt;/p&gt;

&lt;h3&gt;
  
  
  📊 Native Analytics: No Third-Party Scripts Needed
&lt;/h3&gt;

&lt;p&gt;Previously, creators had to rely on third-party solutions. Ghost now introduces its own &lt;strong&gt;native analytics platform&lt;/strong&gt; that operates &lt;strong&gt;without cookies or external trackers&lt;/strong&gt;, providing insights on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Page views and traffic sources&lt;/li&gt;
&lt;li&gt;Engagement metrics for different audience segments (visitors, free, and paid)&lt;/li&gt;
&lt;li&gt;Real-time performance of newsletters and posts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The backend is powered by &lt;strong&gt;ClickHouse&lt;/strong&gt; through a partnership with &lt;strong&gt;Tinybird&lt;/strong&gt;. For self-hosted users, a free integration with the Tinybird API is available, while on Ghost(Pro), analytics is enabled by default.&lt;/p&gt;

&lt;h3&gt;
  
  
  💸 A $100 Million Milestone for Independent Creators
&lt;/h3&gt;

&lt;p&gt;Ghost co-founder John O'Nolan announced that creators on the platform have collectively earned over &lt;strong&gt;$100 million&lt;/strong&gt;. What began as a sustainable alternative to ad-based media has grown into a thriving ecosystem of independent publishers. Ghost remains managed by a &lt;strong&gt;non-profit foundation&lt;/strong&gt;, ensuring all revenue is reinvested into the open-source platform.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💬 "We're proving that independent media isn't just surviving—it's thriving."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  ⚙️ Additional User Experience Upgrades
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Custom Newsletters:&lt;/strong&gt; Design custom layouts, preview emails, and even edit them after sending, with built-in anti-spam protection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expanded Localization:&lt;/strong&gt; The interface now supports 60 languages with automatic translation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native Comments:&lt;/strong&gt; Engage your community with built-in comments featuring moderation, sorting, and threaded replies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ghost Explore:&lt;/strong&gt; A new discovery platform to showcase the best independent publications on Ghost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Payments:&lt;/strong&gt; Accept payments in 135 currencies with support for Apple Pay, Google Pay, and one-time donations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New Themes:&lt;/strong&gt; Fresh designs tailored for news (Source), personal blogs (Solo), and podcasts (Episode).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💻 Developer-Focused Changes
&lt;/h3&gt;

&lt;p&gt;Ghost 6.0 officially transitions to &lt;strong&gt;Docker Compose for installation&lt;/strong&gt;. This major update simplifies deploying a multi-service infrastructure (main server, analytics, social gateway) for self-hosted users.&lt;/p&gt;

&lt;p&gt;Additional updates include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;New Official Stack:&lt;/strong&gt; Ubuntu 24, Node.js 22, and MySQL 8.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code Extension:&lt;/strong&gt; A new extension provides syntax highlighting and live previews.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google AMP Deprecated:&lt;/strong&gt; Support for Google AMP has been completely removed (RIP 🪦).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme Compatibility:&lt;/strong&gt; The &lt;code&gt;gscan&lt;/code&gt; tool now checks themes for compatibility with version 6.0.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Bigger Picture: A Decentralized Future
&lt;/h2&gt;

&lt;p&gt;Ghost 6.0 is more than just an update; it's a significant step toward a &lt;strong&gt;decentralized and sustainable internet&lt;/strong&gt; where creators, not platforms, own their content. It signals a return to the open blogosphere, rebuilt on a modern tech stack with federated networks, native analytics, and a flexible architecture.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;After 10 years of development and $100 million earned by creators, it's clear: this model works.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/ghost" rel="noopener noreferrer"&gt;#ghost&lt;/a&gt;&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>writing</category>
      <category>news</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I Wrote an HTTP Server in Assembly (and It Actually Works)</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Sat, 04 Oct 2025 09:35:18 +0000</pubDate>
      <link>https://forem.com/herasimau/i-wrote-an-http-server-in-assembly-and-it-actually-works-2icb</link>
      <guid>https://forem.com/herasimau/i-wrote-an-http-server-in-assembly-and-it-actually-works-2icb</guid>
      <description>&lt;p&gt;Today, we're doing just that. This is a wild, impractical, and incredibly fun experiment. Our mission: to build a functional HTTP server that serves a simple HTML page, using nothing but pure &lt;strong&gt;ARM64 assembly language&lt;/strong&gt; on an Apple Silicon Mac. This isn't about replacing Nginx; it's about peeling back the layers of magic to understand what a web server &lt;em&gt;really&lt;/em&gt; is.&lt;/p&gt;

&lt;h3&gt;
  
  
  But... Why Would Anyone Do This?
&lt;/h3&gt;

&lt;p&gt;Let's be clear: you should not write your company's next microservice in assembly. This is an exercise in pure, unadulterated learning and curiosity. We're doing this to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Demystify Networking:&lt;/strong&gt; See firsthand that a web server is just a loop: accept a connection, write some text, close the connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understand System Calls:&lt;/strong&gt; Learn how a program asks the operating system to do things like open network ports and handle files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Appreciate Abstractions:&lt;/strong&gt; After this, you'll have a newfound respect for the frameworks that handle all this complexity for you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Have Fun:&lt;/strong&gt; Embrace the hacker spirit of getting as close to the metal as possible!&lt;/li&gt;
&lt;/ul&gt;
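
To make the "just a loop" claim concrete before we drop into assembly, here is the same syscall sequence (socket, bind, listen, accept, write, close) sketched in Python, serving a single client. This is a reference model only; the port number and response text here are illustrative, not part of the build below.

```python
import socket

# An assumed minimal response, mirroring the structure of the one the
# assembly version sends (plain text body for simplicity).
RESPONSE = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/plain; charset=utf-8\r\n"
    b"Connection: close\r\n"
    b"\r\n"
    b"Hello from the accept loop\n"
)

def serve_once(port):
    """One trip through the server loop: accept, write, close."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(16)
    conn, _ = srv.accept()   # blocks here until a client connects
    conn.sendall(RESPONSE)   # ignore the request; always send the same bytes
    conn.close()
    srv.close()
```

Wrap the accept/send/close portion in an infinite loop and you have, functionally, the entire server we are about to write by hand.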




&lt;h2&gt;
  
  
  Step 1: The Blueprint - Setup and Constants
&lt;/h2&gt;

&lt;p&gt;Every assembly program starts with some boilerplate. We need to declare our main function and tell the assembler which system functions we plan to use. On macOS, we don't use raw syscalls directly; instead, we call functions from the system library (libSystem), which is a more stable and portable approach.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// http_server_arm64_macos.s
.text
.globl _main

// We'll be calling these C functions from the macOS system library
.extern _socket
.extern _setsockopt
.extern _bind
.extern _listen
.extern _accept
.extern _write
.extern _close
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we define some constants. These are just human-readable names for numbers that the socket API expects. You can find these values in the system headers on your Mac (e.g., in &lt;code&gt;/usr/include/sys/socket.h&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Constants (BSD/macOS values)
.equ AF_INET, 2 // Address Family: IPv4
.equ SOCK_STREAM, 1 // Socket Type: TCP
.equ SOL_SOCKET, 0xffff // Socket Level for setsockopt
.equ SO_REUSEADDR, 0x0004 // Allow reusing local addresses

.equ RESP_LEN, 172 // Total bytes in our HTTP response
.equ ADDR_LEN, 16 // sizeof(struct sockaddr_in)
.equ BACKLOG, 16 // Max pending connections for listen()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Opening a Line - Creating the Socket
&lt;/h2&gt;

&lt;p&gt;The first real action is to ask the OS for a socket. A socket is like a file handle, but for network communication. According to the &lt;a href="https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man2/syscall.2.html" rel="noopener noreferrer"&gt;ARM64 calling convention on macOS&lt;/a&gt;, the first few arguments to a function are passed in registers &lt;code&gt;x0&lt;/code&gt;, &lt;code&gt;x1&lt;/code&gt;, &lt;code&gt;x2&lt;/code&gt;, etc. The return value comes back in &lt;code&gt;x0&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_main:
    // socket(AF_INET, SOCK_STREAM, 0)
    mov x0, #AF_INET // Argument 1: Domain (IPv4)
    mov x1, #SOCK_STREAM // Argument 2: Type (TCP)
    mov x2, #0 // Argument 3: Protocol (0 for default)
    bl _socket // Branch and Link (call the function)

    // The new socket's file descriptor is now in x0.
    // We save it in x19, a 'callee-saved' register, so it won't be overwritten.
    mov x19, x0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Claiming Our Turf - Binding to a Port
&lt;/h2&gt;

&lt;p&gt;Now that we have a socket, we need to tell the OS, "Hey, I want this socket to listen on port 8585 for any incoming traffic." This is called &lt;strong&gt;binding&lt;/strong&gt;. To do this, we need to construct a special C struct in memory called &lt;code&gt;sockaddr_in&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here’s what that struct looks like in our data section. It's a precise sequence of bytes representing our desired address (0.0.0.0) and port (8585).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// This goes in the .data section at the end of the file
.data
.align 4

// sockaddr_in for 0.0.0.0:8585
addr:
    .byte 16 // sin_len (16 bytes total)
    .byte AF_INET // sin_family (IPv4)
    .hword 0x8921 // sin_port (8585 in network byte order)
    .word 0 // sin_addr (0.0.0.0 means INADDR_ANY)
    .quad 0 // sin_zero[8] (padding)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Notice the port: 8585 is 0x2189 in hex. We write it as 0x8921 because networks use 'big-endian' byte order, while our M1 Mac is 'little-endian'. We have to pre-swap the bytes!&lt;/p&gt;
&lt;/blockquote&gt;
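
You can sanity-check that byte swap with a few lines of Python, assuming nothing beyond the standard library: network order puts the high byte first on the wire, and reading those same two bytes back as a little-endian halfword yields the constant we hard-coded.

```python
port = 8585                        # 0x2189 in hex

# Network byte order (big-endian): high byte first on the wire.
wire = port.to_bytes(2, "big")
assert wire == b"\x21\x89"

# Reinterpreting those two bytes as a little-endian halfword gives
# exactly the value we wrote in the .hword directive:
assert int.from_bytes(wire, "little") == 0x8921
```

This is precisely what the C macro htons() does for you; in assembly we simply bake the swapped value into the data section.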

&lt;p&gt;With the struct defined, we can now call &lt;code&gt;_bind&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    // bind(server_fd, &amp;amp;addr, sizeof(addr))
    mov x0, x19 // Arg 1: Our socket fd
    adrp x1, addr@PAGE // Get the high part of the address of 'addr'
    add x1, x1, addr@PAGEOFF // Add the low part to get the full address
    mov x2, #ADDR_LEN // Arg 3: The size of the struct
    bl _bind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: The Server Loop - Listen, Accept, Write, Close
&lt;/h2&gt;

&lt;p&gt;This is the heart of any server. It's an infinite loop that waits for a client, serves them, and then waits for the next one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    // listen(server_fd, BACKLOG)
    mov x0, x19
    mov x1, #BACKLOG
    bl _listen

// This label marks the start of our infinite loop
accept_loop:
    // accept(server_fd, NULL, NULL) -&amp;gt; This BLOCKS until a client connects!
    mov x0, x19
    mov x1, #0
    mov x2, #0
    bl _accept
    mov x20, x0 // Save the new client_fd in x20

    // write(client_fd, response, RESP_LEN)
    mov x0, x20 // Arg 1: The client's socket
    adrp x1, response@PAGE // Get the address of our HTML response
    add x1, x1, response@PAGEOFF
    mov x2, #RESP_LEN // Arg 3: How many bytes to write
    bl _write

    // close(client_fd)
    mov x0, x20
    bl _close

    b accept_loop // Unconditional branch: Go back and wait for another client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
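&lt;p&gt;For comparison, this listen/accept/write/close cycle is the classic iterative server pattern, which in C is just a handful of lines (a sketch with a shortened response body; &lt;code&gt;serve_forever&lt;/code&gt; is our own name):&lt;/p&gt;

```c
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

// A shortened canned reply, standing in for the full response in .data.
static const char response[] =
    "HTTP/1.1 200 OK\r\n"
    "Content-Length: 2\r\n"
    "Connection: close\r\n"
    "\r\n"
    "ok";

// Mirrors the assembly: listen once, then accept/write/close forever.
static void serve_forever(int listen_fd) {
    listen(listen_fd, 16);                          // BACKLOG = 16
    for (;;) {
        int client = accept(listen_fd, NULL, NULL); // blocks until a client connects
        if (client < 0) continue;                   // e.g. interrupted by a signal
        write(client, response, strlen(response));
        close(client);                              // then branch back, like `b accept_loop`
    }
}
```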



&lt;p&gt;The final piece is the actual HTTP response we're sending. It's just a block of ASCII text in our data section, with all the required headers and our simple HTML.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Prebuilt HTTP/1.1 response (RESP_LEN must match total bytes here)
response:
    .ascii "HTTP/1.1 200 OK\r\n"
    .ascii "Content-Type: text/html; charset=utf-8\r\n"
    .ascii "Content-Length: 74\r\n"
    .ascii "Connection: close\r\n"
    .ascii "\r\n"
    .ascii "&amp;lt;!doctype html&amp;gt;&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from Assembler :)&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;\n"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Putting It All Together: The Complete File
&lt;/h2&gt;

&lt;p&gt;Here is the complete source code. Save it as &lt;code&gt;http_server_arm64_macos.s&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// http_server_arm64_macos.s
// Minimal HTTP server on macOS ARM64 (Apple Silicon)
// Listens on port 8585 and replies with a tiny HTML page.

        .text
        .globl _main

        .extern _socket
        .extern _setsockopt
        .extern _bind
        .extern _listen
        .extern _accept
        .extern _write
        .extern _close

// ------------------------------------------------------------
// Constants (BSD/macOS values)
// ------------------------------------------------------------
        .equ AF_INET, 2
        .equ SOCK_STREAM, 1
        .equ SOL_SOCKET, 0xffff
        .equ SO_REUSEADDR, 0x0004

        .equ RESP_LEN, 172
        .equ ADDR_LEN, 16
        .equ BACKLOG, 16

// ------------------------------------------------------------
// main()
// ------------------------------------------------------------
_main:
        // socket(AF_INET, SOCK_STREAM, 0)
        mov x0, #AF_INET
        mov x1, #SOCK_STREAM
        mov x2, #0
        bl _socket
        mov x19, x0 // preserve server fd

        // setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR, &amp;amp;one, 4)
        mov x0, x19
        movz x1, #SOL_SOCKET
        mov x2, #SO_REUSEADDR
        adrp x3, one@PAGE
        add x3, x3, one@PAGEOFF
        mov x4, #4
        bl _setsockopt

        // bind(server_fd, &amp;amp;addr, sizeof(addr))
        mov x0, x19
        adrp x1, addr@PAGE
        add x1, x1, addr@PAGEOFF
        mov x2, #ADDR_LEN
        bl _bind

        // listen(server_fd, BACKLOG)
        mov x0, x19
        mov x1, #BACKLOG
        bl _listen

// Accept-Write-Close loop
accept_loop:
        // accept(server_fd, NULL, NULL)
        mov x0, x19
        mov x1, #0
        mov x2, #0
        bl _accept
        mov x20, x0 // client_fd

        // write(client_fd, response, RESP_LEN)
        mov x0, x20
        adrp x1, response@PAGE
        add x1, x1, response@PAGEOFF
        mov x2, #RESP_LEN
        bl _write

        // close(client_fd)
        mov x0, x20
        bl _close

        b accept_loop // handle next connection forever

// ------------------------------------------------------------
// Data
// ------------------------------------------------------------
        .data
        .align 4

// sockaddr_in for 0.0.0.0:8585
addr:
        .byte 16 // sin_len
        .byte AF_INET // sin_family
        .hword 0x8921 // sin_port (8585 in network byte order)
        .word 0 // sin_addr (0.0.0.0)
        .quad 0 // sin_zero[8]

// setsockopt value 1
one:
        .word 1

// Prebuilt HTTP/1.1 response
response:
        .ascii "HTTP/1.1 200 OK\r\n"
        .ascii "Content-Type: text/html; charset=utf-8\r\n"
        .ascii "Content-Length: 74\r\n"
        .ascii "Connection: close\r\n"
        .ascii "\r\n"
        .ascii "&amp;lt;!doctype html&amp;gt;&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from Assembler :)&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;\n"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Build and Run Your Creation
&lt;/h2&gt;

&lt;p&gt;Open your terminal, navigate to where you saved the file, and run these commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Build the Executable
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clang -o http_asm http_server_arm64_macos.s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command tells &lt;code&gt;clang&lt;/code&gt; to assemble our &lt;code&gt;.s&lt;/code&gt; file and link it against the necessary system libraries, creating an executable named &lt;code&gt;http_asm&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Run the Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./http_asm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your terminal will now hang—that's a good thing! It's blocked on the &lt;code&gt;accept&lt;/code&gt; call, waiting for a connection. macOS might pop up a security prompt asking to allow incoming network connections; you'll need to approve it.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Test It!
&lt;/h3&gt;

&lt;p&gt;Open a &lt;em&gt;new&lt;/em&gt; terminal window and use &lt;code&gt;curl&lt;/code&gt; to connect to your server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:8585
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the glorious output: &lt;code&gt;&amp;lt;!doctype html&amp;gt;&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from Assembler :)&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;/code&gt;. You did it! You served a web page with pure assembly. To stop the server, go back to its terminal and press &lt;code&gt;Ctrl+C&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febks0otaxcv6gx9xhbqd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febks0otaxcv6gx9xhbqd.webp" alt="Image" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Welcome Back to Reality
&lt;/h2&gt;

&lt;p&gt;We've journeyed to the lowest levels of software to build something we use every day. While you won't be deploying this to production, you now have a much deeper appreciation for what's happening when you type &lt;code&gt;app.listen(8585)&lt;/code&gt; in your favorite framework. You've seen the sockets, the binding, the endless loop—the fundamental mechanics of the internet, written in the language of the machine itself. For more details on the instructions, check out the &lt;a href="https://developer.arm.com/documentation/102374/0101/Introduction" rel="noopener noreferrer"&gt;ARMv8-A Instruction Set Architecture&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/arm" rel="noopener noreferrer"&gt;#arm&lt;/a&gt; &lt;a href="https://suddo.io/tag/assembler" rel="noopener noreferrer"&gt;#assembler&lt;/a&gt; &lt;a href="https://suddo.io/tag/fun" rel="noopener noreferrer"&gt;#fun&lt;/a&gt; &lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>computerscience</category>
      <category>programming</category>
      <category>showdev</category>
    </item>
    <item>
      <title>A quantum computer ran for 2 hours, and the reason why is a total game-changer</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Fri, 03 Oct 2025 14:52:14 +0000</pubDate>
      <link>https://forem.com/herasimau/a-quantum-computer-ran-for-2-hours-and-the-reason-why-is-a-total-game-changer-2eg4</link>
      <guid>https://forem.com/herasimau/a-quantum-computer-ran-for-2-hours-and-the-reason-why-is-a-total-game-changer-2eg4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54jwufbpf1ln96ap34jq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54jwufbpf1ln96ap34jq.webp" alt="Image" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A group of physicists from Harvard and MIT just built a quantum computer that ran continuously for &lt;strong&gt;more than two hours&lt;/strong&gt;. Although that doesn't sound like much compared with regular computers (servers routinely run 24/7 for months, if not years), it is a huge breakthrough in quantum computing. As reported by &lt;a href="https://www.thecrimson.com/article/2025/10/2/quantum-computing-breakthrough/" rel="noopener noreferrer"&gt;The Harvard Crimson&lt;/a&gt;, most current quantum computers run for only a few milliseconds, with record-breaking machines only able to operate for a little over 10 seconds.&lt;/p&gt;

&lt;p&gt;Although two hours is still a bit limited, researchers say that the concept behind this could allow future quantum computers to run for much longer, maybe even indefinitely. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There is still a way to go and scale from where we are now, but the roadmap is now clear based on the breakthrough experiments that we’ve done here at Harvard.&lt;/p&gt;

&lt;p&gt;Tout T. Wang, Research Associate&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  The Challenge of Qubits
&lt;/h3&gt;

&lt;p&gt;The main difference between “regular” and quantum computing is that the latter uses &lt;code&gt;qubits&lt;/code&gt;, which are encoded in subatomic particles, to hold and process data. But unlike classical hardware, which retains information even without power, quantum computers can lose these qubits in a process called &lt;code&gt;“atom loss”&lt;/code&gt;. This results in information loss and, eventually, system failure.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: &lt;a href="https://www.thecrimson.com/article/2025/10/2/quantum-computing-breakthrough/" rel="noopener noreferrer"&gt;The Harvard Crimson&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt; &lt;a href="https://suddo.io/tag/quantum" rel="noopener noreferrer"&gt;#quantum&lt;/a&gt; &lt;a href="https://suddo.io/tag/harvard" rel="noopener noreferrer"&gt;#harvard&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Data Is More Valuable Than Money: JetBrains Changes Licenses on Code From Real Projects</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Fri, 03 Oct 2025 13:47:37 +0000</pubDate>
      <link>https://forem.com/herasimau/data-is-more-valuable-than-money-jetbrains-changes-licenses-on-code-from-real-projects-ji4</link>
      <guid>https://forem.com/herasimau/data-is-more-valuable-than-money-jetbrains-changes-licenses-on-code-from-real-projects-ji4</guid>
      <description>&lt;p&gt;Major players are increasingly understanding that the gold of the 21st century is not oil or subscriptions, but data. And JetBrains demonstrates this particularly clearly. The company is willing to forgo quick profits and give away licenses for free — just to gain access to the unique 'fuel' for its AI models.&lt;/p&gt;

&lt;p&gt;Most LLMs are trained on public datasets that are far from real-world work scenarios. This leads to 'hallucinations' and errors on complex projects. JetBrains wants to fix this &lt;a href="https://blog.jetbrains.com/blog/2025/09/30/detailed-data-sharing-for-better-ai/" rel="noopener noreferrer"&gt;and collect real signals&lt;/a&gt; — code editing history, terminal commands, AI queries, and responses.&lt;/p&gt;

&lt;p&gt;What they came up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instead of almost $1000 for an annual corporate &lt;a href="https://www.jetbrains.com/lp/data-collection-program-for-organizations/" rel="noopener noreferrer"&gt;All Products Pack subscription&lt;/a&gt; (access to all IDEs), companies can get it for free.&lt;/li&gt;
&lt;li&gt;The price: allow JetBrains to collect work data — code snippets, terminal commands, editing history, and AI queries.&lt;/li&gt;
&lt;li&gt;This data will be used to train JetBrains' own language models.&lt;/li&gt;
&lt;li&gt;The data collection also applies to academic and open-source licenses (with an option to opt out in the settings).&lt;/li&gt;
&lt;li&gt;JetBrains promises that data will be stored in compliance with GDPR, with no third-party access.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;In fact, JetBrains is giving away licenses for free today to have an advantage in the race for AI tools tomorrow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/jetbrains" rel="noopener noreferrer"&gt;#jetbrains&lt;/a&gt; &lt;a href="https://suddo.io/tag/ai" rel="noopener noreferrer"&gt;#ai&lt;/a&gt; &lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt;&lt;/p&gt;

</description>
      <category>news</category>
      <category>data</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Buying mattresses for office sleepovers: San Francisco AI startups are switching to the Chinese '996' schedule due to the...</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Fri, 03 Oct 2025 12:56:50 +0000</pubDate>
      <link>https://forem.com/herasimau/buying-mattresses-for-office-sleepovers-san-francisco-ai-startups-are-switching-to-the-chinese-4gok</link>
      <guid>https://forem.com/herasimau/buying-mattresses-for-office-sleepovers-san-francisco-ai-startups-are-switching-to-the-chinese-4gok</guid>
      <description>&lt;p&gt;This is a 9:00 AM to 9:00 PM schedule, six days a week, which the Supreme Court of China &lt;a href="https://www.reuters.com/world/china/chinese-authorities-say-overtime-996-policy-is-illegal-2021-08-27/" rel="noopener noreferrer"&gt;banned&lt;/a&gt; in 2021.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lv7bdjrnp2m7mmfh0m6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lv7bdjrnp2m7mmfh0m6.webp" alt="Employees of the insurance AI startup Corgi, who work 'seven days a week,' collectively bought eight new mattresses, pillows, and sheets for the office." width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Employees of the insurance AI startup Corgi, who work 'seven days a week,' collectively bought eight new mattresses, pillows, and sheets for the office.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Index Ventures partner Martin Mignot was one of the first to &lt;a href="https://www.linkedin.com/posts/martinmignot_forget-9-to-5-996-is-the-new-startup-standard-activity-7335258912662683649-s6R0/" rel="noopener noreferrer"&gt;notice&lt;/a&gt; in the spring of 2025 that the 72-hour work week had 'quietly become the norm' in Silicon Valley. The founders and executives he spoke with tell candidates upfront that they will have to work a '996' schedule, adding: 'If you have hobbies that will get in the way, you are not a good fit for us.'&lt;/li&gt;
&lt;li&gt;They explain that 'you can't let productivity drop at this pace' of neural network development, so as not to miss out on 'huge opportunities.'&lt;/li&gt;
&lt;li&gt;Job postings for startups integrating neural networks into their products specify a &lt;a href="https://www.builtinsf.com/job/associate-account-executive-aae/2602513" rel="noopener noreferrer"&gt;schedule&lt;/a&gt; from Monday to Saturday or irregular hours—&lt;a href="https://www.ycombinator.com/companies/corgi/jobs/YJZRDYx-full-stack-engineer-backend-heavy-metrics-obsessed-and-slightly-unhinged" rel="noopener noreferrer"&gt;every day&lt;/a&gt; if necessary. The company &lt;a href="https://www.ycombinator.com/companies/corgi" rel="noopener noreferrer"&gt;Corgi&lt;/a&gt;, which automates insurance services, &lt;a href="https://www.linkedin.com/posts/wuseokjung_everyone-at-corgi-insurance-yc-s24-gets-activity-7345853786310995970-bJXe?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAAB2NrCEBBvHXL8Ma2NGXatZVi-msoF4EQqA" rel="noopener noreferrer"&gt;gifts&lt;/a&gt; new employees mattresses to sleep in the office. 'We work seven days a week and sometimes stay late,' writes its head of development, Josh Jung.&lt;/li&gt;
&lt;li&gt;LifeX Ventures partner and founder of another insurance startup, CoverWallet, Iñaki Berenguer &lt;a href="https://qz.com/silicon-valley-996-ai-startup-workers-weekends-china-san-francisco" rel="noopener noreferrer"&gt;told&lt;/a&gt; Quartz that when the company started in 2016, only directors and a 'small circle of senior employees' might work on Sundays. Now, in 'many' AI startups in San Francisco, Sunday is a mandatory day for meetings and strategy discussions 'for all employees.'&lt;/li&gt;
&lt;li&gt;Corporate bank card provider Ramp &lt;a href="https://ramp.com/velocity/san-francisco-tech-workers-996-schedule?utm_source=linkedin" rel="noopener noreferrer"&gt;reported&lt;/a&gt; a rise in transactions for delivery and takeout food on Saturdays from January to August 2025, compared to the same periods in 2024 and 2023. According to the company, the 'surge' indirectly confirms that employees have started working from offices six days a week. However, the provided charts show only a minor increase of a fraction of a percent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxiid7il9f9lngliqdys8.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxiid7il9f9lngliqdys8.webp" alt="Source: Ramp" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: Ramp&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Berenguer believes the reason is the 'hyper-competitive' environment where dozens of startups are developing 'almost identical products.' The only advantage becomes speed and, as a consequence, a grueling schedule.&lt;/li&gt;
&lt;li&gt;Meanwhile, easing employees' work with AI agents is expensive or technically difficult. As an MIT study from August 2025 showed, 95% of the companies in the sample failed to improve their financial results after implementing AI assistants.&lt;/li&gt;
&lt;li&gt;Another factor is the reduction of jobs for junior specialists. According to &lt;a href="https://www.indeed.com/career-advice/news/new-grads-shift-entry-level-job-expectations" rel="noopener noreferrer"&gt;data&lt;/a&gt; from the job search service Indeed for July 2025, the number of junior positions in the tech industry decreased by 36% from 2020 to 2025. Since 2024, the unemployment rate among young professionals (under 24) in this sector has risen by 7%.&lt;/li&gt;
&lt;li&gt;Due to the smaller number of available vacancies, employees may agree to an irregular workday, writes Quartz.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt; &lt;a href="https://suddo.io/tag/996" rel="noopener noreferrer"&gt;#996&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Get into ChatGPT, Perplexity, and Google AI Answers: A Practical Guide for GEO</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Fri, 03 Oct 2025 11:57:17 +0000</pubDate>
      <link>https://forem.com/herasimau/how-to-get-into-chatgpt-perplexity-and-google-ai-answers-a-practical-guide-for-geo-d1i</link>
      <guid>https://forem.com/herasimau/how-to-get-into-chatgpt-perplexity-and-google-ai-answers-a-practical-guide-for-geo-d1i</guid>
      <description>&lt;p&gt;Holding a top position in Google's organic search results used to be the ultimate goal for any business. It was a near-guarantee of visibility, clicks, and consistent traffic. &lt;strong&gt;That guarantee is now gone.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof8pbdh93jr6mjit5u4r.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof8pbdh93jr6mjit5u4r.webp" alt="Image" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rise of AI-powered search features, like Google's AI Overviews and conversational answers from platforms like ChatGPT and Perplexity, has fundamentally changed the search landscape. These systems don't just show a list of links; they synthesize information from multiple sources to provide a direct, comprehensive answer at the top of the page.&lt;/p&gt;

&lt;p&gt;As a result, even websites with flawless technical SEO, expert-written content, and top rankings are experiencing significant traffic drops. They are becoming invisible, bypassed by AI that answers user questions before a user ever has a chance to click on a traditional search result.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The rules of the game have changed radically,&lt;/strong&gt; and promotion strategies stuck in 2020 are no longer effective.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The number one position in organic search results loses 34.5% of clicks when an AI Overview is present above it.&lt;/p&gt;

&lt;p&gt;Ahrefs Independent Analysis&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Think about that number. A site can be in first place, but &lt;strong&gt;a third of its potential traffic evaporates&lt;/strong&gt;. Simply because artificial intelligence answered the user's question before they could even get to the link.&lt;/p&gt;

&lt;p&gt;The situation is even more serious than it first appears. According to &lt;a href="https://www.searchenginejournal.com/impact-of-ai-overviews-how-publishers-need-to-adapt/556843/" rel="noopener noreferrer"&gt;a detailed analysis by Pew Research Center&lt;/a&gt;, users click on results only 8% of the time when an AI Overview is present, compared to 15% without it—a &lt;strong&gt;46.7% drop in click-through rate&lt;/strong&gt;. And &lt;a href="https://www.searchenginejournal.com/impact-of-ai-overviews-how-publishers-need-to-adapt/556843/" rel="noopener noreferrer"&gt;data from Similarweb&lt;/a&gt; records an even more alarming trend: the growth of so-called &lt;em&gt;zero-click searches&lt;/em&gt; from 56% to 69% between May 2024 and May 2025.&lt;/p&gt;

&lt;p&gt;This is happening &lt;em&gt;right now&lt;/em&gt;, as you read this article. Not in a year. Not sometime in the future. &lt;strong&gt;Today.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The question is no longer academic but critical for business survival: how do you get into these AI answers? How do you become the source that ChatGPT, Perplexity, Claude, and Google AI Overviews cite and recommend?&lt;/p&gt;

&lt;h3&gt;
  
  
  GEO: The New Optimization Discipline You Need to Know About
&lt;/h3&gt;

&lt;p&gt;A new term has emerged that will change the digital marketing industry in the coming years: &lt;strong&gt;GEO, or Generative Engine Optimization&lt;/strong&gt;. This is the practice of optimizing content specifically for generative artificial intelligence systems.&lt;/p&gt;

&lt;p&gt;Let's break down the fundamental difference. Classic SEO works with algorithms that rank web pages and show you a list of ten blue links. GEO works fundamentally differently: it optimizes content for &lt;a href="https://suddo.io" rel="noopener noreferrer"&gt;Large Language Models&lt;/a&gt; (LLMs), which don't just rank pages. They actively select three to five of the most authoritative sources, extract key information, synthesize it, and generate a &lt;em&gt;single, coherent answer&lt;/em&gt; directly in the search interface.&lt;/p&gt;

&lt;p&gt;The difference is not just technical—&lt;em&gt;it's fundamental.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Google shows a list and leaves the final choice to you. ChatGPT or Perplexity make the decision &lt;em&gt;for you&lt;/em&gt; based on their criteria of authoritativeness and relevance. They autonomously decide which three to five sites out of millions are reliable enough to be included. If you're not one of them, the user may never even know your site exists.&lt;/p&gt;

&lt;p&gt;Researchers from Princeton University, Georgia Tech, and others published &lt;a href="https://arxiv.org/abs/2311.09735" rel="noopener noreferrer"&gt;a foundational scientific paper titled "GEO: Generative Engine Optimization."&lt;/a&gt; Their rigorous analysis proved that &lt;strong&gt;proper GEO can increase content visibility in AI-generated answers by up to 40%.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The result of their work is unambiguous: only certain sites with proper optimization regularly appear in AI citations. The rest remain invisible to generative systems.&lt;/p&gt;

&lt;p&gt;This isn't because the content is low quality or the site is slow. It's because the content is not structured in a way that Large Language Models can efficiently process, analyze, and extract key information in the fractions of a second they have to work.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Large Language Models Choose Sources
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Speed of information extraction&lt;/strong&gt; is the first and most critical selection factor.&lt;/p&gt;

&lt;p&gt;LLMs operate in real-time with strict constraints. When you ask a question, the system can't leisurely study every potential source. It gets a list of relevant pages and must decide whether to include or exclude each one in &lt;em&gt;milliseconds&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Imagine two websites answering the same question:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Site A (The Slow Site):&lt;/strong&gt; Information is buried in a 2,000-word wall of text. Key facts like price and features are hidden in long paragraphs. Headings are vague. To get answers, the LLM must read and parse the entire thing, a slow and resource-intensive process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Site B (The Fast Site):&lt;/strong&gt; The page has a clear, logical structure with descriptive headings like 'Price and Plans' and 'Technical Specifications'. Most importantly, it uses structured data from &lt;a href="http://Schema.org" rel="noopener noreferrer"&gt;Schema.org&lt;/a&gt; in a &lt;code&gt;JSON-LD&lt;/code&gt; format. The LLM sees this block, instantly parses the data, and gets all the critical information in milliseconds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which source will the system choose? The answer is obvious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured data from &lt;a href="http://Schema.org" rel="noopener noreferrer"&gt;Schema.org&lt;/a&gt; is a priority signal of source quality.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Schema Markup helps Microsoft's Large Language Models understand and interpret the content of web pages.&lt;/p&gt;

&lt;p&gt;Fabrice Canel, Principal Product Manager at Microsoft Bing&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This isn't just a small improvement. A &lt;a href="https://www.schemaapp.com/schema-markup/the-semantic-value-of-schema-markup-in-2025/" rel="noopener noreferrer"&gt;benchmark study by Data World&lt;/a&gt; shows that LLMs using structured data achieve an answer accuracy level &lt;strong&gt;300% higher&lt;/strong&gt; than models working only with unstructured text. That's a threefold superiority.&lt;/p&gt;

&lt;p&gt;This is why implementing &lt;a href="http://Schema.org" rel="noopener noreferrer"&gt;Schema.org&lt;/a&gt; is moving from the 'nice to have' category to 'critically necessary for survival'. Yet, according to &lt;a href="https://www.epicnotion.com/blog/faq-schema-in-2025/" rel="noopener noreferrer"&gt;available data&lt;/a&gt;, a colossal &lt;strong&gt;87.6% of all websites&lt;/strong&gt; on the internet ignore structured data. Each of these sites is losing potential visibility in AI search every single day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E-E-A-T signals of trust and authoritativeness in the age of AI.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI systems are designed to avoid citing unreliable or misleading sources. That's why they rely on the concept of &lt;strong&gt;E-E-A-T&lt;/strong&gt;, which stands for &lt;strong&gt;E&lt;/strong&gt;xperience, &lt;strong&gt;E&lt;/strong&gt;xpertise, &lt;strong&gt;A&lt;/strong&gt;uthoritativeness, and &lt;strong&gt;T&lt;/strong&gt;rustworthiness.&lt;/p&gt;

&lt;p&gt;A site that clearly indicates its authors, their qualifications, and provides full company contact information gains a huge competitive advantage over anonymous content of unknown origin. It's a powerful signal that the information can be trusted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Seven Key Factors for Getting into AI Answers
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/pdf/2311.09735" rel="noopener noreferrer"&gt;The detailed Princeton University study&lt;/a&gt; demonstrated that some GEO methods are far more effective than others. Here are seven factors that actually work.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive structured data from &lt;a href="http://Schema.org" rel="noopener noreferrer"&gt;Schema.org&lt;/a&gt;.&lt;/strong&gt; This is the language you use to communicate directly with AI. Use &lt;code&gt;Product Schema&lt;/code&gt; for products, &lt;code&gt;Service Schema&lt;/code&gt; for services, &lt;code&gt;Article Schema&lt;/code&gt; for articles, and &lt;code&gt;FAQPage Schema&lt;/code&gt; for Q&amp;amp;A sections. Correct implementation can lead to significant increases in visibility and click-through rates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Systematic citation of authoritative sources.&lt;/strong&gt; The Princeton study found that linking to authoritative studies, official documents, and recognized experts led to an impressive &lt;strong&gt;115.1% increase in visibility&lt;/strong&gt; for some sites. LLMs are programmed to trust content that backs up its claims.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structuring content in a question-and-answer format.&lt;/strong&gt; Adding FAQ sections with real user questions and concise answers significantly increases the likelihood of being cited. Start article sections with the specific questions your audience is asking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Saturating content with specific statistics.&lt;/strong&gt; LLMs love specific numbers and measurable data. Instead of 'many companies use this', write 'according to a 2024 Gartner study, 47% of B2B companies have implemented this'. Be precise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Including direct quotes from recognized experts.&lt;/strong&gt; Adding quotes from verified industry experts, with their names and titles, signals that your content is based on expert opinion, not just a random retelling of information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explicitly demonstrating timeliness through dates.&lt;/strong&gt; Always indicate the publication and update dates of your content. Use the &lt;code&gt;datePublished&lt;/code&gt; and &lt;code&gt;dateModified&lt;/code&gt; fields in &lt;code&gt;Article Schema&lt;/code&gt;. AI systems prioritize fresh, current information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flawless technical accessibility and performance.&lt;/strong&gt; Core Web Vitals, page load speed, and mobile optimization all matter. A slow or buggy site might simply be skipped by an AI system due to a timeout.&lt;/li&gt;
&lt;/ol&gt;
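To illustrate the freshness advice above, here is a minimal sketch of an Article object in JSON-LD with explicit datePublished and dateModified fields; every value is a placeholder example, not a prescription:

```javascript
// Article JSON-LD with explicit freshness signals; all values are placeholders.
const articleSchema = {
  '@context': 'https://schema.org',
  '@type': 'Article',
  headline: 'A Practical Guide to GEO',
  datePublished: '2025-01-15',
  dateModified: '2025-06-02',
  author: { '@type': 'Person', name: 'Jane Doe' }
};

// Serialize for embedding in the page head.
const articleJsonLd = JSON.stringify(articleSchema, null, 2);
console.log(articleJsonLd);
```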

&lt;h3&gt;
  
  
  Step-by-Step GEO Implementation Checklist
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Week One: Audit and Prioritization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify your 20-30 most critical pages (based on traffic and conversions).&lt;/li&gt;
&lt;li&gt;Check if these pages appear in AI Overviews for your target queries. You can do this manually or with tools like &lt;a href="https://www.seoclarity.net/research/ai-overviews-impact" rel="noopener noreferrer"&gt;seoClarity&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Analyze your top competitors. What structured data are they using?&lt;/li&gt;
&lt;li&gt;Use the &lt;a href="https://search.google.com/test/rich-results" rel="noopener noreferrer"&gt;Google Rich Results Test&lt;/a&gt; and the &lt;a href="https://validator.schema.org" rel="noopener noreferrer"&gt;Schema Markup Validator&lt;/a&gt; to check for errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Week Two: Critical Markup Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Focus on the highest-impact schema for your most important pages. For e-commerce, this is &lt;code&gt;Product Schema&lt;/code&gt;. For B2B, it's &lt;code&gt;Service Schema&lt;/code&gt;. For content, it's &lt;code&gt;Article Schema&lt;/code&gt; and &lt;code&gt;FAQ Schema&lt;/code&gt;. Use the &lt;code&gt;JSON-LD&lt;/code&gt; format exclusively—this is &lt;a href="https://www.searchenginejournal.com/technical-seo/schema/" rel="noopener noreferrer"&gt;Google's official recommendation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week Three: Strengthen E-E-A-T Signals&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create detailed author biographies and mark them up with &lt;code&gt;Person Schema&lt;/code&gt;. Include photos, titles, experience, and links to professional profiles. In parallel, expand your &lt;code&gt;Organization Schema&lt;/code&gt; with your company's history, address, contact info, awards, and social media links.&lt;/p&gt;
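A minimal Person sketch for such an author biography might look like this; the name, title, and profile URLs are placeholders:

```javascript
// Minimal Person JSON-LD for an author bio; all values are placeholders.
const authorSchema = {
  '@context': 'https://schema.org',
  '@type': 'Person',
  name: 'Jane Doe',
  jobTitle: 'Senior SEO Analyst',
  image: 'https://example.com/authors/jane-doe.jpg',
  sameAs: [
    'https://www.linkedin.com/in/jane-doe',
    'https://x.com/janedoe'
  ]
};

const personJsonLd = JSON.stringify(authorSchema);
console.log(personJsonLd);
```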

&lt;p&gt;&lt;strong&gt;Week Four: Monitor and Scale&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Systematically track your pages' appearance in AI Overviews and check for mentions in ChatGPT, Perplexity, and Claude. If you see a positive trend after a month, it's a signal to scale your GEO efforts across the entire site, using programmatic generation for large projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Critical Risks and Honest Limitations
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Risk One: Manual Penalties.&lt;/strong&gt; Google is crystal clear: any discrepancy between your structured data and the visible content on the page is considered manipulation. Marking up a fake 5-star rating or an incorrect price is a guaranteed path to a penalty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risk Two: No Absolute Guarantees.&lt;/strong&gt; GEO &lt;em&gt;significantly increases&lt;/em&gt; the probability of being cited, but it's not a 100% guarantee. AI systems use dozens of factors, and structured data is just one piece of the puzzle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risk Three: Visibility Doesn't Always Equal Clicks.&lt;/strong&gt; This is a critical point. Many users will get their answer from the AI Overview and never click through to your site. Your brand gets mentioned, but it may not generate direct traffic.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Only 1% of users actually click on the links provided within an AI Overview as sources of information.&lt;/p&gt;

&lt;p&gt;Pew Research Center&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Finally, remember the trade-off. Implementing high-quality structured markup is a significant investment of time and resources. The alternative, however, is a gradual and steady loss of visibility in an AI-driven world.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion: The Investment in Future Visibility Begins Today
&lt;/h3&gt;

&lt;p&gt;The world of digital marketing has changed irrevocably, because the way we search for information has changed fundamentally.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://xponent21.com/insights/googles-ai-overviews-surpass-50-of-queries-doubling-since-august-2024/" rel="noopener noreferrer"&gt;current data from Xponent21&lt;/a&gt;, Google's AI Overviews now appear in &lt;strong&gt;more than 50% of all search queries&lt;/strong&gt;, double the rate from just eight months ago. This trend will only intensify.&lt;/p&gt;

&lt;p&gt;The question is no longer &lt;em&gt;whether&lt;/em&gt; to adapt. The real question is: will you implement GEO before your competitors do? Because AI answers typically cite only three to five sources. &lt;strong&gt;The spots on this exclusive list are limited.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the task seems daunting, start small. Choose your ten most important pages. Implement basic, correct markup. Track your metrics. See the results for yourself.&lt;/p&gt;

&lt;p&gt;GEO is not a magic bullet. It is a technically sound and strategic way to communicate with machines in their native language of structured data. In a world where AI is the main intermediary between your content and your audience, mastering this language is no longer an option—it's a basic requirement for survival.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt; &lt;a href="https://suddo.io/tag/seo" rel="noopener noreferrer"&gt;#seo&lt;/a&gt; &lt;a href="https://suddo.io/tag/geo" rel="noopener noreferrer"&gt;#geo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>marketing</category>
      <category>chatgpt</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>Red Hat confirmed a breach of its internal GitLab server</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Fri, 03 Oct 2025 08:13:47 +0000</pubDate>
      <link>https://forem.com/herasimau/red-hat-confirmed-a-breach-of-its-internal-gitlab-server-22m3</link>
      <guid>https://forem.com/herasimau/red-hat-confirmed-a-breach-of-its-internal-gitlab-server-22m3</guid>
      <description>&lt;p&gt;Red Hat &lt;a href="https://www.bleepingcomputer.com/news/security/red-hat-confirms-security-incident-after-hackers-breach-gitlab-instance/" rel="noopener noreferrer"&gt;announced&lt;/a&gt; a breach of the company's internal GitLab server. The ransomware group Crimson Collective claims to have stolen nearly 570 GB of data from 28,000 internal development repositories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivsl0i6egfvui23ghs1f.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivsl0i6egfvui23ghs1f.webp" alt="Image" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This data allegedly includes about 800 customer engagement reports (CERs), which may contain sensitive information, including infrastructure details, configuration data, authentication tokens, etc., that could be used to breach customer networks.&lt;/p&gt;

&lt;p&gt;Initially, Red Hat stated it had encountered a security incident related to its consulting business. They noted that there was no reason to believe that this security issue affected any other services or products.&lt;/p&gt;

&lt;p&gt;The company has now &lt;a href="https://access.redhat.com/articles/7132207" rel="noopener noreferrer"&gt;confirmed&lt;/a&gt; that the security incident was related to a data leak from a GitLab instance used exclusively for Red Hat Consulting projects.&lt;/p&gt;

&lt;p&gt;The hackers themselves told BleepingComputer that they carried out the attack about two weeks ago. They allegedly found authentication tokens, full database URIs, and other sensitive information in Red Hat's code and CERs to gain access to customer infrastructure. The hacker group also published a full list of allegedly stolen GitLab repositories and a list of CERs from 2020 to 2025 on Telegram. It includes organizations and agencies such as Bank of America, T-Mobile, AT&amp;amp;T, Fidelity, Kaiser, Mayo Clinic, Walmart, Costco, the U.S. Naval Surface Warfare Center, the Federal Aviation Administration, the House of Representatives, and many others.&lt;/p&gt;

&lt;p&gt;The hackers stated that they tried to contact Red Hat but received no response other than a request to submit a vulnerability report to the security team. According to them, the created ticket was repeatedly forwarded to other individuals, including employees from the legal and security departments of the company.&lt;/p&gt;

&lt;p&gt;The company told BleepingComputer: 'Upon discovering the leak, we immediately launched a thorough investigation, revoked the unauthorized party's access, isolated the instance, and contacted the relevant authorities. Our ongoing investigation has shown that an unauthorized third party accessed and copied some data from this instance. We have implemented additional security measures designed to prevent further access and contain the issue.'&lt;/p&gt;

&lt;p&gt;Red Hat confirmed that the incident involved CER reports but noted that the documents generally do not contain personal information. The company is currently contacting the affected customers.&lt;/p&gt;

&lt;p&gt;GitLab reported that neither the platform nor user accounts were compromised, emphasizing that the incident only affected a self-managed Community Edition instance and that customers are responsible for securing such installations.&lt;/p&gt;

&lt;p&gt;Previously, Red Hat introduced the Red Hat Enterprise Linux for Business Developers initiative for the free use of the Red Hat Enterprise Linux 10 distribution in enterprises for the purpose of developing and testing applications. Each participant in the Red Hat Developer program is given the opportunity to run up to 25 instances of the distribution in test environments for free.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt; &lt;a href="https://suddo.io/tag/security" rel="noopener noreferrer"&gt;#security&lt;/a&gt; &lt;a href="https://suddo.io/tag/databreach" rel="noopener noreferrer"&gt;#databreach&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>news</category>
      <category>opensource</category>
    </item>
    <item>
      <title>OpenSSL 3.6.0 Released</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Fri, 03 Oct 2025 05:43:34 +0000</pubDate>
      <link>https://forem.com/herasimau/openssl-360-released-4c5h</link>
      <guid>https://forem.com/herasimau/openssl-360-released-4c5h</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo285gjiunj379tuvta3e.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo285gjiunj379tuvta3e.webp" alt="Image" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On October 1, 2025, the open-source project &lt;a href="https://www.openssl.org/" rel="noopener noreferrer"&gt;OpenSSL 3.6.0&lt;/a&gt; was &lt;a href="https://openssl-library.org/post/2025-10-01-3.6-release-announcement/" rel="noopener noreferrer"&gt;released&lt;/a&gt;. The cryptographic library gains new encryption and key management algorithms, kernel-level TLS support on the client side on Linux, an updated FIPS module, and Certificate Management Protocol (CMP) integration.&lt;/p&gt;

&lt;p&gt;The project's source code is written in C and Perl and is &lt;a href="https://github.com/openssl/openssl" rel="noopener noreferrer"&gt;distributed&lt;/a&gt; under the Apache 2.0 license. The release of OpenSSL 3.0.0 took place in September 2021. OpenSSL 3.4.0 was released at the end of 2024. OpenSSL 3.5.0 was introduced in April 2025.&lt;/p&gt;

&lt;p&gt;The OpenSSL 3.6 release is classified as a standard support build, with updates released for 13 months. The OpenSSL 3.5.0 release is classified as a Long-Term Support (LTS) release, for which updates will be released for 5 years (until April 2030). Support for previous branches OpenSSL 3.3, 3.2 and 3.0 LTS &lt;a href="https://www.openssl.org/policies/releasestrat.html" rel="noopener noreferrer"&gt;will last&lt;/a&gt; until April 2026, November 2025, and September 2026, respectively.&lt;/p&gt;

&lt;p&gt;According to OpenNET, the main refinements and &lt;a href="https://github.com/openssl/openssl/blob/master/NEWS.md" rel="noopener noreferrer"&gt;improvements in OpenSSL 3.6.0&lt;/a&gt; are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/openssl/openssl/pull/28278" rel="noopener noreferrer"&gt;Added&lt;/a&gt; support for the &lt;a href="https://docs.openssl.org/master/man3/EVP_SKEY/" rel="noopener noreferrer"&gt;EVP_SKEY&lt;/a&gt; (&lt;a href="https://docs.openssl.org/master/man1/openssl-skeyutl/" rel="noopener noreferrer"&gt;Symmetric KEY&lt;/a&gt;) structure to represent symmetric keys as opaque objects. Unlike raw keys, which are represented by a byte array, the key structure in EVP_SKEY is abstracted and contains additional metadata. EVP_SKEY can be used in &lt;a href="https://github.com/openssl/openssl/pull/26702" rel="noopener noreferrer"&gt;encryption&lt;/a&gt;, key exchange, and &lt;a href="https://github.com/openssl/openssl/pull/28369" rel="noopener noreferrer"&gt;key derivation&lt;/a&gt; (&lt;a href="https://en.wikipedia.org/wiki/Key_derivation_function" rel="noopener noreferrer"&gt;KDF&lt;/a&gt;) functions. The functions EVP_KDF_CTX_set_SKEY(), EVP_KDF_derive_SKEY(), and EVP_PKEY_derive_SKEY() have been added to work with EVP_SKEY keys;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/openssl/openssl/pull/22357" rel="noopener noreferrer"&gt;Added&lt;/a&gt; support for verifying digital signatures based on the &lt;a href="https://datatracker.ietf.org/doc/html/rfc8554" rel="noopener noreferrer"&gt;LMS&lt;/a&gt; (Leighton-Micali Signatures) scheme, which uses &lt;a href="https://en.wikipedia.org/wiki/Hash-based_cryptography" rel="noopener noreferrer"&gt;hash functions&lt;/a&gt; and tree-based hashing in the form of a Merkle Tree (each branch verifies all underlying branches and nodes). LMS digital signatures are resistant to quantum computer attacks and are designed to ensure the integrity of firmware and applications;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/openssl/openssl/pull/27571" rel="noopener noreferrer"&gt;Added&lt;/a&gt; support for &lt;a href="https://csrc.nist.gov/glossary/term/security_category" rel="noopener noreferrer"&gt;NIST security categories&lt;/a&gt; for &lt;a href="https://docs.openssl.org/3.1/man1/openssl-pkey/" rel="noopener noreferrer"&gt;PKEY&lt;/a&gt; object parameters (public and private keys). The security category is set via the security-category setting. The EVP_PKEY_get_security_category() function has been added to check the security level. The security level reflects resistance to quantum computer attacks and can take integer values from 0 to 5: 0 - implementation not resistant to quantum computer attacks; 1/3/5 - implementation does not preclude a quantum computer search for a key in a block cipher with a 128/192/256-bit key; 2/4 - implementation does not preclude a quantum computer search for a collision in a 256/384-bit hash).&lt;/li&gt;
&lt;li&gt;0 - implementation not resistant to quantum computer attacks;&lt;/li&gt;
&lt;li&gt;1/3/5 - implementation does not preclude a quantum computer search for a key in a block cipher with a 128/192/256-bit key;&lt;/li&gt;
&lt;li&gt;2/4 - implementation does not preclude a quantum computer search for a collision in a 256/384-bit hash).&lt;/li&gt;
&lt;li&gt;Added the &lt;a href="https://docs.openssl.org/master/man1/openssl-configutl/" rel="noopener noreferrer"&gt;openssl configutl&lt;/a&gt; command to process configuration files. The utility allows generating a consolidated file with all settings from a multi-file configuration with include directives;&lt;/li&gt;
&lt;li&gt;Added support for deterministic ECDSA digital signature generation to the FIPS crypto provider (the same signature is generated for the same input data), in accordance with the FIPS 186-5 standard requirements;&lt;/li&gt;
&lt;li&gt;Increased build environment requirements. A toolchain with ANSI-C support is no longer sufficient to build OpenSSL; a C-99 compliant compiler is now required;&lt;/li&gt;
&lt;li&gt;Functions related to the &lt;a href="https://docs.openssl.org/3.4/man3/EVP_PKEY_ASN1_METHOD/" rel="noopener noreferrer"&gt;EVP_PKEY_ASN1_METHOD&lt;/a&gt; structure have been deprecated;&lt;/li&gt;
&lt;li&gt;Support for the VxWorks platform has been discontinued.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;The new version of the project &lt;a href="https://openssl-library.org/news/secadv/20250930.txt" rel="noopener noreferrer"&gt;fixes&lt;/a&gt; the following vulnerabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://security-tracker.debian.org/tracker/CVE-2025-9230" rel="noopener noreferrer"&gt;CVE-2025-9230&lt;/a&gt; — a vulnerability in the decryption code for CMS messages encrypted with a password (PWRI). The vulnerability can lead to an out-of-bounds write and read, allowing an attacker to cause a crash or memory corruption in an application that uses OpenSSL to process CMS messages. Exploitation for code execution is not ruled out, but the severity of the issue is reduced by the fact that password-based encryption of CMS messages is very rarely used in practice. In addition to OpenSSL 3.6.0, the vulnerability is fixed in OpenSSL releases 3.5.4, 3.4.3, 3.3.5, 3.2.6, and 3.0.18. The issue has also been &lt;a href="https://www.mail-archive.com/announce@openbsd.org/msg00565.html" rel="noopener noreferrer"&gt;fixed&lt;/a&gt; in updates to LibreSSL 4.0.1 and 4.1.1, developed by the OpenBSD project;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://security-tracker.debian.org/tracker/CVE-2025-9231" rel="noopener noreferrer"&gt;CVE-2025-9231&lt;/a&gt; — the implementation of the SM2 algorithm is vulnerable to a side-channel attack that allows an attacker on systems with 64-bit ARM CPUs to reconstruct the private key by analyzing timing variations of specific computations. The attack could potentially be carried out remotely. The severity of the attack is reduced by the fact that OpenSSL does not directly support the use of certificates with SM2 keys in TLS;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://security-tracker.debian.org/tracker/CVE-2025-9232" rel="noopener noreferrer"&gt;CVE-2025-9232&lt;/a&gt; — a vulnerability in the built-in HTTP client implementation that leads to an out-of-bounds read when processing a specially crafted URL in HTTP Client functions. The issue only manifests when the no_proxy environment variable is set and can lead to an application crash.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/openssl" rel="noopener noreferrer"&gt;#openssl&lt;/a&gt; &lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt;&lt;/p&gt;

</description>
      <category>news</category>
      <category>opensource</category>
      <category>security</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Modern Node.js Patterns (2025)</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Thu, 02 Oct 2025 20:38:55 +0000</pubDate>
      <link>https://forem.com/herasimau/modern-nodejs-patterns-2025-2l0p</link>
      <guid>https://forem.com/herasimau/modern-nodejs-patterns-2025-2l0p</guid>
      <description>&lt;p&gt;Node.js has undergone an impressive transformation since its inception. If you've been writing Node.js for several years, you've likely witnessed this evolution yourself - from the era of callbacks and the widespread use of CommonJS to a modern, clean, and standardized approach to development.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxjdqpoh4gdfhpmrfo61.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxjdqpoh4gdfhpmrfo61.webp" alt="Image" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The changes have affected more than just the appearance - it's a fundamental shift in the very approach to server-side JavaScript development. Modern Node.js relies on web standards, reduces dependency on external libraries, and offers a more understandable and pleasant experience for developers.&lt;/p&gt;

&lt;p&gt;Let's explore what these changes are and why they are important for your applications in 2025.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Module System: ESM - The New Standard
&lt;/h2&gt;

&lt;p&gt;The module system is perhaps the most noticeable area of change. CommonJS served us faithfully for a long time, but now ES Modules (ESM) have become the clear winner, offering better tooling support and compliance with web standards.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Old Way (CommonJS)
&lt;/h3&gt;

&lt;p&gt;Previously, we organized modules like this. This approach required explicit exports and synchronous imports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// math.js
function add(a, b) {
  return a + b;
}
module.exports = { add };

// app.js
const { add } = require('./math');
console.log(add(2, 3));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This worked well enough, but it had its limitations: no support for static analysis or tree-shaking (removing unused code), and no alignment with browser standards.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Modern Approach (ES Modules with the Node: Prefix)
&lt;/h3&gt;

&lt;p&gt;Modern Node.js development relies on ES modules with an important addition - the &lt;code&gt;node:&lt;/code&gt; prefix for built-in modules. This explicit declaration helps avoid confusion and makes dependencies crystal clear:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// math.js
export function add(a, b) {
  return a + b;
}

// app.js
import { add } from './math.js';
import { readFile } from 'node:fs/promises'; // Modern node: prefix
import { createServer } from 'node:http';

console.log(add(2, 3));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;node:&lt;/code&gt; prefix is not just a convention. It's an explicit signal to both developers and tools that you are importing built-in Node.js modules, not packages from npm. This helps avoid potential conflicts and makes code dependencies more transparent.&lt;/p&gt;

&lt;h3&gt;
  
  
  Top-Level Await: Simplifying Initialization
&lt;/h3&gt;

&lt;p&gt;One of the most revolutionary features is &lt;code&gt;await&lt;/code&gt; at the top level of a module. You no longer need to wrap your entire application in an &lt;code&gt;async&lt;/code&gt; function just to use &lt;code&gt;await&lt;/code&gt; at the start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app.js - Clean initialization without wrapper functions
import { readFile } from 'node:fs/promises';

const config = JSON.parse(await readFile('config.json', 'utf8'));
const server = createServer(/* ... */);

console.log('App started with config:', config.appName);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This eliminates the common pattern of immediately-invoked async function expressions (IIFE) that was once ubiquitous. Now, your code becomes more linear and understandable.&lt;/p&gt;
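For contrast, here is a minimal sketch of the IIFE pattern that top-level await makes unnecessary; the inline loadConfig helper stands in for a real config.json read:

```javascript
// The old pattern: an immediately-invoked async function expression (IIFE)
// was the only way to use await during startup.
const loadConfig = async () =>
  JSON.parse('{"appName":"demo-app"}'); // stands in for readFile('config.json')

const started = (async () => {
  const config = await loadConfig();
  console.log('App started with config:', config.appName);
  return config.appName;
})();
```

Every startup step had to live inside the wrapper; top-level await removes that ceremony entirely.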

&lt;h2&gt;
  
  
  2. Built-in Web APIs: Fewer External Dependencies
&lt;/h2&gt;

&lt;p&gt;Node.js has seriously embraced web standards, integrating APIs familiar to web developers into its runtime. This means fewer external dependencies and more consistency between execution environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fetch API: No More Third-Party Libraries for HTTP Requests
&lt;/h3&gt;

&lt;p&gt;Remember the days when every project required &lt;code&gt;axios&lt;/code&gt;, &lt;code&gt;node-fetch&lt;/code&gt;, or similar libraries for HTTP handling? Those days are over. Node.js now includes the Fetch API by default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Old way - external dependencies required
const axios = require('axios');
const response = await axios.get('https://api.example.com/data');

// Modern way - built-in fetch with enhanced features
const response = await fetch('https://api.example.com/data');
const data = await response.json();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But the modern approach is not just about replacing your HTTP library. You also get built-in support for timeouts and request cancellation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function fetchData(url) {
  try {
    const response = await fetch(url, {
      signal: AbortSignal.timeout(5000) // Built-in timeout support
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }

    return await response.json();
  } catch (error) {
    if (error.name === 'TimeoutError') {
      throw new Error('Request timed out');
    }
    throw error;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach eliminates the need for third-party libraries for timeouts and provides a single, predictable error handling mechanism. The &lt;code&gt;AbortSignal.timeout()&lt;/code&gt; method is particularly elegant - it creates a signal that automatically aborts an operation after a specified time.&lt;/p&gt;

&lt;h3&gt;
  
  
  AbortController: Graceful Operation Cancellation
&lt;/h3&gt;

&lt;p&gt;Modern applications must be able to handle operation cancellations gracefully - whether initiated by the user or due to a timeout. &lt;code&gt;AbortController&lt;/code&gt; provides a standardized way to cancel operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Cancel long-running operations cleanly
const controller = new AbortController();

// Set up automatic cancellation
setTimeout(() =&amp;gt; controller.abort(), 10000);

try {
  const response = await fetch('https://slow-api.com/data', {
    signal: controller.signal
  });
  const data = await response.json();
  console.log('Data received:', data);
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Request was cancelled - this is expected behavior');
  } else {
    console.error('Unexpected error:', error);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach works across many Node.js APIs, not just with &lt;code&gt;fetch&lt;/code&gt;. You can use the same &lt;code&gt;AbortController&lt;/code&gt; for file operations, database queries, and any other asynchronous operations that support cancellation.&lt;/p&gt;
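As a sketch of sharing one signal across operations, here two timer-based waits from node:timers/promises stand in for slower I/O calls; the same pattern applies to readFile and other signal-aware APIs:

```javascript
import { setTimeout as sleep } from 'node:timers/promises';

// One controller whose signal is shared by several cancelable operations.
const controller = new AbortController();

// Abort everything after 50 ms.
setTimeout(() => controller.abort(), 50);

async function run() {
  try {
    // Both waits observe the same signal and reject together on abort.
    await Promise.all([
      sleep(10_000, undefined, { signal: controller.signal }),
      sleep(10_000, undefined, { signal: controller.signal })
    ]);
    return 'completed';
  } catch (error) {
    if (error.name === 'AbortError') return 'cancelled';
    throw error;
  }
}

const outcome = await run();
console.log('Outcome:', outcome); // 'cancelled'
```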

&lt;h2&gt;
  
  
  3. Built-in Testing: A Professional Approach Without External Dependencies
&lt;/h2&gt;

&lt;p&gt;Previously, testing meant choosing between Jest, Mocha, Ava, and other frameworks. Now, Node.js has a full-featured built-in testing environment, or test runner, that covers most needs without additional dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modern Testing with the Built-in Node.js Test Runner
&lt;/h3&gt;

&lt;p&gt;The built-in test runner offers a clean and clear API that feels modern and is fully functional:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// test/math.test.js
import { test, describe } from 'node:test';
import assert from 'node:assert';
import { add, multiply } from '../math.js';

describe('Math functions', () =&amp;gt; {
  test('adds numbers correctly', () =&amp;gt; {
    assert.strictEqual(add(2, 3), 5);
  });

  test('handles async operations', async () =&amp;gt; {
    const result = await multiply(2, 3);
    assert.strictEqual(result, 6);
  });

  test('throws on invalid input', () =&amp;gt; {
    assert.throws(() =&amp;gt; add('a', 'b'), /Invalid input/);
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What makes this tool particularly powerful is its seamless integration with the Node.js development process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run all tests with built-in runner
node --test

# Watch mode for development
node --test --watch

# Coverage reporting (Node.js 20+)
node --test --experimental-test-coverage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch mode is especially valuable during development - tests automatically restart when code changes, providing instant feedback without additional setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Advanced Asynchronous Patterns
&lt;/h2&gt;

&lt;p&gt;Although &lt;code&gt;async/await&lt;/code&gt; is not new, its usage patterns have evolved significantly. Modern Node.js development effectively uses these patterns, combining them with new APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Async/Await with Enhanced Error Handling
&lt;/h3&gt;

&lt;p&gt;The modern approach to error handling combines &lt;code&gt;async/await&lt;/code&gt; with flexible recovery and parallel execution strategies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { readFile, writeFile } from 'node:fs/promises';

async function processData() {
  try {
    // Parallel execution of independent operations
    const [config, userData] = await Promise.all([
      readFile('config.json', 'utf8'),
      fetch('/api/user').then(r =&amp;gt; r.json())
    ]);

    const processed = processUserData(userData, JSON.parse(config));
    await writeFile('output.json', JSON.stringify(processed, null, 2));

    return processed;
  } catch (error) {
    // Structured error logging with context
    console.error('Processing failed:', {
      error: error.message,
      stack: error.stack,
      timestamp: new Date().toISOString()
    });
    throw error;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern combines parallel execution for improved performance with centralized and detailed error handling. &lt;code&gt;Promise.all()&lt;/code&gt; ensures independent operations run simultaneously, while &lt;code&gt;try/catch&lt;/code&gt; allows you to handle all possible errors in one place with full context.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modern Event Handling with AsyncIterator
&lt;/h3&gt;

&lt;p&gt;Event-driven programming has moved beyond simple handlers (&lt;code&gt;on&lt;/code&gt;, &lt;code&gt;addListener&lt;/code&gt;). &lt;code&gt;AsyncIterator&lt;/code&gt; provides a more powerful way to process event streams:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { EventEmitter, once } from 'node:events';

class DataProcessor extends EventEmitter {
  async *processStream() {
    for (let i = 0; i &amp;lt; 10; i++) {
      this.emit('data', `chunk-${i}`);
      yield `processed-${i}`;
      // Simulate async processing time
      await new Promise(resolve =&amp;gt; setTimeout(resolve, 100));
    }
    this.emit('end');
  }
}

// Consume events as an async iterator
const processor = new DataProcessor();
for await (const result of processor.processStream()) {
  console.log('Processed:', result);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach is particularly powerful because it combines the flexibility of events with a controlled execution flow through asynchronous iteration. You can process events sequentially, naturally handle backpressure, and cleanly break the processing loop when needed.&lt;/p&gt;
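&lt;p&gt;You do not even need a custom async generator: the built-in &lt;code&gt;events.on()&lt;/code&gt; helper turns any &lt;code&gt;EventEmitter&lt;/code&gt; into an async iterator. A minimal sketch:&lt;/p&gt;

```javascript
import { EventEmitter, on } from 'node:events';
import { setTimeout as delay } from 'node:timers/promises';

const emitter = new EventEmitter();

// Produce events asynchronously in the background
(async () => {
  for (const chunk of ['a', 'b', 'c']) {
    await delay(10);
    emitter.emit('data', chunk);
  }
})();

// events.on() yields the listener arguments as an array per event
const received = [];
for await (const [chunk] of on(emitter, 'data')) {
  received.push(chunk);
  if (received.length === 3) break; // break detaches the listener cleanly
}
console.log(received); // [ 'a', 'b', 'c' ]
```

&lt;p&gt;Passing &lt;code&gt;{ signal }&lt;/code&gt; as a third argument to &lt;code&gt;on()&lt;/code&gt; lets an &lt;code&gt;AbortController&lt;/code&gt; end the loop from outside instead of the &lt;code&gt;break&lt;/code&gt;.&lt;/p&gt;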

&lt;h2&gt;
  
  
  5. Advanced Streams with Web Standards Integration
&lt;/h2&gt;

&lt;p&gt;Streams remain one of the most powerful features of Node.js, but they have now evolved to support web standards and improve compatibility with other environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Readable, Transform } from 'node:stream';
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';

// Create transform streams with clean, focused logic
const upperCaseTransform = new Transform({
  objectMode: true,
  transform(chunk, encoding, callback) {
    this.push(chunk.toString().toUpperCase());
    callback();
  }
});

// Process files with robust error handling
async function processFile(inputFile, outputFile) {
  try {
    await pipeline(
      createReadStream(inputFile),
      upperCaseTransform,
      createWriteStream(outputFile)
    );
    console.log('File processed successfully');
  } catch (error) {
    console.error('Pipeline failed:', error);
    throw error;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;pipeline&lt;/code&gt; function with promise support ensures automatic resource cleanup and error handling, eliminating many of the traditional complexities of working with streams.&lt;/p&gt;
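&lt;p&gt;&lt;code&gt;pipeline()&lt;/code&gt; also accepts an &lt;code&gt;AbortSignal&lt;/code&gt; and async functions as pipeline stages. This self-contained sketch uses in-memory streams instead of files, so it runs without any setup:&lt;/p&gt;

```javascript
import { Readable, Transform } from 'node:stream';
import { pipeline } from 'node:stream/promises';

const upper = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  }
});

const ac = new AbortController();
const chunks = [];

try {
  await pipeline(
    Readable.from(['hello ', 'world']),
    upper,
    // an async function is a valid pipeline destination
    async function (source) {
      for await (const chunk of source) chunks.push(chunk.toString());
    },
    { signal: ac.signal } // calling ac.abort() would tear everything down
  );
} catch (err) {
  if (err.name !== 'AbortError') throw err;
}
console.log(chunks.join('')); // HELLO WORLD
```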

&lt;h3&gt;
  
  
  Compatibility with Web Streams
&lt;/h3&gt;

&lt;p&gt;Modern Node.js can work seamlessly with Web Streams, providing better compatibility with browser code and edge runtimes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create a Web Stream (compatible with browsers)
const webReadable = new ReadableStream({
  start(controller) {
    controller.enqueue('Hello ');
    controller.enqueue('World!');
    controller.close();
  }
});

// Convert between Web Streams and Node.js streams
const nodeStream = Readable.fromWeb(webReadable);
const backToWeb = Readable.toWeb(nodeStream);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This compatibility is especially important for applications that need to run in different execution environments or share code between the server and the client.&lt;/p&gt;
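&lt;p&gt;The built-in &lt;code&gt;node:stream/consumers&lt;/code&gt; helpers take this further: &lt;code&gt;text()&lt;/code&gt;, &lt;code&gt;json()&lt;/code&gt;, and friends consume either kind of stream with a single call. A short sketch:&lt;/p&gt;

```javascript
import { text, json } from 'node:stream/consumers';
import { Readable } from 'node:stream';

// Works on a browser-style ReadableStream...
const webStream = new ReadableStream({
  start(controller) {
    controller.enqueue('Hello ');
    controller.enqueue('World!');
    controller.close();
  }
});
const greeting = await text(webStream);

// ...and on a classic Node.js Readable alike
const nodeStream = Readable.from(['{"ok":', 'true}']);
const parsed = await json(nodeStream);

console.log(greeting); // Hello World!
console.log(parsed);   // { ok: true }
```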

&lt;h2&gt;
  
  
  6. Worker Threads: True Parallelism for CPU-Intensive Tasks
&lt;/h2&gt;

&lt;p&gt;The single-threaded nature of JavaScript is not always a good fit, especially for heavy CPU-bound computations. Worker threads allow you to use multiple processor cores efficiently while maintaining the simplicity of JavaScript.&lt;/p&gt;

&lt;h3&gt;
  
  
  Non-Blocking Background Processing
&lt;/h3&gt;

&lt;p&gt;Worker threads are ideal for CPU-intensive tasks that would otherwise block the main event loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// worker.js - Isolated computation environment
import { parentPort, workerData } from 'node:worker_threads';

function fibonacci(n) {
  if (n &amp;lt; 2) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}

const result = fibonacci(workerData.number);
parentPort.postMessage(result);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main application can now delegate heavy computations without blocking other operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// main.js - Non-blocking delegation
import { Worker } from 'node:worker_threads';
import { fileURLToPath } from 'node:url';

async function calculateFibonacci(number) {
  return new Promise((resolve, reject) =&amp;gt; {
    const worker = new Worker(
      fileURLToPath(new URL('./worker.js', import.meta.url)),
      { workerData: { number } }
    );

    worker.on('message', resolve);
    worker.on('error', reject);
    worker.on('exit', (code) =&amp;gt; {
      if (code !== 0) {
        reject(new Error(`Worker stopped with exit code ${code}`));
      }
    });
  });
}

// Your main application remains responsive
console.log('Starting calculation...');
const result = await calculateFibonacci(40);
console.log('Fibonacci result:', result);
console.log('Application remained responsive throughout!');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach allows your application to use multiple processor cores while preserving the familiar programming model with &lt;code&gt;async/await&lt;/code&gt;.&lt;/p&gt;
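&lt;p&gt;Because each worker call is just a promise, several CPU-bound computations can run on separate cores simultaneously. The self-contained sketch below inlines the worker code with &lt;code&gt;eval: true&lt;/code&gt; so everything fits in one file (in real projects you would keep the separate &lt;code&gt;worker.js&lt;/code&gt; shown above):&lt;/p&gt;

```javascript
import { Worker } from 'node:worker_threads';

function runFib(n) {
  const source = `
    const { parentPort, workerData } = require('node:worker_threads');
    function fib(n) { return n > 1 ? fib(n - 1) + fib(n - 2) : n; }
    parentPort.postMessage(fib(workerData));
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(source, { eval: true, workerData: n });
    worker.on('message', resolve);
    worker.on('error', reject);
  });
}

// Both computations run in parallel on separate threads
const [a, b] = await Promise.all([runFib(20), runFib(21)]);
console.log(a, b); // 6765 10946
```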
&lt;h2&gt;
  
  
  7. Improved Developer Experience
&lt;/h2&gt;

&lt;p&gt;Modern Node.js prioritizes developer convenience by offering built-in tools that previously required external packages or complex setup.&lt;/p&gt;
&lt;h3&gt;
  
  
  Watch Mode and Environment Variable Management
&lt;/h3&gt;

&lt;p&gt;The development workflow has become much simpler thanks to the built-in watch mode and support for &lt;code&gt;.env&lt;/code&gt; files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  name: modern-node-app,
  type: module,
  engines: {
    node: &amp;gt;=20.0.0
  },
  scripts: {
    dev: node --watch --env-file=.env app.js,
    test: node --test --watch,
    start: node app.js
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--watch&lt;/code&gt; flag eliminates the need for &lt;code&gt;nodemon&lt;/code&gt;, and &lt;code&gt;--env-file&lt;/code&gt; gets rid of the dependency on &lt;code&gt;dotenv&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As a result, your development environment becomes simpler and faster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// .env file automatically loaded with --env-file
// DATABASE_URL=postgres://localhost:5432/mydb
// API_KEY=secret123

// app.js - Environment variables available immediately
console.log('Connecting to:', process.env.DATABASE_URL);
console.log('API Key loaded:', process.env.API_KEY ? 'Yes' : 'No');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These features make development more comfortable by reducing configuration overhead and eliminating the need for constant restarts.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Modern Security and Performance Monitoring
&lt;/h2&gt;

&lt;p&gt;Security and performance are now first-class citizens in Node.js, with built-in tools for monitoring and managing application behavior.&lt;/p&gt;

&lt;h3&gt;
  
  
  Permission Model for Enhanced Security
&lt;/h3&gt;

&lt;p&gt;The experimental permission model allows you to restrict an application's access to various resources, following the principle of least privilege:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run with restricted file system access
node --experimental-permission --allow-fs-read=./data --allow-fs-write=./logs app.js

# Network restrictions
node --experimental-permission --allow-net=api.example.com app.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is especially important for applications that handle untrusted code or must comply with information security requirements.&lt;/p&gt;
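&lt;p&gt;When the permission model is active, code can also probe its own permissions at runtime through &lt;code&gt;process.permission&lt;/code&gt;. The sketch below guards for the flag being absent, so it runs either way:&lt;/p&gt;

```javascript
// process.permission only exists when Node.js was started with
// --experimental-permission
function canRead(path) {
  if (!process.permission) return true; // model disabled: everything allowed
  return process.permission.has('fs.read', path);
}

if (canRead('./data')) {
  console.log('Reading ./data is permitted');
} else {
  console.log('No read access to ./data, degrading gracefully');
}
```

&lt;p&gt;This makes it possible to degrade features gracefully instead of crashing on &lt;code&gt;ERR_ACCESS_DENIED&lt;/code&gt;.&lt;/p&gt;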
&lt;h3&gt;
  
  
  Built-in Performance Monitoring
&lt;/h3&gt;

&lt;p&gt;Performance monitoring is now built directly into the platform, eliminating the need for external tools to monitor processes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { PerformanceObserver, performance } from 'node:perf_hooks';

// Set up automatic performance monitoring
const obs = new PerformanceObserver((list) =&amp;gt; {
  for (const entry of list.getEntries()) {
    if (entry.duration &amp;gt; 100) { // Log slow operations
      console.log(`Slow operation detected: ${entry.name} took ${entry.duration}ms`);
    }
  }
});
obs.observe({ entryTypes: ['measure', 'function', 'http', 'dns'] }); // 'measure' covers the custom measurements below

// Instrument your own operations
async function processLargeDataset(data) {
  performance.mark('processing-start');

  const result = await heavyProcessing(data);

  performance.mark('processing-end');
  performance.measure('data-processing', 'processing-start', 'processing-end');

  return result;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows you to track application performance without external dependencies, helping to identify bottlenecks early in the development process.&lt;/p&gt;
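&lt;p&gt;The &lt;code&gt;'function'&lt;/code&gt; entry type used above is produced by &lt;code&gt;performance.timerify()&lt;/code&gt;, which wraps a function so that every call is measured automatically:&lt;/p&gt;

```javascript
import { PerformanceObserver, performance } from 'node:perf_hooks';

function sumRange(n) {
  let total = 0;
  for (let i = 1; i !== n + 1; i++) total += i;
  return total;
}

// timerify() returns a wrapper that emits a 'function' entry per call
const timedSum = performance.timerify(sumRange);

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name} took ${entry.duration.toFixed(3)}ms`);
  }
  obs.disconnect();
});
obs.observe({ entryTypes: ['function'] });

timedSum(1_000_000); // the observer logs: sumRange took ...ms
```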

&lt;h2&gt;
  
  
  9. Application Distribution and Deployment
&lt;/h2&gt;

&lt;p&gt;Modern Node.js simplifies the application distribution process thanks to features like single executable application builds and improved packaging.&lt;/p&gt;
&lt;h3&gt;
  
  
  Single Executable Applications
&lt;/h3&gt;

&lt;p&gt;You can now bundle a Node.js application into a single executable file, which simplifies deployment and distribution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a self-contained executable
node --experimental-sea-config sea-config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration file defines how to build your application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  main: app.js,
  output: my-app-bundle.blob,
  disableExperimentalSEAWarning: true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is particularly useful for CLI tools, desktop applications, or any case where you want to distribute your application without requiring a separate Node.js installation.&lt;/p&gt;
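&lt;p&gt;Note that the &lt;code&gt;--experimental-sea-config&lt;/code&gt; step only produces the blob. Per the Node.js SEA documentation, the executable itself is assembled by copying the &lt;code&gt;node&lt;/code&gt; binary and injecting the blob with the &lt;code&gt;postject&lt;/code&gt; tool (the blob name below assumes the configuration shown above):&lt;/p&gt;

```shell
# 1. Generate the blob defined by sea-config.json
node --experimental-sea-config sea-config.json

# 2. Copy the node binary as the basis of the executable
cp "$(command -v node)" my-app

# 3. Inject the blob (the sentinel fuse value is from the Node.js SEA docs)
npx postject my-app NODE_SEA_BLOB my-app-bundle.blob \
  --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2

# 4. Run the self-contained binary
./my-app
```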
&lt;h2&gt;
  
  
  10. Modern Error Handling and Diagnostics
&lt;/h2&gt;

&lt;p&gt;Error handling has evolved beyond simple &lt;code&gt;try/catch&lt;/code&gt; blocks to include structured handling and advanced diagnostic tools.&lt;/p&gt;
&lt;h3&gt;
  
  
  Structured Error Handling
&lt;/h3&gt;

&lt;p&gt;Modern applications benefit from contextual, structured error handling, which gives better insight into problems and makes debugging easier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class AppError extends Error {
  constructor(message, code, statusCode = 500, context = {}) {
    super(message);
    this.name = 'AppError';
    this.code = code;
    this.statusCode = statusCode;
    this.context = context;
    this.timestamp = new Date().toISOString();
  }

  toJSON() {
    return {
      name: this.name,
      message: this.message,
      code: this.code,
      statusCode: this.statusCode,
      context: this.context,
      timestamp: this.timestamp,
      stack: this.stack
    };
  }
}

// Usage with rich context
throw new AppError(
  'Database connection failed',
  'DB_CONNECTION_ERROR',
  503,
  { host: 'localhost', port: 5432, retryAttempt: 3 }
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach provides much more detailed error information for debugging and monitoring while maintaining a consistent error handling interface throughout the application.&lt;/p&gt;
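&lt;p&gt;Structured errors pair well with the standard &lt;code&gt;cause&lt;/code&gt; option (available since Node.js 16.9), which lets a high-level error carry the original low-level failure with it:&lt;/p&gt;

```javascript
// Wrap a low-level failure while preserving it via the standard
// { cause } option of the Error constructor.
function parseConfig(raw) {
  try {
    return JSON.parse(raw);
  } catch (err) {
    throw new Error('Config file is not valid JSON', { cause: err });
  }
}

try {
  parseConfig('{ not json');
} catch (err) {
  console.error(err.message);                     // high-level context
  console.error('caused by:', err.cause.message); // original SyntaxError
}
```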

&lt;h3&gt;
  
  
  Advanced Diagnostics
&lt;/h3&gt;

&lt;p&gt;Node.js includes advanced diagnostic tools that allow you to understand what is happening inside your application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import diagnostics_channel from 'node:diagnostics_channel';

// Create custom diagnostic channels
const dbChannel = diagnostics_channel.channel('app:database');
const httpChannel = diagnostics_channel.channel('app:http');

// Subscribe to diagnostic events
dbChannel.subscribe((message) =&amp;gt; {
  console.log('Database operation:', {
    operation: message.operation,
    duration: message.duration,
    query: message.query
  });
});

// Publish diagnostic information
async function queryDatabase(sql, params) {
  const start = performance.now();

  try {
    const result = await db.query(sql, params);

    dbChannel.publish({
      operation: 'query',
      sql,
      params,
      duration: performance.now() - start,
      success: true
    });

    return result;
  } catch (error) {
    dbChannel.publish({
      operation: 'query',
      sql,
      params,
      duration: performance.now() - start,
      success: false,
      error: error.message
    });
    throw error;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This diagnostic data can be sent to monitoring systems, saved in logs for analysis, or used for automated responses to issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Modern Package Management and Module Resolution
&lt;/h2&gt;

&lt;p&gt;Dependency management and module resolution have become more flexible and advanced, with improved support for monorepos, internal packages, and flexible import schemes.&lt;/p&gt;
&lt;h3&gt;
  
  
  Subpath Imports and Internal Module Resolution
&lt;/h3&gt;

&lt;p&gt;Modern Node.js supports subpath imports (the &lt;code&gt;imports&lt;/code&gt; field in &lt;code&gt;package.json&lt;/code&gt;, similar in spirit to browser import maps), allowing you to create clean and understandable references to internal modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  imports: {
    #config: ./src/config/index.js,
    #utils/*: ./src/utils/*.js,
    #db: ./src/database/connection.js
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a clean and stable interface for internal modules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Clean internal imports that don't break when you reorganize
import config from '#config';
import { logger, validator } from '#utils/common';
import db from '#db';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Such internal imports simplify refactoring and allow for a clear distinction between internal and external dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dynamic Imports for Flexible Loading
&lt;/h3&gt;

&lt;p&gt;Dynamic imports allow for the implementation of complex loading patterns, including conditional loading and code splitting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Load features based on configuration or environment
async function loadDatabaseAdapter() {
  const dbType = process.env.DATABASE_TYPE || 'sqlite';

  try {
    // assumes a "#db/adapters/*" entry in the package.json "imports" field
    const adapter = await import(`#db/adapters/${dbType}`);
    return adapter.default;
  } catch (error) {
    console.warn(`Database adapter ${dbType} not available, falling back to sqlite`);
    const fallback = await import('#db/adapters/sqlite');
    return fallback.default;
  }
}

// Conditional feature loading
async function loadOptionalFeatures() {
  const features = [];

  if (process.env.ENABLE_ANALYTICS === 'true') {
    const analytics = await import('#features/analytics');
    features.push(analytics.default);
  }

  if (process.env.ENABLE_MONITORING === 'true') {
    const monitoring = await import('#features/monitoring');
    features.push(monitoring.default);
  }

  return features;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach allows you to create applications that adapt to their runtime environment and load only the code that is truly necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking Ahead: Key Ideas of Modern Node.js (2025)
&lt;/h2&gt;

&lt;p&gt;Looking at the current state of Node.js development, we can identify several key principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Focus on web standards: use &lt;code&gt;node:&lt;/code&gt; prefixes, &lt;code&gt;fetch&lt;/code&gt;, &lt;code&gt;AbortController&lt;/code&gt;, and Web Streams for better compatibility and fewer dependencies&lt;/li&gt;
&lt;li&gt;Use built-in tools: the test runner, watch mode, and &lt;code&gt;.env&lt;/code&gt; file support reduce reliance on third-party packages and simplify configuration&lt;/li&gt;
&lt;li&gt;Think in terms of modern async patterns: &lt;code&gt;top-level await&lt;/code&gt;, structured error handling, and &lt;code&gt;async iterators&lt;/code&gt; make code cleaner and easier to maintain&lt;/li&gt;
&lt;li&gt;Strategically apply worker threads: for CPU-intensive tasks, worker threads provide true parallelism without blocking the main thread&lt;/li&gt;
&lt;li&gt;Use the platform's progressive features: permission models, diagnostic channels, and built-in monitoring help create reliable and observable applications&lt;/li&gt;
&lt;li&gt;Optimize the developer experience: watch mode, built-in testing, and import maps make the development process more enjoyable&lt;/li&gt;
&lt;li&gt;Prepare for distribution: building single executable files and modern packaging simplify deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The transformation of Node.js from a simple JavaScript runtime into a full-fledged development platform is impressive. By adopting modern approaches, you are not just writing 'trendy' code; you are building applications that are maintainable, performant, and compatible with the wider JavaScript ecosystem.&lt;/p&gt;

&lt;p&gt;The beauty of modern Node.js is that it evolves while maintaining backward compatibility. These patterns can be adopted gradually, and they work perfectly alongside existing code. Whether it's a new project or modernizing an old one, you get a clear path to more reliable and modern Node.js development.&lt;/p&gt;

&lt;p&gt;As we move through 2025, Node.js continues to evolve, but the patterns discussed here already provide a solid foundation for building modern and resilient applications for years to come.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/nodejs" rel="noopener noreferrer"&gt;#nodejs&lt;/a&gt;&lt;/p&gt;

</description>
      <category>backend</category>
      <category>javascript</category>
      <category>node</category>
      <category>designpatterns</category>
    </item>
    <item>
      <title>Granite 4: IBM introduces a line of small but fast LLMs</title>
      <dc:creator>Leanid Herasimau</dc:creator>
      <pubDate>Thu, 02 Oct 2025 19:22:05 +0000</pubDate>
      <link>https://forem.com/herasimau/granite-4-ibm-introduces-a-line-of-small-but-fast-llms-oad</link>
      <guid>https://forem.com/herasimau/granite-4-ibm-introduces-a-line-of-small-but-fast-llms-oad</guid>
      <description>&lt;p&gt;While OpenAI, Anthropic, and Meta are competing with billions of parameters, IBM has suddenly decided to play a different game by introducing Granite-4.0 — a set of small but nimble LLMs.&lt;/p&gt;

&lt;p&gt;Instead of giants with hundreds of billions of parameters, IBM has rolled out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Micro (3B) — an ultra-lightweight version that can easily run on a laptop.&lt;/li&gt;
&lt;li&gt;Tiny (7B/1B active) — a compact MoE that saves memory and tokens.&lt;/li&gt;
&lt;li&gt;Small (32B/9B active) — the largest in the series, but still small compared to top-tier LLMs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9t8hf0x5qn20jh4whdvj.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9t8hf0x5qn20jh4whdvj.webp" alt="Image" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key feature of this model series is the hybrid Mamba architecture: the model deactivates unnecessary blocks and runs faster, while maintaining a long context (up to 128K).&lt;/p&gt;

&lt;p&gt;Perhaps this "reverse move" by IBM will become the new trend: fewer parameters, but more practical utility?&lt;/p&gt;

&lt;p&gt;Granite-4.0 H-Small and Micro surprisingly outperform giants like Llama-3.3-70B and Qwen3-8B in Retrieval-Augmented Generation (73 and 72 versus 61 and 55). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49mcq1sk1edkukyf60m9.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49mcq1sk1edkukyf60m9.webp" alt="Image" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;H-Micro and H-Tiny occupy the top part of the efficiency chart: they maintain an accuracy above 70% with very modest VRAM requirements. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzp1mkwhqyvx1xsgb4eq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzp1mkwhqyvx1xsgb4eq.webp" alt="Image" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Granite-4.0 H-Small with a score of 0.86 on IF-Eval is approaching top models like Llama 4 Maverick and Kimi K2, while Micro holds a solid position in the middle of the table alongside Mistral and OLMo. For models of this size, this is a very serious statement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvdpsxf222ypv7w11p7g.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvdpsxf222ypv7w11p7g.webp" alt="Image" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By the way, these models &lt;a href="https://blog.continue.dev/granite-4-models-available-on-continue/" rel="noopener noreferrer"&gt;are already available&lt;/a&gt; in Continue. The models are on &lt;a href="https://huggingface.co/collections/unsloth/granite-40-68ddf64b4a8717dc22a9322d" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://suddo.io/tag/news" rel="noopener noreferrer"&gt;#news&lt;/a&gt; &lt;a href="https://suddo.io/tag/ibm" rel="noopener noreferrer"&gt;#ibm&lt;/a&gt;&lt;/p&gt;

</description>
      <category>news</category>
      <category>performance</category>
      <category>ai</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
