<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: TusharIbtekar</title>
    <description>The latest articles on Forem by TusharIbtekar (@ibtekar).</description>
    <link>https://forem.com/ibtekar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F784211%2F6df1cb98-7b50-4eb4-94f3-1a8ce0ed1afa.jpg</url>
      <title>Forem: TusharIbtekar</title>
      <link>https://forem.com/ibtekar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ibtekar"/>
    <language>en</language>
    <item>
      <title>Understanding Goroutines, Concurrency, and Scheduling in Go</title>
      <dc:creator>TusharIbtekar</dc:creator>
      <pubDate>Wed, 06 Aug 2025 19:03:48 +0000</pubDate>
      <link>https://forem.com/ibtekar/understanding-goroutines-concurrency-and-scheduling-in-go-2ooe</link>
      <guid>https://forem.com/ibtekar/understanding-goroutines-concurrency-and-scheduling-in-go-2ooe</guid>
      <description>&lt;p&gt;If we talk about go, one of the most powerful features that go gives us is probably Go’s concurrency. But what exactly happens under the hood when we spawn goroutines? Specially on modern multi-core processors? Let’s dive deep: &lt;/p&gt;

&lt;h2&gt;
  
  
  Concurrency vs Parallelism
&lt;/h2&gt;

&lt;p&gt;Before diving into Go’s internals, let’s get this out of the way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Concurrency&lt;/strong&gt; is the ability to structure a program as independently executing tasks. These tasks may not actually run at the same time, but they are designed to make progress independently. Concurrency, combined with context switching, gives us a flavor of parallelism: it makes it look as if processes are running at the same time, when in reality the CPU is jumping between them while saving their state in a Process Control Block (PCB) or Thread Control Block (TCB). How the CPU does this is a tale for another day.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallelism&lt;/strong&gt; means actually executing multiple tasks simultaneously.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Go enables &lt;strong&gt;concurrency by default&lt;/strong&gt; via goroutines. Whether this results in parallelism depends on our hardware and the Go runtime’s configuration via &lt;code&gt;GOMAXPROCS&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goroutine
&lt;/h2&gt;

&lt;p&gt;Let’s lift the lid. What exactly is a goroutine? It’s a lightweight, user-space thread of execution managed by the Go runtime. But here’s the catch: it is not like an OS thread. The key differences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extremely cheap to create (initial stack ~2KB)&lt;/li&gt;
&lt;li&gt;Scheduled by the Go runtime (cooperatively, with preemption when needed), not by the OS&lt;/li&gt;
&lt;li&gt;Capable of scaling into the millions without overwhelming the system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we write:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;doWork&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re instructing the Go scheduler to start a new goroutine that will run &lt;code&gt;doWork&lt;/code&gt; concurrently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go’s Scheduler: G, M, P Model
&lt;/h2&gt;

&lt;p&gt;Go uses an &lt;strong&gt;M:N scheduler&lt;/strong&gt;: many goroutines (&lt;code&gt;G&lt;/code&gt;) are multiplexed onto a smaller number of OS threads (&lt;code&gt;M&lt;/code&gt;), which are coordinated using logical processors (&lt;code&gt;P&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Components:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;G:&lt;/strong&gt; Goroutine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;M&lt;/strong&gt;: Machine (an actual OS thread)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;P&lt;/strong&gt;: Processor (logical context needed to execute Go code)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only an &lt;code&gt;M&lt;/code&gt; with an associated &lt;code&gt;P&lt;/code&gt; can execute Go code.&lt;/p&gt;

&lt;h2&gt;
  
  
  High-Level Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7xdn365hj58i2r94ucw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7xdn365hj58i2r94ucw.png" alt=" " width="800" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How They Work Together
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Each &lt;strong&gt;P&lt;/strong&gt; manages a &lt;strong&gt;local queue&lt;/strong&gt; of runnable goroutines.&lt;/li&gt;
&lt;li&gt;Each &lt;strong&gt;P&lt;/strong&gt; is attached to &lt;strong&gt;at most one M&lt;/strong&gt; (OS thread) at a time.&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;M&lt;/strong&gt; runs one goroutine (&lt;code&gt;G&lt;/code&gt;) at a time.&lt;/li&gt;
&lt;li&gt;If a goroutine blocks (e.g. on I/O), the M detaches, and the P is reassigned to another available M to continue execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Happens When We Start Many Goroutines?
&lt;/h2&gt;

&lt;p&gt;Suppose we spawn 1,000,000 goroutines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="m"&gt;1000000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;doWork&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a machine with 16 logical CPUs (logical CPUs are what we usually call CPU threads; an 8-core processor with hyper-threading exposes 16 of them), Go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initializes 16 &lt;code&gt;P&lt;/code&gt;s (by default &lt;code&gt;GOMAXPROCS&lt;/code&gt; = 16)&lt;/li&gt;
&lt;li&gt;Creates some &lt;code&gt;M&lt;/code&gt;s (OS threads) to execute goroutines&lt;/li&gt;
&lt;li&gt;Distributes goroutines to the &lt;code&gt;P&lt;/code&gt;s’ local run queues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each &lt;code&gt;P&lt;/code&gt; runs one goroutine at a time using an &lt;code&gt;M&lt;/code&gt;. As goroutines block or finish, the &lt;code&gt;P&lt;/code&gt; selects the next goroutine in its queue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concurrency Through Context Switching
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Context Switching Explained
&lt;/h3&gt;

&lt;p&gt;Since the number of goroutines is often much greater than the number of available &lt;code&gt;P&lt;/code&gt;s or CPU threads, Go uses &lt;strong&gt;context switching&lt;/strong&gt; to let them all make progress.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a goroutine blocks (e.g., on I/O or a channel), it is paused.&lt;/li&gt;
&lt;li&gt;The scheduler saves its state (program counter, stack pointer, etc.), much like an OS saves a thread’s TCB.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;P&lt;/code&gt; picks another runnable goroutine and resumes it.&lt;/li&gt;
&lt;li&gt;All of this is done in &lt;strong&gt;user space&lt;/strong&gt;, without needing a system call, making it fast.&lt;/li&gt;
&lt;/ul&gt;
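We can see this parking behavior with a plain channel (a minimal sketch): the main goroutine blocks on the receive, and the scheduler switches to the sender.

```go
package main

import "fmt"

func main() {
	ch := make(chan string)

	go func() {
		// Runs while main is parked on the receive below.
		ch <- "hello from a goroutine"
	}()

	// main blocks here; the scheduler parks it and resumes the sender,
	// all in user space, with no OS-level thread switch required.
	msg := <-ch
	fmt.Println(msg) // prints "hello from a goroutine"
}
```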

&lt;h3&gt;
  
  
  Single Core Example
&lt;/h3&gt;

&lt;p&gt;Even with just one CPU core:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only one goroutine can run at a time.&lt;/li&gt;
&lt;li&gt;Go scheduler switches between goroutines, giving the illusion of concurrency.&lt;/li&gt;
&lt;li&gt;This is achieved by cooperative and preemptive scheduling, context switching rapidly between runnable goroutines.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Ratios and Limits
&lt;/h2&gt;

&lt;h3&gt;
  
  
  P:M (Processor to Thread)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1:1 at a time&lt;/strong&gt;: A &lt;code&gt;P&lt;/code&gt; is bound to one &lt;code&gt;M&lt;/code&gt; (OS thread) at a time.&lt;/li&gt;
&lt;li&gt;If an &lt;code&gt;M&lt;/code&gt; blocks, the Go scheduler finds another idle &lt;code&gt;M&lt;/code&gt; to attach the &lt;code&gt;P&lt;/code&gt; to.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  P:G (Processor to Goroutines)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1:many&lt;/strong&gt;: Each &lt;code&gt;P&lt;/code&gt; maintains a queue of many runnable &lt;code&gt;G&lt;/code&gt;s.&lt;/li&gt;
&lt;li&gt;Only one runs at a time on the &lt;code&gt;P&lt;/code&gt;, but others wait in the queue.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  M:G (Thread to Goroutines)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1:1 at a time&lt;/strong&gt;: Each &lt;code&gt;M&lt;/code&gt; executes one goroutine at a time.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;M&lt;/code&gt; is not aware of the queue — the &lt;code&gt;P&lt;/code&gt; hands it a goroutine to run.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Work Stealing and Global Queue
&lt;/h2&gt;

&lt;p&gt;If a &lt;code&gt;P&lt;/code&gt;’s local queue is empty, it can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Steal work&lt;/strong&gt; from the queue of another &lt;code&gt;P&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pull work&lt;/strong&gt; from the global run queue (used as a fallback).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This ensures that all processors stay busy and that goroutines are distributed evenly across available resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Parallelism with Multi-Core CPUs
&lt;/h2&gt;

&lt;p&gt;On a processor that has 8 cores and 16 logical cores:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go sets &lt;code&gt;GOMAXPROCS = 16&lt;/code&gt; by default.&lt;/li&gt;
&lt;li&gt;This means up to &lt;strong&gt;16 goroutines can run truly in parallel&lt;/strong&gt; at any moment — one per logical core.&lt;/li&gt;
&lt;li&gt;The rest of the goroutines are scheduled cooperatively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So Go programs benefit from both &lt;strong&gt;parallelism (when hardware allows)&lt;/strong&gt; and &lt;strong&gt;concurrency (even when limited to one core)&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Whether we are running Go on a Raspberry Pi or a 32-core server, the same model adapts gracefully, letting us write clean concurrent code without managing thread pools ourselves (though we still have to get synchronization right and avoid race conditions).&lt;/p&gt;

&lt;p&gt;If you're curious to dig deeper, tools like &lt;code&gt;runtime/trace&lt;/code&gt;, &lt;code&gt;pprof&lt;/code&gt;, and &lt;code&gt;go tool trace&lt;/code&gt; can help you visualize how goroutines behave during execution.&lt;/p&gt;

</description>
      <category>go</category>
      <category>goroutines</category>
      <category>webdev</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Building a High-Performance Real-Time Chart in React: Lessons Learned</title>
      <dc:creator>TusharIbtekar</dc:creator>
      <pubDate>Tue, 05 Aug 2025 18:16:37 +0000</pubDate>
      <link>https://forem.com/ibtekar/building-a-high-performance-real-time-chart-in-react-lessons-learned-ij7</link>
      <guid>https://forem.com/ibtekar/building-a-high-performance-real-time-chart-in-react-lessons-learned-ij7</guid>
      <description>&lt;p&gt;Real-time data visualization is tricky. While many tutorials show you how to create basic charts, they often skip over the challenges of handling continuous data streams efficiently. Here's how I built a production-ready solution that handles thousands of data points while maintaining smooth performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Common Pitfalls
&lt;/h2&gt;

&lt;p&gt;Most basic real-time chart implementations look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const BasicChart = () =&amp;gt; {
  const [data, setData] = useState([]);
  const chartRef = useRef(null); // ref to the chart instance

  useEffect(() =&amp;gt; {
    // DON'T DO THIS
    websocket.on('newData', (value) =&amp;gt; {
      setData(prev =&amp;gt; [...prev, value]);
      chartRef.current?.update();
    });
  }, []);

  return &amp;lt;Line data={data} /&amp;gt;;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach has several problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every data point triggers a re-render&lt;/li&gt;
&lt;li&gt;Memory usage grows indefinitely&lt;/li&gt;
&lt;li&gt;Chart animations cause jank&lt;/li&gt;
&lt;li&gt;CPU usage spikes with frequent updates&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A Better Architecture
&lt;/h2&gt;

&lt;p&gt;The actual implementation combines WebSocket updates with REST API calls for the initial data load; what follows is a generalized version.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Smart Data Management
&lt;/h4&gt;

&lt;p&gt;First, let's define our constants and types:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// constants.ts
export const CHART_CONFIG = {
  UPDATE_DELAY: 100,        // Debounce delay in ms
  INITIAL_RANGE: 600000,    // Initial time window (10 minutes)
  POINT_LIMIT: 1000,        // Maximum points to fetch initially
  POINT_THRESHOLD: 3000,    // Threshold before data cleanup
  CHANGE_THRESHOLD: 1       // Minimum change to record new point
};

interface TimeseriesData {
  metric: string;
  values: number[];
  timestamps: number[];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Intelligent Data Processing
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;processNewDataPoint&lt;/code&gt; function merges incoming WebSocket values into the existing metric series. Here's how we handle new data points:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const processNewDataPoint = (
  existingData: TimeseriesData,
  newValue: number
): void =&amp;gt; {
  // First point: nothing to compare against yet
  if (existingData.values.length === 0) {
    existingData.values.push(newValue);
    existingData.timestamps.push(Date.now());
    return;
  }

  const lastValue = existingData.values[existingData.values.length - 1];

  // Only record significant changes
  const hasSignificantChange = Math.abs(newValue - lastValue) &amp;gt; CHART_CONFIG.CHANGE_THRESHOLD;

  if (hasSignificantChange) {
    existingData.values.push(newValue);
    existingData.timestamps.push(Date.now());
  } else {
    // Update the last timestamp without adding duplicate data
    existingData.timestamps[existingData.timestamps.length - 1] = Date.now();
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3. Optimized Chart Component
&lt;/h4&gt;

&lt;p&gt;The chart updates are optimized using &lt;code&gt;chartRef.current?.update('none')&lt;/code&gt; to skip animations. &lt;br&gt;
   Here's our main chart component with performance optimizations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const RealTimeChart: FC&amp;lt;RealTimeChartProps&amp;gt; = ({ metrics, dataSource }) =&amp;gt; {
  const chartRef = useRef&amp;lt;ChartJS&amp;gt;();
  const [activeMetrics, setActiveMetrics] = useState&amp;lt;string[]&amp;gt;([]);

  // Debounced chart update
  const updateChart = useCallback(
    debounce(() =&amp;gt; {
      chartRef.current?.update('none');
    }, CHART_CONFIG.UPDATE_DELAY),
    []
  );

  // Clean up the chart instance and the debounced function on unmount
  useEffect(() =&amp;gt; {
    return () =&amp;gt; {
      // Destroy the chart instance
      if (chartRef.current) {
        chartRef.current.destroy();
      }
      // Cancel any pending debounced update
      updateChart.cancel();
    };
  }, [updateChart]);

  useEffect(() =&amp;gt; {
    if (!dataSource) return;

    const handleNewData = (newData: MetricUpdate) =&amp;gt; {
      // Process each metric
      metrics.forEach(metric =&amp;gt; {
        if (!activeMetrics.includes(metric.id)) return;

        processNewDataPoint(metric, newData[metric.id]);
      });

      // Check if we need to clean up old data
      const shouldCleanup = metrics.some(
        metric =&amp;gt; metric.values.length &amp;gt; CHART_CONFIG.POINT_THRESHOLD
      );

      if (shouldCleanup) {
        cleanupHistoricalData();
      }

      updateChart();
    };

    dataSource.subscribe(handleNewData);
    return () =&amp;gt; dataSource.unsubscribe(handleNewData);
  }, [metrics, activeMetrics, dataSource]);

  return (
    &amp;lt;ChartContainer&amp;gt;
      &amp;lt;Line
        ref={chartRef}
        data={getChartData(metrics, activeMetrics)}
        options={getChartOptions()}
      /&amp;gt;
      &amp;lt;MetricSelector
        metrics={metrics}
        active={activeMetrics}
        onChange={setActiveMetrics}
      /&amp;gt;
    &amp;lt;/ChartContainer&amp;gt;
  );
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4. Performance-Optimized Chart Configuration
&lt;/h4&gt;

&lt;p&gt;Here's the chart configuration with the relevant performance settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const getChartOptions = (): ChartOptions&amp;lt;'line'&amp;gt; =&amp;gt; ({
  responsive: true,
  maintainAspectRatio: false,

  // Disable animations entirely
  animation: false,

  scales: {
    x: {
      type: 'time',
      time: {
        unit: 'minute',
        displayFormats: {
          minute: 'HH:mm'
        }
      },
      // Optimize tick display
      ticks: {
        maxTicksLimit: 8,
        source: 'auto'
      }
    },
    y: {
      type: 'linear',
      // Optimize grid lines
      grid: {
        drawBorder: false,
        drawTicks: false
      }
    }
  },

  elements: {
    // Disable points for performance
    point: {
      radius: 0
    },
    line: {
      tension: 0.3,
      borderWidth: 2
    }
  },

  plugins: {
    // Optimize tooltips
    tooltip: {
      animation: false,
      mode: 'nearest',
      intersect: false
    }
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Optimizations Explained
&lt;/h2&gt;

&lt;h4&gt;
  
  
  1. Selective Data Recording
&lt;/h4&gt;

&lt;p&gt;Instead of recording every data point, we only store values when there's a significant change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduces memory usage&lt;/li&gt;
&lt;li&gt;Maintains visual accuracy&lt;/li&gt;
&lt;li&gt;Improves processing performance&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  2. Efficient Updates
&lt;/h4&gt;

&lt;p&gt;The debounced update pattern prevents excessive renders:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Groups multiple data updates into a single render&lt;/li&gt;
&lt;li&gt;Reduces CPU usage&lt;/li&gt;
&lt;li&gt;Maintains smooth animations&lt;/li&gt;
&lt;/ul&gt;
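The examples above assume a lodash-style `debounce` helper. For completeness, a minimal sketch of one, including the `cancel` method the cleanup effect relies on:

```typescript
type Debounced<T extends (...args: any[]) => void> =
  ((...args: Parameters<T>) => void) & { cancel: () => void };

function debounce<T extends (...args: any[]) => void>(
  fn: T,
  delayMs: number
): Debounced<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;

  const debounced = ((...args: Parameters<T>) => {
    // Restart the timer on every call; fn fires only after a quiet period.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  }) as Debounced<T>;

  // Cancel any pending invocation (used on unmount).
  debounced.cancel = () => {
    if (timer !== undefined) clearTimeout(timer);
    timer = undefined;
  };

  return debounced;
}
```

With this in place, a burst of data events within `UPDATE_DELAY` collapses into a single `chart.update('none')`.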

&lt;h4&gt;
  
  
  3. Data Cleanup
&lt;/h4&gt;

&lt;p&gt;Implementing a point threshold system prevents memory issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitors total data points&lt;/li&gt;
&lt;li&gt;Triggers cleanup when threshold is reached&lt;/li&gt;
&lt;li&gt;Maintains consistent performance over time&lt;/li&gt;
&lt;/ul&gt;
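The `cleanupHistoricalData` function isn't shown above; here is a hypothetical sketch, assuming the simplest policy of keeping only the newest `POINT_LIMIT` entries once `POINT_THRESHOLD` is crossed:

```typescript
const CHART_CONFIG = { POINT_LIMIT: 1000, POINT_THRESHOLD: 3000 };

interface TimeseriesData {
  metric: string;
  values: number[];
  timestamps: number[];
}

function cleanupHistoricalData(series: TimeseriesData): void {
  if (series.values.length <= CHART_CONFIG.POINT_THRESHOLD) return;
  // Drop the oldest points, keeping only the most recent POINT_LIMIT entries.
  series.values = series.values.slice(-CHART_CONFIG.POINT_LIMIT);
  series.timestamps = series.timestamps.slice(-CHART_CONFIG.POINT_LIMIT);
}
```

A production version might instead downsample old data (e.g., keep one point per minute) so the historical shape of the curve survives the cleanup.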

&lt;h4&gt;
  
  
  4. Chart.js Optimizations
&lt;/h4&gt;

&lt;p&gt;Several Chart.js-specific optimizations improve performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disabled point rendering for smoother lines&lt;/li&gt;
&lt;li&gt;Removed unnecessary animations&lt;/li&gt;
&lt;li&gt;Optimized tooltip interactions&lt;/li&gt;
&lt;li&gt;Reduced tick density&lt;/li&gt;
&lt;li&gt;Simplified grid lines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;This implementation has several advantages over simpler approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory Efficient: Only stores necessary data points&lt;/li&gt;
&lt;li&gt;CPU Friendly: Minimizes renders and calculations&lt;/li&gt;
&lt;li&gt;Smooth Updates: No visual jank during updates&lt;/li&gt;
&lt;li&gt;Scale-Ready: Handles thousands of points efficiently&lt;/li&gt;
&lt;li&gt;User Friendly: Maintains responsive interactions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Future Optimizations
&lt;/h2&gt;

&lt;p&gt;While the current implementation is well-optimized for typical use cases, there are several potential future enhancements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Web Workers Integration&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Offload data processing to a separate thread&lt;/li&gt;
&lt;li&gt;Improve main thread performance for larger datasets&lt;/li&gt;
&lt;li&gt;Enable more complex data transformations without UI impact&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Progressive Loading&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement virtual scrolling for historical data&lt;/li&gt;
&lt;li&gt;Load data chunks based on viewport&lt;/li&gt;
&lt;li&gt;Improve initial load performance&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building an efficient real-time chart requires careful consideration of data management, render optimization, and user experience. While the implementation is more complex than basic examples, the benefits in performance and reliability make it worthwhile for production applications.&lt;br&gt;
The key is finding the right balance between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update frequency vs. performance&lt;/li&gt;
&lt;li&gt;Data accuracy vs. memory usage&lt;/li&gt;
&lt;li&gt;Visual quality vs. render speed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This solution provides a solid foundation that can be adapted for various real-time visualization needs while maintaining excellent performance characteristics.&lt;/p&gt;

&lt;p&gt;Any suggestions for further improvements would be appreciated 😀&lt;/p&gt;

</description>
      <category>react</category>
      <category>typescript</category>
      <category>websocket</category>
      <category>chartjs</category>
    </item>
    <item>
      <title>How a Compiler Works: A Simple Breakdown</title>
      <dc:creator>TusharIbtekar</dc:creator>
      <pubDate>Mon, 09 Sep 2024 16:49:29 +0000</pubDate>
      <link>https://forem.com/ibtekar/how-a-compiler-works-a-simple-breakdown-okb</link>
      <guid>https://forem.com/ibtekar/how-a-compiler-works-a-simple-breakdown-okb</guid>
      <description>&lt;p&gt;Ever wondered how your code gets converted into something a computer can actually run? That's where a compiler comes in! Think of it like a translator, turning your high-level code (which humans understand) into machine code (which computers understand).&lt;/p&gt;

&lt;p&gt;In this blog, we'll walk through the stages of a compiler with a simple example. By the end, you'll have a clearer picture of what's going on under the hood when you hit "run."&lt;/p&gt;

&lt;h3&gt;
  
  
  Our Example Code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x = 10 + y * z;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a basic expression that assigns a value to &lt;code&gt;x&lt;/code&gt;. But before &lt;code&gt;x&lt;/code&gt; can hold the result, the compiler must break this down step-by-step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvoazi6g4aubz5ko2fkb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvoazi6g4aubz5ko2fkb7.png" alt="Image description" width="800" height="1823"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Steps of a Compiler
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lexical Analysis&lt;/strong&gt;&lt;br&gt;
The compiler reads your code and breaks it into tokens - basic units like keywords, variables, and operators. For example, in &lt;code&gt;x = 10 + y * z;&lt;/code&gt;, the tokens are &lt;code&gt;x&lt;/code&gt;, &lt;code&gt;=&lt;/code&gt;, &lt;code&gt;10&lt;/code&gt;, &lt;code&gt;+&lt;/code&gt;, &lt;code&gt;y&lt;/code&gt;, &lt;code&gt;*&lt;/code&gt;, &lt;code&gt;z&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parsing&lt;/strong&gt;&lt;br&gt;
The tokens are organized into an Abstract Syntax Tree (AST), showing the structure and order of operations. This tree represents how the operations in your code are related.&lt;br&gt;
The compiler creates a symbol table to track variables and their details. It keeps track of which variables exist and their types.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Semantic Analysis&lt;/strong&gt;&lt;br&gt;
The compiler checks the AST for logical correctness. It ensures variables are declared and used properly, and that operations are valid, updating the symbol table as necessary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intermediate Representation (IR)&lt;/strong&gt;&lt;br&gt;
The AST is converted into an Intermediate Representation (IR). This is a simplified version of your code, breaking down complex operations into more manageable steps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimization&lt;/strong&gt;&lt;br&gt;
The optimizer improves the IR without changing the program’s behavior - for example, folding constant expressions, removing dead code, and reusing already-computed values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Generation&lt;/strong&gt;&lt;br&gt;
Finally, the compiler translates the IR into machine code or assembly code that the CPU can execute. This low-level code consists of direct instructions for the computer's hardware.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
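To make the first stage concrete, here is a toy lexer for the example expression (a sketch in Go; real lexers also classify token kinds, track positions, and handle many more cases):

```go
package main

import (
	"fmt"
	"unicode"
)

// tokenize splits a simple expression into tokens: identifiers,
// numbers, and single-character operators/punctuation.
func tokenize(src string) []string {
	var tokens []string
	runes := []rune(src)
	for i := 0; i < len(runes); {
		r := runes[i]
		switch {
		case unicode.IsSpace(r):
			i++ // skip whitespace
		case unicode.IsLetter(r):
			j := i
			for j < len(runes) && unicode.IsLetter(runes[j]) {
				j++
			}
			tokens = append(tokens, string(runes[i:j])) // identifier
			i = j
		case unicode.IsDigit(r):
			j := i
			for j < len(runes) && unicode.IsDigit(runes[j]) {
				j++
			}
			tokens = append(tokens, string(runes[i:j])) // number literal
			i = j
		default:
			tokens = append(tokens, string(r)) // operator or punctuation: = + * ;
			i++
		}
	}
	return tokens
}

func main() {
	fmt.Println(tokenize("x = 10 + y * z;"))
	// → [x = 10 + y * z ;]
}
```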

&lt;h3&gt;
  
  
  Wrapping Up
&lt;/h3&gt;

&lt;p&gt;The process - from tokenizing your code to generating machine code - ensures your program is correctly interpreted and executed by the computer. Each step plays a crucial role in transforming your high-level instructions into something the machine can understand.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
