<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tensorlake</title>
    <description>The latest articles on Forem by Tensorlake (@tensorlake).</description>
    <link>https://forem.com/tensorlake</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F10835%2Fac1cc915-0d29-4bf0-a2a3-68fee95acfee.png</url>
      <title>Forem: Tensorlake</title>
      <link>https://forem.com/tensorlake</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tensorlake"/>
    <language>en</language>
    <item>
      <title>Everything you need to know about OpenAI GPT-5.4 ✌️</title>
      <dc:creator>Shrijal Acharya</dc:creator>
      <pubDate>Sat, 21 Mar 2026 14:08:05 +0000</pubDate>
      <link>https://forem.com/tensorlake/everything-you-need-to-know-about-openai-gpt-54-3lgm</link>
      <guid>https://forem.com/tensorlake/everything-you-need-to-know-about-openai-gpt-54-3lgm</guid>
      <description>&lt;p&gt;OpenAI’s new GPT-5.4 is here, and on paper at least, it looks like one of their strongest all-rounder models so far.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2rrzzlrqx2wc2szp0do.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2rrzzlrqx2wc2szp0do.png" alt="GPT 5.4 release blog"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;In this article, we take a quick look at OpenAI GPT-5.4, go through its official benchmarks, and then compare it in one small coding task against Anthropic’s general-purpose model, Claude Sonnet 4.6, to see how it actually performs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We briefly go over what GPT-5.4 is, what OpenAI is claiming with this model, and why it looks like one of their strongest all-rounder releases so far.&lt;/li&gt;
&lt;li&gt;We look at the official benchmarks around coding, reasoning, tool use, and computer-use capabilities to get an idea of how strong the model looks on paper.&lt;/li&gt;
&lt;li&gt;Instead of relying only on benchmarks, we also compare GPT-5.4 against Claude Sonnet 4.6 in one small, quick coding task (not enough to judge fully, but still...).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Brief on OpenAI GPT-5.4
&lt;/h2&gt;

&lt;p&gt;So, before we jump into the coding test, let me give you a quick brief on GPT-5.4, because this is one of OpenAI’s biggest model releases in a while.&lt;/p&gt;

&lt;p&gt;OpenAI released GPT-5.4 on March 5, 2026, and they are positioning it as their most capable and efficient frontier model for professional work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjgdtgurct57gfvfsblk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjgdtgurct57gfvfsblk.png" alt="OpenAI claiming gpt 5.4 is good at frontend"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What makes this model interesting is that OpenAI is not selling it as just a coding model, and not just a reasoning model either. They are basically pitching it as an &lt;strong&gt;all-round professional work&lt;/strong&gt; model that combines strong reasoning, strong coding, better tool use, and much better performance on practical work like spreadsheets, presentations, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfmlidftq907s3rk2c1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfmlidftq907s3rk2c1l.png" alt="Sam Altman claiming the model is good at real life tasks like working with spreadsheets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Honestly, this part matters more than it sounds. A lot of real AI work is not just prompting or writing code, it is dealing with PDFs, spreadsheets, slides, and all kinds of unstructured data. That is also where something like &lt;a href="https://tensorlake.ai" rel="noopener noreferrer"&gt;Tensorlake&lt;/a&gt; makes sense, because it helps turn that mess into something models can actually work with.&lt;/p&gt;

&lt;p&gt;And the specs are also pretty wild. GPT-5.4 supports a &lt;strong&gt;1.05M token&lt;/strong&gt; context window with 128K max output tokens, which gives you serious room to work with and helps the model keep track of long, multi-step tasks. Also, a thing to note is that the knowledge cutoff for this model is &lt;strong&gt;August 31, 2025&lt;/strong&gt;.&lt;/p&gt;
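&lt;p&gt;If you want a quick feel for those limits, here is a tiny sanity check using the numbers above. Note that whether output tokens count against the context window is my assumption here, not something OpenAI has confirmed:&lt;/p&gt;

```python
# Quick sanity check against GPT-5.4's stated limits (1.05M context,
# 128K max output), as listed in the article. Treating output tokens
# as counting against the context window is an assumption.
CONTEXT_WINDOW = 1_050_000
MAX_OUTPUT = 128_000

def fits(prompt_tokens: int, requested_output: int) -> bool:
    """True if a request stays inside the stated limits."""
    return (prompt_tokens + requested_output <= CONTEXT_WINDOW
            and requested_output <= MAX_OUTPUT)

print(fits(900_000, 100_000))    # fits comfortably
print(fits(1_000_000, 128_000))  # would exceed the window
```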

&lt;p&gt;Now, let's talk about the part we mostly care about.&lt;/p&gt;

&lt;p&gt;On the official OpenAI benchmarks, &lt;strong&gt;GPT-5.4 scores 57.7% on SWE-Bench Pro (Public)&lt;/strong&gt;, which puts it basically side by side with the coding-focused GPT-5.3-Codex at &lt;strong&gt;56.8%&lt;/strong&gt;. So yes, OpenAI says this general-purpose model slightly edges out their dedicated coding model (one I personally have not had the best experience with compared to Claude models), and that is kind of wild to think about.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6yjg9ftpq3vexntmndl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6yjg9ftpq3vexntmndl.png" alt="gpt 5.4 benchmark"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenAI says GPT-5.4 is their &lt;strong&gt;first general-purpose model with native computer-use capabilities&lt;/strong&gt;, which is a pretty big deal. That means it is built not just to generate text or code, but also to operate across software, work from screenshots, and handle more agent-like workflows. On &lt;strong&gt;OSWorld-Verified&lt;/strong&gt;, it scores &lt;strong&gt;75.0%&lt;/strong&gt;, which OpenAI says is above human performance on that benchmark. 🤯&lt;/p&gt;

&lt;p&gt;One thing I also like here is that OpenAI is claiming GPT-5.4 is their &lt;strong&gt;most factual model yet&lt;/strong&gt;. It is said to be 18% less likely to contain any errors compared to GPT-5.2.&lt;/p&gt;

&lt;p&gt;For API developers, pricing matters, of course.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5667dzgr6es7701tau2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5667dzgr6es7701tau2.png" alt="gpt 5.4 pricing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The standard &lt;strong&gt;GPT-5.4&lt;/strong&gt; model is listed at &lt;strong&gt;$2.50 per 1M input tokens&lt;/strong&gt;, &lt;strong&gt;$0.25 cached input&lt;/strong&gt;, and &lt;strong&gt;$15 per 1M output tokens&lt;/strong&gt;. &lt;strong&gt;GPT-5.4 Pro&lt;/strong&gt; is way more expensive at &lt;strong&gt;$30 input&lt;/strong&gt; and &lt;strong&gt;$180 output per 1M tokens&lt;/strong&gt;, and OpenAI says it can take several minutes on hard tasks, so that one is clearly for cases where you really want the best answer and are okay paying for it.&lt;/p&gt;
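&lt;p&gt;To put those prices in perspective, here is a rough sketch comparing what the same workload would cost on each tier. The per-token rates are just the listed prices above, hard-coded, not pulled from any SDK:&lt;/p&gt;

```python
# Rough cost comparison of GPT-5.4 vs GPT-5.4 Pro for the same workload,
# using the listed per-1M-token prices (assumed from the article, not an
# official SDK constant). Ignores cached-input discounts for simplicity.
PRICING = {
    "gpt-5.4":     {"input": 2.50, "output": 15.0},
    "gpt-5.4-pro": {"input": 30.0, "output": 180.0},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request on the given tier."""
    p = PRICING[model]
    return input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]

# Same workload on both tiers: 200K input, 30K output
for model in PRICING:
    print(f"{model}: ${cost(model, 200_000, 30_000):.2f}")
```

The gap is roughly 12x for the same tokens, which is why the Pro tier only makes sense when you genuinely need the best answer.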

&lt;blockquote&gt;
&lt;p&gt;💁 The normal GPT-5.4 model is probably the one most people will actually care about day to day, and that's what I'd prefer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And as always, benchmarks are benchmarks. But on paper at least, GPT-5.4 looks like one of the strongest all-rounder models OpenAI has shipped so far.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick Coding Test
&lt;/h2&gt;

&lt;p&gt;As this is a general-purpose model rather than a coding-tuned one, judging it solely on coding is not entirely fair. But as developers, coding is what we mostly care about anyway, so to give you an idea of how this model performs, we will run a quick test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l1l9xllt29e4rzqakqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l1l9xllt29e4rzqakqa.png" alt="gpt 5.4 benchmark compared to 5.3 codex"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, there's not much difference in SWE-Bench between GPT-5.4 and GPT-5.3-Codex:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GPT-5.4&lt;/strong&gt;: Latency (s): 1,053, Accuracy: 57.7%, Effort: xhigh&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT-5.3-Codex&lt;/strong&gt;: Latency (s): 1,114, Accuracy: 57.2%, Effort: xhigh&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But to give you an idea of what to expect from this model in coding, I will run one small, quick test.&lt;/p&gt;

&lt;p&gt;Let's take two general models, one from Anthropic, Claude Sonnet 4.6, and one from OpenAI, GPT-5.4, &lt;strong&gt;not pro&lt;/strong&gt;, and compare them against each other to show the difference in their coding skills.&lt;/p&gt;

&lt;p&gt;For the test, we will use the following CLI coding agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Sonnet 4.6:&lt;/strong&gt; Claude Code (Anthropic’s terminal-based agentic coding tool)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI GPT-5.4:&lt;/strong&gt; Codex CLI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As GPT-5.4 is said to be strong in frontend, why not test it on frontend itself?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3mnuibaxl0c6acx4npd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3mnuibaxl0c6acx4npd.png" alt="gpt 5.4 frontend claim"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Test: Figma Design Clone with MCP
&lt;/h3&gt;

&lt;p&gt;In this test, we'll be comparing both models on a Figma design: a complex dashboard with a lot going on in the UI.&lt;/p&gt;

&lt;p&gt;Here's the prompt, with the Figma design that I'll ask both models to clone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Prompt:

Build a &lt;span class="gs"&gt;**pixel-accurate clone**&lt;/span&gt; of the attached Figma design frame using the &lt;span class="gs"&gt;**provided Next.js project**&lt;/span&gt; as the starting point. Do &lt;span class="gs"&gt;**not**&lt;/span&gt; create a new project. Instead, implement the UI inside the existing codebase.

https://www.figma.com/design/8quNKljV0spv67VAGsA75D/Dashboard-Design-Concept--Community---Copy-?node-id=69-123&amp;amp;t=Tvu2UB7UDMqkvPRb-4

Please match the design as closely as possible, with close attention to layout, spacing, alignment, typography, colors, borders, shadows, corner radius, and overall visual balance.

Requirements:
&lt;span class="p"&gt;
*&lt;/span&gt; use the existing &lt;span class="gs"&gt;**Next.js**&lt;/span&gt; setup
&lt;span class="p"&gt;*&lt;/span&gt; keep the code clean and componentized
&lt;span class="p"&gt;*&lt;/span&gt; make the page responsive without changing the intended design
&lt;span class="p"&gt;*&lt;/span&gt; use semantic HTML where appropriate
&lt;span class="p"&gt;*&lt;/span&gt; avoid adding your own design decisions unless necessary
&lt;span class="p"&gt;*&lt;/span&gt; if any part of the design is unclear, make the most reasonable choice and stay visually consistent

Prioritize &lt;span class="gs"&gt;**design accuracy first**&lt;/span&gt;, then code quality.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  GPT-5.4
&lt;/h4&gt;

&lt;p&gt;GPT-5.4 pretty much one-shotted the entire implementation in one go, which was honestly nice to see. It did not need any follow-up prompt, no fixing, nothing. It just took the Figma frame through MCP and started building the whole thing right away.&lt;/p&gt;

&lt;p&gt;The final result actually looked decent. I would not call it pixel-perfect by any means, but compared to Claude Sonnet 4.6, the implementation looked noticeably better overall. That said, the whole thing still feels more like a static picture of the design than an interface you can actually interact with.&lt;/p&gt;

&lt;p&gt;Time-wise, it took roughly &lt;strong&gt;5 minutes&lt;/strong&gt; to get to a working build.&lt;/p&gt;

&lt;p&gt;Here’s the demo:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/4yxzh0qxm5c"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://gist.github.com/shricodev/f6edd67c32037c0a69def1b10985855d" rel="noopener noreferrer"&gt;GPT-5.4 Code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Token usage looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Total Token Usage:&lt;/strong&gt; 166,501&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input Token Usage:&lt;/strong&gt; 151,595&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cached Input Tokens:&lt;/strong&gt; 1,291,776&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Token Usage:&lt;/strong&gt; 14,906&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning Tokens:&lt;/strong&gt; 1,479&lt;/li&gt;
&lt;/ul&gt;
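&lt;p&gt;As a rough back-of-envelope, those numbers translate to API cost like this, assuming the listed pricing applies and cached input actually bills at the cached rate (real billing may differ):&lt;/p&gt;

```python
# Back-of-envelope API cost for the GPT-5.4 run above, using the listed
# pricing ($2.50/1M input, $0.25/1M cached input, $15/1M output).
# Assumes cached input bills at the cached rate; actual billing may differ.
input_t, cached_t, output_t = 151_595, 1_291_776, 14_906

run_cost = (
    input_t / 1e6 * 2.50      # fresh input tokens
    + cached_t / 1e6 * 0.25   # cached input tokens
    + output_t / 1e6 * 15.00  # output tokens
)
print(f"~${run_cost:.2f}")  # prints ~$0.93
```

Interestingly, the cached input ends up being a third of the bill despite the 10x discount, simply because the agent re-reads so much context.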

&lt;p&gt;And the following code changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; 3 files changed, 803 insertions(+), 82 deletions(-)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To be honest, I still would not say this is the kind of code implementation you can just ship straight to production and call it done. But for a one-shot frontend clone from a Figma frame, this was a pretty solid attempt.&lt;/p&gt;
&lt;h4&gt;
  
  
  Claude Sonnet 4.6
&lt;/h4&gt;

&lt;p&gt;Claude Sonnet 4.6 went straight into the implementation right away. It did run into an issue at first, not really a build error, but more of one of those annoying &lt;strong&gt;Next.js image gotchas&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kdi2h66gcdz807p0kom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kdi2h66gcdz807p0kom.png" alt="claude sonnet 4.6 image impl error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, I gave it a quick follow-up prompt, and almost instantly, it fixed the issue and came back with a decent implementation.&lt;/p&gt;

&lt;p&gt;As you’d expect, it did manage to clone the project structure and get the UI in place. And again, the same issue, there's just no functionality whatsoever. It just feels like a picture with no interactivity.&lt;/p&gt;

&lt;p&gt;Here’s the demo:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/L9l8cGBvC1U"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://gist.github.com/shricodev/61d485a452f2aab8eb41ceaa31ddd9f9" rel="noopener noreferrer"&gt;Claude Sonnet 4.6 Code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Time-wise, it took &lt;strong&gt;9 minutes 56 seconds&lt;/strong&gt; to get to a working result, and the follow-up fix was pretty much instant.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0ugjnobb005rsla4bnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0ugjnobb005rsla4bnr.png" alt="implementation checklist"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Token usage, based on Claude Code’s model stats, looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input Token Usage:&lt;/strong&gt; 84&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Token Usage:&lt;/strong&gt; 35.4K&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgof5gb4r1ufiy1vpwpa4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgof5gb4r1ufiy1vpwpa4.png" alt="token usage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the following code changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; 10 files changed, 1017 insertions(+), 84 deletions(-)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To be honest, I’m not really impressed, but I’m not disappointed either. The result feels pretty neutral overall. It was able to use tools, get fairly close to the UI, and produce something usable for comparison, but the implementation itself feels a bit weird and not all that convincing.&lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So, after all the benchmarks, claims, and hype, I think the fairest takeaway is this: GPT-5.4 looks very strong on paper, and for a lot of people it will be a genuine upgrade, but it still doesn’t seem like the best model you can get for coding.&lt;/p&gt;

&lt;p&gt;So yeah, I’d say GPT-5.4 is probably one of the strongest all-rounder models OpenAI has shipped so far, but whether it beats Claude, be it Sonnet or Opus, for coding in real usage is still something you’ll want to judge from your actual hands-on testing, not just benchmarks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5t6hnxig38el1bsey06l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5t6hnxig38el1bsey06l.gif" alt="slect random gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And honestly, that’s the real takeaway here anyway.&lt;/p&gt;

&lt;p&gt;These models keep getting better at a speed that is honestly hard to keep up with. So rather than getting too stuck on who won one benchmark, the better thing to do is probably to keep building, keep testing, and keep learning how to use these models better for your use case.&lt;/p&gt;

&lt;p&gt;What do you think, is GPT-5.4 actually that good, or is Claude still your go-to? 👇&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__1127015"&gt;
    &lt;a href="/shricodev" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1127015%2F1c5e48a2-f602-4e7d-8312-3c0322d155c6.jpg" alt="shricodev image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/shricodev"&gt;Shrijal Acharya&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/shricodev"&gt;Full Stack SDE • Open-Source Contributor • Collaborator @Oppia • Mail for collaboration&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>🔥Claude Opus 4.6 vs. Sonnet 4.6 Coding Comparison ✅</title>
      <dc:creator>Shrijal Acharya</dc:creator>
      <pubDate>Thu, 05 Mar 2026 14:04:59 +0000</pubDate>
      <link>https://forem.com/tensorlake/claude-opus-46-vs-sonnet-46-coding-comparison-55jn</link>
      <guid>https://forem.com/tensorlake/claude-opus-46-vs-sonnet-46-coding-comparison-55jn</guid>
      <description>&lt;p&gt;Anthropic recently dropped the updated &lt;strong&gt;Claude 4.6&lt;/strong&gt; lineup, and as usual, the two names everyone cares about are &lt;strong&gt;Opus 4.6&lt;/strong&gt; and &lt;strong&gt;Sonnet 4.6&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Opus is the expensive “best possible” model, and Sonnet is the cheaper, more general one that a lot of people actually use day to day. So I wanted to see what the real gap looks like when you ask both to build something serious, not a toy demo.&lt;/p&gt;

&lt;p&gt;Benchmark-wise, there’s a difference of course, but it doesn’t look that huge when it comes to SWE and agentic coding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumytppa0wbbydq6y6oxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumytppa0wbbydq6y6oxq.png" alt="Claude Opus 4.6 vs. Claude Sonnet 4.6 Benchmark comparison"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I kept it super basic: one test (but a big one), same prompt, same workflow. I just compared how close they got without me stepping in.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;NOTE:&lt;/strong&gt; Don’t take the result of this test as a hard rule. This is just one real-world coding task, run in my setup, to give you a feel for how these two models performed for me.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;If you just want the takeaway, here’s the deal with these models:&lt;/p&gt;

&lt;p&gt;First, &lt;strong&gt;Opus 4.6 is the peak for coding right now&lt;/strong&gt;. At the time of writing, it’s basically the OG, and nothing else comes that close.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Opus 4.6&lt;/strong&gt; had a cleaner run. It hit a test failure too, but fixed it fast, shipped a working CLI + Tensorlake integration, and did it with way fewer tokens. Rough API-equivalent cost (output only) came out around ~$1.00, which is kind of wild for how big the project is.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Sonnet 4.6&lt;/strong&gt; was surprisingly close for a cheaper, more general model. It built most of the project and the CLI was mostly fine, but it ran into the same issue as Opus and couldn’t fully recover. Even after an attempted fix, Tensorlake integration still didn’t work. Output-only cost was about ~$0.87, but it used way more time and tokens overall to get there.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Obviously, this isn’t a test to “compare” the two head-to-head. It’s just to see the difference in code quality. In general, there’s never really been a fair comparison between Opus and Sonnet since their very first launch, Opus has always been on another level.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Test Workflow
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;NOTE:&lt;/strong&gt; Before we start this test, I just want to clarify one thing. I'm not doing this test to compare whether Sonnet 4.6 is better than Opus 4.6 for coding, because obviously Opus 4.6 is a lot better. This is to give you an idea of how well Opus 4.6 performs compared to Sonnet.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For the test, we will use everyone's favorite CLI coding agent, &lt;strong&gt;Claude Code&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As both models are from Anthropic, it works best for both and is &lt;strong&gt;not biased&lt;/strong&gt; toward either.&lt;/p&gt;

&lt;p&gt;We will test both models on one decently complex task:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task:&lt;/strong&gt; Build a complete Tensorlake project in Python called &lt;code&gt;research_pack&lt;/code&gt;, a “Deep Research Pack” generator that turns a topic into:
&lt;ul&gt;
&lt;li&gt;a citation-backed &lt;strong&gt;Markdown report&lt;/strong&gt;, and&lt;/li&gt;
&lt;li&gt;a machine-readable &lt;strong&gt;source library JSON&lt;/strong&gt; with extracted text, metadata, summaries, you get the idea.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also has to ship a nice CLI called &lt;strong&gt;&lt;code&gt;research-pack&lt;/code&gt;&lt;/strong&gt; with commands like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;research-pack run "&amp;lt;topic&amp;gt;"&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;research-pack status &amp;lt;run_id&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;research-pack open &amp;lt;run_id&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
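&lt;p&gt;Just so it's concrete, here is a minimal sketch of what that CLI surface could look like. The subcommand names come from the task spec; everything else (argparse structure, argument names) is my own illustration, not the code either model generated:&lt;/p&gt;

```python
# Minimal sketch of the `research-pack` CLI surface described above,
# using argparse. Subcommand names are from the task spec; the structure
# is illustrative only, not the models' actual implementation.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="research-pack")
    sub = parser.add_subparsers(dest="command", required=True)

    run = sub.add_parser("run", help="start a research run for a topic")
    run.add_argument("topic")

    status = sub.add_parser("status", help="check on an existing run")
    status.add_argument("run_id")

    open_cmd = sub.add_parser("open", help="open a finished run's report")
    open_cmd.add_argument("run_id")
    return parser

# Example invocation with an explicit argv list
args = build_parser().parse_args(["run", "quantum error correction"])
print(args.command, "->", args.topic)
```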

&lt;p&gt;We’ll compare the overall feel, code quality, token usage, cost, and time to complete the build.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;NOTE:&lt;/strong&gt; Just like my previous tests, I’ll share each model’s changes as a &lt;code&gt;.patch&lt;/code&gt; file so you can reproduce the exact result locally with &lt;code&gt;git apply &amp;lt;file.patch&amp;gt;&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Why Tensorlake?
&lt;/h3&gt;

&lt;p&gt;Tensorlake is a solid choice for this Opus 4.6 vs Sonnet 4.6 test because it is a real platform with enough complexity to quickly show whether a model can actually build something end to end. It has an agent runtime with durable execution, sandboxed code execution, and built-in observability, so the test is not just writing a few functions; it is wiring up a production workflow.&lt;/p&gt;

&lt;p&gt;And selfishly, it is also a good dogfood moment. 👀 If a model can spin up a Tensorlake project from scratch and get it working, that is a pretty strong sign of two things: how scary good these recent models have gotten, and how usable Tensorlake is for building serious agent-style pipelines.&lt;/p&gt;




&lt;h2&gt;
  
  
  Coding Tests
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Test: Deep Research Agent
&lt;/h3&gt;

&lt;p&gt;For this test, both models had to build the &lt;code&gt;research_pack&lt;/code&gt; Tensorlake project in Python. The goal was simple: give it a topic, it crawls stuff, figures out sources, improves them, and spits out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;report.md&lt;/code&gt; with &lt;code&gt;[S1]&lt;/code&gt; style citations&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;library.json&lt;/code&gt; with the full source library&lt;/li&gt;
&lt;li&gt;a clean CLI: &lt;code&gt;research-pack run/status/open&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;plus Tensorlake deploy support so you can trigger it as an app, not just locally&lt;/li&gt;
&lt;/ul&gt;
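&lt;p&gt;To make the expected artifacts concrete, here is a tiny sketch of the two outputs the prompt asks for. The field names and file layout are illustrative guesses on my part, not the schema either model actually produced:&lt;/p&gt;

```python
# Sketch of the two output artifacts the task asks for: a Markdown report
# with [S1]-style citations and a machine-readable source library.
# Field names and paths here are illustrative, not the models' schema.
import json
from pathlib import Path

sources = [
    {"id": "S1", "url": "https://example.com/paper", "summary": "Example source."},
]

report = "# Topic Report\n\nKey finding backed by a source [S1].\n"

out = Path("out")
out.mkdir(exist_ok=True)
(out / "report.md").write_text(report)
(out / "library.json").write_text(json.dumps({"sources": sources}, indent=2))
```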

&lt;p&gt;You can find the prompt I’ve used here: &lt;a href="https://gist.github.com/shricodev/4a47d65ec12229bdfda2b836b226eb50" rel="noopener noreferrer"&gt;Research Agent Prompt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing that was genuinely surprising is that both models ran into basically the &lt;strong&gt;exact same issue&lt;/strong&gt; during the run.&lt;/p&gt;

&lt;p&gt;That shows how similarly these models can behave, which is kind of creepy. If you give them the exact same task and constraints, they’ll often make similar choices. I wanted to call that out because you might’ve noticed the same pattern too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieqnz7blm1i18d4ypxg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieqnz7blm1i18d4ypxg5.png" alt="AI models behaving similarly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not surprisingly, &lt;strong&gt;Opus fixed it much faster and with way fewer tokens&lt;/strong&gt;. Sonnet took longer, burned a lot more context trying to debug it, and even after the fix pass, it still didn’t fully work.&lt;/p&gt;




&lt;h3&gt;
  
  
  Claude Opus 4.6
&lt;/h3&gt;

&lt;p&gt;Opus was pretty straightforward.&lt;/p&gt;

&lt;p&gt;It did hit a failure while running tests, but it was a quick fix. After that, everything looked clean: CLI worked, offline mode worked, and overall all the feature flags seem to work perfectly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatt1ijsaq7uy4d380p2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatt1ijsaq7uy4d380p2o.png" alt="Opus 4.6 project build error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the acceptance checklist it generated at the end. I really love that it created this only after making sure all tests pass and everything is in place; that’s how it’s done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqzh8pvcjr55pcoomiyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqzh8pvcjr55pcoomiyp.png" alt="Opus 4.6 generating checklist of work done"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's the demo of the working CLI:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The API key visible in the below demo videos has been revoked. Please don’t try to use it.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/Xl_bAuPbVLg"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;...and how it integrates with Tensorlake:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/vzcNRkwQPAM"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;You can find the code it generated here in a patch file: &lt;a href="https://github.com/tensorlakeai/tensorlake-website/tree/main/research-pack/research_pack" rel="noopener noreferrer"&gt;Opus 4.6 Patch file&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; ~$1.001&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;NOTE:&lt;/strong&gt; Since I'm on a Claude plan rather than pay-per-use API billing, this cost is a rough estimate based on input/output tokens.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Duration:&lt;/strong&gt; 20 minutes 6 seconds + ~1 min 40 sec for the fix&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Token Usage:&lt;/strong&gt; 33.2K + ~4K for the fix&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; 156 files changed, 95013 insertions(+)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ℹ️ You can see the complexity of the project for yourself, and you’ll probably be shocked at how good these models have gotten. It’s no longer just boilerplate or small refactors. They can build a complete, end-to-end project from scratch from a single prompt. We’re officially in the real AI era.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Claude Sonnet 4.6
&lt;/h3&gt;

&lt;p&gt;Sonnet was… close, but not quite as clean as Opus.&lt;/p&gt;

&lt;p&gt;Just like Opus, it ran into a test failure during the run. This is one of those things you’ll notice with similar models: same prompt, same codebase, and they sometimes hit the exact same weird issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4m06o4om4xy8h0n9avap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4m06o4om4xy8h0n9avap.png" alt="Claude Sonnet 4.6 project build error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the demo of the CLI. You’ll see it mostly working, but there are some rough edges, and it’s not as well implemented as Opus’s version:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/A_4ZiT30pGs"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;...and how it integrates with Tensorlake:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/kzzzrobQ15I"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;As you can see, the Tensorlake integration isn't working. Sonnet did attempt a fix, but still couldn't reach a working state. Overall, though, it was super close.&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://github.com/tensorlakeai/tensorlake-website/tree/main/research-pack-sonnet" rel="noopener noreferrer"&gt;Sonnet 4.6 Patch&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; ~$0.87&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ Same as Opus 4.6, this is an approximate cost based on the input/output tokens.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Duration:&lt;/strong&gt; 33 minutes 48 seconds + ~3m 18s for the attempted fix&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Token Usage:&lt;/strong&gt; 52.9K + ~5K for the fix (didn't work)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; 88 files changed, 23253 insertions(+)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;🤷‍♂️ I can’t really complain about Sonnet’s performance, other than this one issue. It still got almost everything working. And to be fair, Sonnet isn’t Anthropic’s flagship coding model like Opus. It’s more of a general-purpose model, and Opus also comes with a pretty big cost difference, so the gap in code quality is kind of expected.&lt;/p&gt;

&lt;p&gt;And please don’t try using the API keys shown in the videos, as they’ve already been revoked.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Opus as a lineup is just too good. If you want an end-to-end product that works most of the time with minimal hand-holding, go with Opus. If you want something cheaper, and you’re okay finishing the last bit yourself, Sonnet is still solid.&lt;/p&gt;

&lt;p&gt;Even in this one test, you can already see the gap in implementation quality, token usage, and time spent.&lt;/p&gt;

&lt;p&gt;And if Anthropic can cut Opus to half its price, or even get it close to Sonnet’s, it’d be over for most other models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjd3t2007csw2j79ko0e.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjd3t2007csw2j79ko0e.gif" alt="Shocked GIF"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For me, the best way to use these models is still the same: let them build most of it fast, then run it, test it, and clean up the rough parts yourself.&lt;/p&gt;

&lt;p&gt;Let me know your thoughts in the comments. ✌️&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>python</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The End of Database-Backed Workflow Engines: Building GraphRAG on Object Storage</title>
      <dc:creator>Diptanu Gon Choudhury</dc:creator>
      <pubDate>Tue, 03 Feb 2026 14:09:26 +0000</pubDate>
      <link>https://forem.com/tensorlake/the-end-of-database-backed-workflow-engines-building-graphrag-on-object-storage-295b</link>
      <guid>https://forem.com/tensorlake/the-end-of-database-backed-workflow-engines-building-graphrag-on-object-storage-295b</guid>
      <description>&lt;p&gt;GraphRAG sounds elegant in theory: build a knowledge graph from your documents, traverse it intelligently, and get better answers than vanilla RAG.&lt;/p&gt;

&lt;p&gt;Then you look at the compute requirements.&lt;/p&gt;

&lt;p&gt;To build a GraphRAG system, you need to: parse documents, chunk text, generate embeddings for every chunk, extract concepts from every chunk, compute pairwise similarities, build graph edges, and store everything in a queryable format. For a single 100-page PDF, that’s thousands of API calls, millions of similarity computations, and hours of processing.&lt;/p&gt;

&lt;p&gt;Now imagine doing this for 10,000 documents. Or 100,000.&lt;/p&gt;
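&lt;p&gt;To make that scale concrete, here’s a back-of-envelope sketch. The per-page chunk count is an assumption for illustration, not a measured number:&lt;/p&gt;

```python
# Rough cost model for building GraphRAG over one 100-page PDF.
pages_per_doc = 100
chunks_per_page = 20                      # assumption: paragraph-level chunks
chunks = pages_per_doc * chunks_per_page  # 2,000 chunks

embed_calls = chunks                      # one embedding call per chunk
extract_calls = chunks                    # one concept-extraction call per chunk
pairwise = chunks * (chunks - 1) // 2     # similarity for every chunk pair

print(embed_calls + extract_calls)        # 4000 API calls per document
print(pairwise)                           # 1999000 similarity computations

docs = 10_000
print(docs * (embed_calls + extract_calls))  # 40000000 API calls for the corpus
```

Thousands of API calls and roughly two million similarity computations for a single document: exactly the shape of workload that needs fan-out.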




&lt;h2&gt;
  
  
  What GraphRAG Actually Needs from Infrastructure
&lt;/h2&gt;

&lt;p&gt;The algorithm is straightforward: chunk, embed, extract concepts, build edges, traverse. The infrastructure requirements are not.&lt;/p&gt;

&lt;h3&gt;
  
  
  Parallel execution
&lt;/h3&gt;

&lt;p&gt;Documents are independent. Processing them sequentially wastes time. You need a system that can spin up workers on demand and distribute work across them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Heterogeneous compute
&lt;/h3&gt;

&lt;p&gt;PDF parsing needs memory. Embedding generation is I/O-bound waiting on API calls. Concept extraction needs CPU for NLP models. Running all of these on the same machine means over-provisioning for the hungriest step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Durable execution
&lt;/h3&gt;

&lt;p&gt;A 10-hour ingestion job will fail somewhere. Network timeout. Rate limit. OOM. When step 3 fails, it needs to read step 2’s output from somewhere durable. Without checkpointing, you start over from zero.&lt;/p&gt;
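&lt;p&gt;A minimal sketch of the checkpointing idea, using a temporary directory as a stand-in for durable object storage (this is an illustration, not any engine’s actual implementation):&lt;/p&gt;

```python
import json
import pathlib
import tempfile

# Stand-in for an object store: each step writes its output durably,
# so a retry reads the last completed step instead of starting over.
STORE = pathlib.Path(tempfile.mkdtemp())

def run_step(name, fn, *inputs):
    key = STORE / (name + ".json")       # checkpoint location for this step
    if key.exists():                     # step already completed: reuse it
        return json.loads(key.read_text())
    result = fn(*inputs)
    key.write_text(json.dumps(result))   # checkpoint before moving on
    return result

def parse(doc):
    return doc.split(". ")

def embed(chunks):
    return [[float(len(c))] for c in chunks]  # fake per-chunk embedding

chunks = run_step("parse", parse, "First sentence. Second sentence")
vectors = run_step("embed", embed, chunks)
# If the process dies after this point, a rerun skips both steps entirely.
```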

&lt;h3&gt;
  
  
  Job orchestration
&lt;/h3&gt;

&lt;p&gt;You need something that spins up workers, tracks dependencies, retries failures, aggregates partial results, and decides whether to proceed or abort.&lt;/p&gt;




&lt;h2&gt;
  
  
  The DIY Stack
&lt;/h2&gt;

&lt;p&gt;Building this yourself means assembling:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5lgu8admb9gkmc8g16y.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5lgu8admb9gkmc8g16y.jpeg" alt="Self building a GraphRAG stack from scratch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes
&lt;/h3&gt;

&lt;p&gt;Kubernetes is used for container orchestration. But Kubernetes doesn’t know anything about your jobs. It manages containers, not computations. It won’t schedule your tasks, track dependencies, or handle fan-out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Celery + Redis
&lt;/h3&gt;

&lt;p&gt;Celery and Redis are typically used for task queuing. Note: queuing, not parallel execution. Celery distributes tasks to workers, but it is fundamentally a message broker with worker processes attached. It doesn’t understand data locality, can’t optimize task placement, and treats every task as independent. When you need real parallelism (map-reduce over ten thousand chunks, aggregating partial results, handling stragglers), Celery only gets you partway there. For the rest, you end up writing glue code or reaching for Spark.&lt;/p&gt;

&lt;h3&gt;
  
  
  Spark
&lt;/h3&gt;

&lt;p&gt;Spark is brought in for actual parallel compute. Now you are running a third system. Spark knows how to partition data, schedule parallel tasks, and aggregate results. But Spark wants to own the entire pipeline. Mixing Spark jobs with Celery tasks means shuffling data between systems, managing two job lifecycles, and debugging failures that span both.&lt;/p&gt;

&lt;h3&gt;
  
  
  Postgres
&lt;/h3&gt;

&lt;p&gt;Postgres is used for job metadata and durability. This is the state that workflow engines like Airflow and Temporal manage, except now you are building it yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  The glue code
&lt;/h3&gt;

&lt;p&gt;You have a container orchestrator that doesn’t understand jobs, a task queue that doesn’t understand parallelism, and a compute engine that doesn’t integrate cleanly with either. You end up writing hundreds of lines to bridge these systems, and every bridge is a place where failures hide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz4kr4vv44kc2p3kfy41.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz4kr4vv44kc2p3kfy41.jpeg" alt="Setup time and maintainence for different components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And this assumes you get it right the first time. You won't.&lt;/p&gt;

&lt;p&gt;Kubernetes was built for orchestrating long-running microservices, not bursty batch jobs. The Cluster Autoscaler checks for unschedulable pods every 10 seconds, then provisions nodes that take 30-60 seconds to come online. For a GraphRAG pipeline that needs to fan out to 500 workers immediately, that's minutes of latency before work even starts. The autoscaler &lt;a href="https://scaleops.com/blog/kubernetes-cluster-autoscaler-best-practices-limitations-alternatives/" rel="noopener noreferrer"&gt;prioritizes stability over speed&lt;/a&gt;, a reasonable tradeoff for web services, but painful for batch processing.&lt;/p&gt;

&lt;p&gt;This is why most GraphRAG implementations stay as notebooks. The infrastructure tax is too high.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Different Approach: Object-Store-Native Compute
&lt;/h2&gt;

&lt;p&gt;For the past two years, we've been quietly building a new serverless compute stack for AI workloads at &lt;a href="https://tensorlake.ai" rel="noopener noreferrer"&gt;Tensorlake&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It powers our Document Ingestion API, which processes millions of documents every month across a heterogeneous fleet of machines, fully distributed and fully managed. Document processing was our testbed: OCR, layout detection, table extraction, entity recognition. Every document touches multiple models, multiple machines, multiple failure modes. If the infrastructure couldn't handle that, it couldn't handle anything.&lt;/p&gt;

&lt;p&gt;But the compute stack itself is general purpose. It replaces the entire Kubernetes + Celery + Spark + Postgres stack with a single abstraction:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write your workflow as if it runs on a single machine. In production, it gets transparently distributed across CPUs and GPUs, and scales to whatever the workload demands.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No queues to configure. No job schedulers to manage. No Spark clusters to provision. No glue code bridging systems that weren't designed to work together.&lt;/p&gt;

&lt;p&gt;The key insight: use S3 as the backbone for durable execution instead of databases. AI workloads deal in unstructured data—documents, images, embeddings, model outputs. This data already lives in object storage. By building the execution engine around S3 rather than Postgres or Cassandra, we eliminated an entire class of serialization problems and made checkpointing nearly free.&lt;/p&gt;
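&lt;p&gt;One way to picture why checkpointing becomes nearly free: if a step’s output is addressed by the function name plus a hash of its inputs, a retry can look up the earlier result directly. This is a hypothetical illustration of the idea, not Tensorlake’s actual key scheme:&lt;/p&gt;

```python
import hashlib
import json

def checkpoint_key(fn_name, inputs):
    # Same function + same inputs always yields the same key, so a
    # retried step finds its previous output without re-running.
    payload = json.dumps(inputs, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    return "runs/" + fn_name + "/" + digest + ".json"

key = checkpoint_key("embed", {"doc": "report.pdf", "chunk": 7})
```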




&lt;h2&gt;
  
  
  GraphRAG on Tensorlake
&lt;/h2&gt;

&lt;p&gt;Each step runs as an isolated function with its own compute requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-level functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywamcjnbtqavia3kkjpr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywamcjnbtqavia3kkjpr.jpeg" alt="Step level functions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72mal0bi8wshxvfgrw1s.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72mal0bi8wshxvfgrw1s.jpeg" alt="Step level functions - 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb0vxzwwlqw6qn682msc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb0vxzwwlqw6qn682msc.jpeg" alt="Step level functions - 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The magic is in &lt;code&gt;.map()&lt;/code&gt;. Fan out to thousands of workers with one line:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz152oddcbtwefyq5h2x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz152oddcbtwefyq5h2x.jpeg" alt="Step level functions - 4"&gt;&lt;/a&gt;&lt;/p&gt;
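&lt;p&gt;The actual Tensorlake code is in the screenshot above. As a rough stdlib analogy only (not the Tensorlake API), a map-style fan-out turns each chunk into an independent task and hands the aggregated results to the next step:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def embed_chunk(chunk):
    # Placeholder for per-chunk work (embedding, concept extraction, ...).
    return {"chunk": chunk, "vector": [float(len(chunk))]}

chunks = ["chunk-" + str(i) for i in range(1000)]

# One logical line of fan-out; a distributed runtime would spread these
# tasks across machines instead of local threads.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(embed_chunk, chunks))

# The downstream step receives all partial results, in input order.
```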

&lt;p&gt;&lt;strong&gt;Execution Flow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs46iqbsfwl7y0yf49wbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs46iqbsfwl7y0yf49wbg.png" alt="Execution Flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a function fails, Tensorlake doesn't re-execute successful steps - it reads the checkpointed output from S3 and continues. If the pipeline dies at chunk 847, the retry resumes from the last checkpoint, not from zero.&lt;/p&gt;

&lt;p&gt;This isn't a batch job you run manually; it's a live HTTP endpoint. Deploy once, and it's available on-demand whenever someone wants to add a document to the knowledge graph:&lt;/p&gt;

&lt;p&gt;No documents in the queue? The system scales to zero. A thousand PDFs arrive at once? Tensorlake spins up workers to handle them in parallel. You're not paying for idle clusters or babysitting Spark jobs. The infrastructure responds to the workload, not the other way around.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdcwt4qtqm81uxzivop9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdcwt4qtqm81uxzivop9.jpeg" alt="Ingest data with curl command"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Results
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zr05cwk7qtgk3lvj8uj.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zr05cwk7qtgk3lvj8uj.jpeg" alt="Results of data ingestion"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8i2q6cv4hfp1j5jkd6pv.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8i2q6cv4hfp1j5jkd6pv.jpeg" alt="Results of data ingestion - 2"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/tensorlakeai/examples/tree/main/graph-rag-pipeline

&lt;span class="nb"&gt;cd &lt;/span&gt;graph-rag-pipeline

tensorlake secrets &lt;span class="nb"&gt;set &lt;/span&gt;OPENAI_API_KEY &amp;lt;your-key&amp;gt;
tensorlake secrets &lt;span class="nb"&gt;set &lt;/span&gt;NEO4J_URI neo4j+s://xxx.databases.neo4j.io
tensorlake secrets &lt;span class="nb"&gt;set &lt;/span&gt;NEO4J_PASSWORD &amp;lt;password&amp;gt;

tensorlake deploy app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For a small proof of concept, a notebook is fine. For production GraphRAG with retries, scale, and real users, you need infrastructure that doesn’t become the bottleneck.&lt;/p&gt;

&lt;p&gt;Built with &lt;a href="https://tensorlake.ai" rel="noopener noreferrer"&gt;Tensorlake&lt;/a&gt; and &lt;a href="https://neo4j.com" rel="noopener noreferrer"&gt;Neo4j&lt;/a&gt;. See the &lt;a href="https://arxiv.org/abs/2404.16130" rel="noopener noreferrer"&gt;GraphRAG paper&lt;/a&gt; for the original algorithm.&lt;/p&gt;






</description>
      <category>ai</category>
      <category>rag</category>
      <category>programming</category>
    </item>
    <item>
      <title>🔥 Claude Opus 4.5 vs GPT 5.2 High vs Gemini 3 Pro: Production Coding Test ✅</title>
      <dc:creator>Shrijal Acharya</dc:creator>
      <pubDate>Sun, 18 Jan 2026 12:41:12 +0000</pubDate>
      <link>https://forem.com/tensorlake/claude-opus-45-vs-gpt-52-high-vs-gemini-3-pro-production-coding-test-25of</link>
      <guid>https://forem.com/tensorlake/claude-opus-45-vs-gpt-52-high-vs-gemini-3-pro-production-coding-test-25of</guid>
      <description>&lt;p&gt;Okay, so right now the &lt;strong&gt;WebDev&lt;/strong&gt; leaderboard on LMArena is basically owned by the big three: Claude Opus 4.5 from &lt;strong&gt;Anthropic&lt;/strong&gt;, GPT-5.2-codex (high) from &lt;strong&gt;OpenAI&lt;/strong&gt;, and finally everybody's favorite, Gemini 3 Pro from &lt;strong&gt;Google&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltml19xef278wmy3f5y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltml19xef278wmy3f5y1.png" alt="LLMDev models ranking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, I grabbed these three and put them into the same existing project (over 8K stars and 50K+ LOC) and asked them to build a couple of real features like a normal dev would.&lt;/p&gt;

&lt;p&gt;Same repo. Same prompts. Same constraints.&lt;/p&gt;

&lt;p&gt;For each task, I took the best result out of three runs per model to keep things fair.&lt;/p&gt;

&lt;p&gt;Then I compared what they actually did: code quality, how much hand-holding they needed, and whether the feature even worked in the end.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;NOTE:&lt;/strong&gt; Don't take the results of this test as a hard rule. This is just a small set of real-world coding tasks that shows how each model did for me in that exact setup, and gives you a sense of how the top three models differ on the same tasks.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;If you want a quick take, here’s how the three models performed in our tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Opus 4.5&lt;/strong&gt; was the most consistent overall. It shipped working results for both tasks, and the UI polish was the best of the three. The main downside is cost. If they find a way to achieve this performance while reducing cost, it will actually be over for most other models.&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GPT-5.2-codex (high)&lt;/strong&gt; was one of the best. But it's noticeably slower due to the higher reasoning effort. When it hit, the code quality and structure were great, but it needed more patience than the other two in this repo.&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 3 Pro&lt;/strong&gt; was the most efficient. Both tasks worked, but the output often felt like the minimum viable version, especially on the analytics dashboard.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 If you want the safest pick for real “ship a feature in a big repo” work, Opus 4.5 felt the most reliable in my runs. If you care about speed and cost and you’re okay polishing UI yourself, Gemini 3 Pro is a solid bet.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Test Workflow
&lt;/h2&gt;

&lt;p&gt;For the test, we will use the following CLI coding agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Opus 4.5:&lt;/strong&gt; Claude Code (Anthropic’s terminal-based agentic coding tool)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 3 Pro:&lt;/strong&gt; Gemini CLI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT-5.2 High:&lt;/strong&gt; Codex CLI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the repo used for the entire test: &lt;a href="https://github.com/iib0011/omni-tools" rel="noopener noreferrer"&gt;iib0011/omni-tools&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will check the models on two different tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Task 1:&lt;/strong&gt; Add a global Action Palette (Ctrl + K)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each model is asked to create a global action menu that opens with a keyboard shortcut. This feature expands on the current search by adding actions, global state, and keyboard navigation. This task checks how well the model understands current UX patterns and avoids repetition without breaking what's already in place.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Task 2:&lt;/strong&gt; Tool Usage Analytics + Insights Dashboard&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each model had to add real usage tracking across the app, persist it locally, and then build an analytics dashboard that shows things like the most used tools, recent activity, and basic filters.&lt;/p&gt;

&lt;p&gt;We’ll compare code quality, token usage, cost, and time to complete the build.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;NOTE:&lt;/strong&gt; I will share the source code changes for each task by each model in a &lt;code&gt;.patch&lt;/code&gt; file. This way, you can easily view them on your local system by cloning the repository and applying the patch file with &lt;code&gt;git apply &amp;lt;patch_file_name&amp;gt;&lt;/code&gt;. This makes sharing changes easier.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Real-world Coding Tests
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Test 1: Add a global Action Palette (Ctrl + K)
&lt;/h3&gt;

&lt;p&gt;The task is simple: all models start from the same base commit and then follow the same prompt to build the requested feature.&lt;/p&gt;

&lt;p&gt;And as mentioned, I will evaluate each model's best result out of three runs.&lt;/p&gt;

&lt;p&gt;Let's start off the test with something interesting:&lt;/p&gt;

&lt;p&gt;Here's the prompt used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;This&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;project&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;already&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;has&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;search&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;input&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;home&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;page&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;lets&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;users&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;find&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;tools.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;I&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;want&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;add&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;an&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;improved,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;global&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span 
class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;idea&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;works&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;an&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;**Action&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Palette**,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;similar&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;what&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;see&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;editors&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;like&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;VS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Code.&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;**What&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;build**&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Pressing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;**Ctrl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;K**&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(or&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Cmd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;K&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;macOS)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;open&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;centered&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;action&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;palette&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;overlay&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;anywhere&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;app.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;The&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;palette&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;support:&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Searching&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;navigating&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;tools&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(reuse&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;existing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;tool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;metadata)&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Executing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;actions,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;such&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;as:&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Toggle&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;dark&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;mode&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Switch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;language&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Toggle&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;filter&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(General&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Developer)&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Navigate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Home&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Bookmarks&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Clear&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;recently&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;used&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;tools&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Fully&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;keyboard-driven&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;experience:&lt;/span&gt;&lt;span class="w"&gt;

  &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;filter&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Arrow&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;keys&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;navigate&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Enter&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;execute&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Escape&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;close&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;**Notes**&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;This&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;not&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;replace&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;existing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;home&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;page&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;search.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Think&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;more&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;powerful,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;global&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;combines&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;navigation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;actions.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;The&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;implementation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;follow&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;existing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;patterns,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;styling,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;state&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;management&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;used&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;codebase.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
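&lt;p&gt;Before the results, it's worth noting that the spec above boils down to a small filter-plus-keyboard state machine. Here's a minimal, framework-agnostic sketch in plain JavaScript (all names are mine, not taken from any model's output):&lt;/p&gt;

```javascript
// Minimal command-palette core: filter a list of actions and move a
// highlighted index with the keyboard. Escape handling would simply
// unmount the overlay, so it is omitted here.
function createPalette(actions) {
  let query = "";
  let selected = 0;

  const results = () =>
    actions.filter((a) => a.label.toLowerCase().includes(query.toLowerCase()));

  return {
    type(text) {  // "Type to filter"
      query = text;
      selected = 0;
    },
    move(delta) { // "Arrow keys to navigate" (wraps around)
      const n = results().length;
      if (n > 0) selected = (selected + delta + n) % n;
    },
    execute() {   // "Enter to execute"
      const item = results()[selected];
      if (item) item.run();
      return item ? item.label : null;
    },
    current: () => results()[selected],
  };
}
```

&lt;p&gt;In a real app, the query and selection would live in component state, with the overlay mounted behind a global Ctrl/Cmd + K listener.&lt;/p&gt;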

&lt;h4&gt;
  
  
  GPT-5.2-Codex (high)
&lt;/h4&gt;

&lt;p&gt;GPT-5.2 handled this surprisingly well. The implementation was solid end to end, and it basically one-shotted the entire feature set, including i18n support, without needing multiple correction passes.&lt;/p&gt;

&lt;p&gt;That said, it did take a bit longer than some other models (~20 minutes), which is expected since reasoning was explicitly set to &lt;strong&gt;high&lt;/strong&gt;. You can clearly see the model spending more time thinking through architecture, naming, and edge cases rather than rushing to output code. The trade-off felt worth it here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r0rf1kkm4x2nlqpmnyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r0rf1kkm4x2nlqpmnyg.png" alt="gpt 5.2 high model timing to finish a task"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Token usage was noticeably higher with reasoning set to high, but the quality of the output code reflected it.&lt;/p&gt;

&lt;p&gt;Here's the demo:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/QCXB5bv4-L4"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://gist.github.com/shricodev/6a8eea20c34d31429b254c82079a1972" rel="noopener noreferrer"&gt;GPT-5.2 High Code&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; ~$0.9–1.0&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration:&lt;/strong&gt; ~20 minutes (API time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; +540 lines, minimal removals&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token Usage:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Total:&lt;/strong&gt; ~203k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input:&lt;/strong&gt; ~140k (+ cached context)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output:&lt;/strong&gt; ~64k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning tokens:&lt;/strong&gt; ~47k&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;NOTE:&lt;/strong&gt; I ran the exact same prompt with the same model using the default (medium) reasoning level. The difference was honestly massive. With reasoning set to high, the quality of the code, structure, and pretty much everything jumps by miles. It’s not even a fair comparison.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg35u0w8yip2r8myxqlf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg35u0w8yip2r8myxqlf.png" alt="gpt 5.2 model token usage to finish a task"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Claude Opus 4.5
&lt;/h4&gt;

&lt;p&gt;Claude went all in and prepared a ton of different strategies. It ran into build issues early on, but kept re-running the build until it had fixed every build and lint error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feib2ks93r37revcoqg3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feib2ks93r37revcoqg3e.png" alt="claude opus 4.5 build error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The entire run took about &lt;strong&gt;7 minutes 50 seconds&lt;/strong&gt;, the fastest of the three models in this test. All the features worked as asked, and the UI looked great, exactly how I expected.&lt;/p&gt;

&lt;p&gt;Here's the demo:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/Gki_kO6o4Qw"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://gist.github.com/shricodev/5403f82ea5cf5991c14bc43ce3f47476" rel="noopener noreferrer"&gt;Claude Opus 4.5 Code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be honest, this exceeded my expectations; even the i18n texts are added and displayed in the UI just as expected. Absolute cinema!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; $0.94&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration:&lt;/strong&gt; 7 min 50 sec (API time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; +540 lines, -9 lines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7junvt7jb8wulyvnwnce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7junvt7jb8wulyvnwnce.png" alt="claude opus 4.5 token usage to finish a task"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Gemini 3 Pro
&lt;/h4&gt;

&lt;p&gt;Gemini 3 got it working, but it's clearly not on the same level as GPT-5.2 High or Claude Opus 4.5. The UI it built is fine and totally usable, but it feels a bit barebones, and you don't get many choices in the palette compared to the other two.&lt;/p&gt;

&lt;p&gt;One clear miss is that language switching does not show up inside the action palette at all, which makes the i18n support feel incomplete even though translations technically exist.&lt;/p&gt;

&lt;p&gt;Here's the demo:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/2jxnkna5OmA"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://gist.github.com/shricodev/07d46534f0f3e2523ddc2f3e4c814795" rel="noopener noreferrer"&gt;Gemini 3 Pro Code&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; Low (helped significantly by cache reads)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration:&lt;/strong&gt; 10 minutes 49 seconds (API time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; +428 lines, -65 lines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token Usage:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input:&lt;/strong&gt; ~79k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache Reads:&lt;/strong&gt; ~536k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output:&lt;/strong&gt; ~10.7k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Savings:&lt;/strong&gt; ~87% of input tokens served from cache&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzef5ujwyq1f5o19e7dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzef5ujwyq1f5o19e7dg.png" alt="gemini 3 pro token usage to finish a task"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Overall, Gemini 3 lands in a very clear third place here. It works, the UI looks fine, and nothing is completely broken, but compared to the depth, completeness, and polish of GPT-5.2 High and Claude Opus 4.5, it feels behind.&lt;/p&gt;
&lt;h3&gt;
  
  
  Test 2: Tool Usage Analytics + Insights Dashboard
&lt;/h3&gt;

&lt;p&gt;This test is a step up from the action palette.&lt;/p&gt;

&lt;p&gt;You can find the prompt I've used here: &lt;a href="https://gist.github.com/shricodev/637b453d206554b78eabd38fa159084d" rel="noopener noreferrer"&gt;Prompt&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  GPT-5.2-Codex (high)
&lt;/h4&gt;

&lt;p&gt;GPT-5.2 absolutely nailed this one.&lt;/p&gt;

&lt;p&gt;The final result turned out amazing. Tool usage tracking works exactly as expected, data persists correctly, and the dashboard feels like a real product feature. Most used tools, recent usage, filters, everything just works.&lt;/p&gt;
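&lt;p&gt;For context, tracking like this usually reduces to appending timestamped events and aggregating them on read. A rough sketch of the idea in plain JavaScript (hypothetical names; &lt;code&gt;storage&lt;/code&gt; stands in for something like &lt;code&gt;localStorage&lt;/code&gt;, and this is not the code GPT-5.2 actually wrote):&lt;/p&gt;

```javascript
// Record tool-usage events and derive a "most used" view from them.
// `storage` is anything with getItem/setItem (e.g. window.localStorage).
const KEY = "tool-usage-events";

function recordUsage(storage, toolId, now = Date.now()) {
  const events = JSON.parse(storage.getItem(KEY) || "[]");
  events.push({ toolId, at: now });
  storage.setItem(KEY, JSON.stringify(events)); // persists across reloads
}

function mostUsed(storage, limit = 5) {
  const events = JSON.parse(storage.getItem(KEY) || "[]");
  const counts = {};
  for (const e of events) counts[e.toolId] = (counts[e.toolId] || 0) + 1;
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1]) // most-used first
    .slice(0, limit)
    .map(([toolId, count]) => ({ toolId, count }));
}
```

&lt;p&gt;A "recent usage" view falls out of the same event log by sorting on the timestamp instead of counting.&lt;/p&gt;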

&lt;p&gt;One really nice touch is that it also wired analytics-related actions into the Action Palette from Test 1.&lt;/p&gt;

&lt;p&gt;It did take a bit longer than the first test, around 26 minutes, but again, that’s the trade-off with high reasoning. You can tell the model spent time thinking through data modeling, reuse, and avoiding duplicated logic. Totally worth it here.&lt;/p&gt;

&lt;p&gt;Here’s the demo:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/8RUeWl_09nY"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://gist.github.com/shricodev/b89de0278911b289d941b8129df69d66" rel="noopener noreferrer"&gt;GPT-5.2 High Code&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; ~$1.1–1.2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration:&lt;/strong&gt; ~26 minutes (API time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; Large multi-file update, cleanly structured&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token Usage:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Total:&lt;/strong&gt; ~236k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input:&lt;/strong&gt; ~162k (+ heavy cached context)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output:&lt;/strong&gt; ~75k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning tokens:&lt;/strong&gt; ~57k&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GPT-5.2 High continues to be slow but extremely powerful, and for a task like this, that’s a very good trade.&lt;/p&gt;
&lt;h4&gt;
  
  
  Claude Opus 4.5
&lt;/h4&gt;

&lt;p&gt;Claude Opus 4.5 did great here as well.&lt;/p&gt;

&lt;p&gt;The final implementation works end to end, and honestly, from a pure UI and feature standpoint, it’s hard to tell the difference between this and GPT-5.2 High. The dashboard looks clean, the data makes sense, and the filters work as expected.&lt;/p&gt;

&lt;p&gt;Here’s the demo:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/-npHfTxicF4"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://gist.github.com/shricodev/934c3841101c073b50a5dad18746d78d" rel="noopener noreferrer"&gt;Claude Opus 4.5 Code&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; $1.78&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration:&lt;/strong&gt; ~8 minutes (API time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; +1,279 lines, -17 lines&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Gemini 3 Pro
&lt;/h4&gt;

&lt;p&gt;Gemini 3 Pro gets the job done, but it takes a noticeably more minimal approach than GPT-5.2 High and Claude Opus 4.5. The UI is functional but plain, and the dashboard lacks the polish and depth you get from the other two models.&lt;/p&gt;

&lt;p&gt;Unlike the other two models, it also didn't add a button to open the analytics view directly from the action palette.&lt;/p&gt;

&lt;p&gt;Here’s the demo:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/JuQjYnY-XGE"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;You can find the code it generated here: &lt;a href="https://gist.github.com/shricodev/cd2ceb9d4a6a1f53abd274cd1efc89ba" rel="noopener noreferrer"&gt;Gemini 3 Pro Code&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; Low, with heavy cache utilization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration:&lt;/strong&gt; ~5 minutes (API time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Changes:&lt;/strong&gt; +351 lines, -3 lines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token Usage:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input:&lt;/strong&gt; ~67k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output:&lt;/strong&gt; ~7.1k&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Savings:&lt;/strong&gt; ~85%+ input tokens served from cache&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, Gemini 3 Pro remains efficient and reliable, but in a comparison like this, efficiency alone is not enough. 🤷‍♂️&lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;From this test, at least, I can conclude that these models are now pretty much able to one-shot reasonably complex work.&lt;/p&gt;

&lt;p&gt;Still, there have been times when the models mess up so badly that fixing the problems one by one would take me nearly as long as building the feature from scratch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxv5kpey20fduyyqrh3e.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxv5kpey20fduyyqrh3e.gif" alt="dog sideeye gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If I compare the results across models, Opus 4.5 definitely takes the crown. But I still don’t think we’re anywhere close to relying on it for real, big production projects. The recent improvements are honestly insane, but the results still don’t fully back them up.&lt;/p&gt;

&lt;p&gt;For now, I think these models are great for refactoring, planning, and helping you move faster. But if you solely rely on their generated code, the codebase just won’t hold up long term.&lt;/p&gt;

&lt;p&gt;I don't see any of these recent models as “use it and ship it” for production in a project with millions of lines of code, at least not in the way people hype them up.&lt;/p&gt;

&lt;p&gt;Let me know your thoughts in the comments.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__1127015"&gt;
    &lt;a href="/shricodev" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1127015%2F1c5e48a2-f602-4e7d-8312-3c0322d155c6.jpg" alt="shricodev image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/shricodev"&gt;Shrijal Acharya&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/shricodev"&gt;Full Stack SDE • Open-Source Contributor • Collaborator @Oppia • Mail for collaboration&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





</description>
      <category>webdev</category>
      <category>programming</category>
      <category>javascript</category>
      <category>ai</category>
    </item>
    <item>
      <title>Introducing Agentic Chart Extraction</title>
      <dc:creator>Diptanu Gon Choudhury</dc:creator>
      <pubDate>Wed, 14 Jan 2026 11:31:43 +0000</pubDate>
      <link>https://forem.com/tensorlake/agentic-chart-extraction-1hji</link>
      <guid>https://forem.com/tensorlake/agentic-chart-extraction-1hji</guid>
      <description>&lt;h2&gt;
  
  
  Unlocking Visual Data: Introducing Agentic Chart Extraction
&lt;/h2&gt;

&lt;p&gt;At Tensorlake, we're excited to announce a powerful new capability in our document parsing pipeline: &lt;strong&gt;Agentic Chart Extraction&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Agentic Chart Extraction uses an agentic approach to transform static images into dynamic, usable data, unlocking a new layer of value from your documents. Whether you are processing financial reports, scientific papers, or business presentations, you can now access the data behind the visuals.&lt;/p&gt;

&lt;p&gt;In the example below, the left side shows a scatter plot with a large number of points, and the right side shows the same data replotted after being processed by our Agentic Chart Extraction. This is a challenging case because of the large number of uncorrelated points. Our system generates a structured output that matches the original chart, and we can use this output to replot the chart.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqix8siyn3sk4rwdyfi61.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqix8siyn3sk4rwdyfi61.webp" alt="Chart Extraction: Example 1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Capabilities
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chart type detection:&lt;/strong&gt; High accuracy across common chart types (line, bar, scatter, pie).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data series extraction:&lt;/strong&gt; Returns structured series (category/value pairs, coordinates where available) ready for plotting or analytics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robustness:&lt;/strong&gt; Handles multi-series charts, varying axis scales, and dense point clouds; retains good fidelity even on 50+ point series.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deliverables:&lt;/strong&gt; JSON outputs per-chart, evaluation reports, and plottable visualizations.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Supported Output Schemas
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pie chart:&lt;/strong&gt; slice-centric schema with &lt;code&gt;label&lt;/code&gt;, &lt;code&gt;value&lt;/code&gt; and optional &lt;code&gt;percentage&lt;/code&gt;, &lt;code&gt;colors&lt;/code&gt;, and display flags (good for donut/pie summarization use-cases).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bar chart:&lt;/strong&gt; supports &lt;code&gt;vertical&lt;/code&gt;/&lt;code&gt;horizontal&lt;/code&gt;, named &lt;code&gt;series&lt;/code&gt; for grouped/stacked bars, &lt;code&gt;x_axis.categories&lt;/code&gt;, optional axis bounds/formatting, and per-bar display flags — ideal for categorical comparisons and time-binned revenue/metrics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Line chart:&lt;/strong&gt; x/y axis definitions, explicit &lt;code&gt;values&lt;/code&gt; for x-axis (numeric or categorical), multiple &lt;code&gt;series&lt;/code&gt; with styling (&lt;code&gt;color&lt;/code&gt;, &lt;code&gt;line_style&lt;/code&gt;, &lt;code&gt;marker&lt;/code&gt;), and plotting hints (&lt;code&gt;legend_position&lt;/code&gt;, &lt;code&gt;grid&lt;/code&gt;) — suited for trends and dense time-series.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scatter plot:&lt;/strong&gt; per-series &lt;code&gt;x_data&lt;/code&gt;/&lt;code&gt;y_data&lt;/code&gt; arrays, marker styling (&lt;code&gt;size&lt;/code&gt;, &lt;code&gt;alpha&lt;/code&gt;, &lt;code&gt;edge_color&lt;/code&gt;) and axis bounds — used for point-wise analyses and correlation extraction.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Schema-Driven Outputs — Directly Plottable
&lt;/h2&gt;

&lt;p&gt;All extracted predictions conform to the predefined JSON schemas (pie/bar/line/scatter). That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consistent ingestion:&lt;/strong&gt; you can build a single parser that consumes every chart JSON produced by our system — no per-chart ad-hoc parsing required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct re-plotting:&lt;/strong&gt; each JSON contains numeric arrays plus rendering hints (axis labels, series names, colors, markers). The JSON can be fed directly into plotting code or BI tools to regenerate visuals.&lt;/li&gt;
&lt;/ul&gt;
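&lt;p&gt;Because every payload follows a fixed schema, the "single parser" can be a small adapter per chart type. Here's a sketch for the bar-chart case (the top-level field names follow the bar-chart JSON example later in this post; the per-series field names are assumptions for illustration):&lt;/p&gt;

```javascript
// Convert an extracted bar_chart JSON into the { labels, datasets }
// shape most plotting/BI libraries expect.
function barChartToPlotData(chart) {
  if (chart.type !== "bar_chart") throw new Error("expected a bar_chart");
  return {
    title: chart.title,
    labels: chart.x_axis.categories,
    datasets: (chart.series || []).map((s) => ({
      name: s.name,     // assumed series field names
      values: s.values,
    })),
  };
}
```

&lt;p&gt;Line and scatter payloads would get their own small adapters in the same style, keyed off the &lt;code&gt;type&lt;/code&gt; field.&lt;/p&gt;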




&lt;h2&gt;
  
  
  Availability
&lt;/h2&gt;

&lt;p&gt;Chart extraction is currently available in all &lt;code&gt;OCR models&lt;/code&gt;. As shown in the example below, charts are extracted and structured in a consistent JSON format.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl03639ho8ujdxr6ybrvg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl03639ho8ujdxr6ybrvg.webp" alt="Chart Extraction: Example 2" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Additional Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Bar Chart Example
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Original:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6amhs7lxdg3l2wzpra3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6amhs7lxdg3l2wzpra3.webp" alt="Chart Extraction: Example 3 (Original)" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plotted Prediction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz03y1bds4iofpvaxt1rj.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz03y1bds4iofpvaxt1rj.webp" alt="Chart Extraction: Example 3 (Plotted Prediction)" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Predicted JSON:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bar_chart"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Annual Energy Consumption by Source (TWh)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"orientation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"vertical"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"x_axis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REGION"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"categories"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"North America"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Europe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Asia"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Africa"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"y_axis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TWh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"min"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"max"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"format"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"number"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"series"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Solar"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;150&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#FFD700"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"show_values"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Wind"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;220&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#00BFFF"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"show_values"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Hydro"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;250&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#32CD32"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"show_values"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Nuclear"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;450&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;280&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#FF4500"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"show_values"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Fossil Fuels"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;350&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#B0B0B0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"show_values"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"bar_style"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"grouped"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"grid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
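Before plotting a predicted spec like the one above, it helps to sanity-check it: every series should have one value per category, and every value should fall inside the declared y-axis range. The sketch below is a minimal validator, assuming the JSON schema shown in these examples (the schema and the `validate_bar_chart` helper name are our own, not part of any official API).

```python
import json

# Minimal sanity check for a predicted bar-chart spec
# (schema assumed from the example above; not an official format).
def validate_bar_chart(spec: dict) -> list:
    """Return a list of problems; an empty list means the spec is plottable."""
    problems = []
    n = len(spec.get("x_axis", {}).get("categories", []))
    if n == 0:
        problems.append("x_axis.categories is missing or empty")
    lo, hi = spec["y_axis"]["min"], spec["y_axis"]["max"]
    for s in spec.get("series", []):
        # Each series must supply exactly one value per x-axis category.
        if len(s["data"]) != n:
            problems.append(
                f'series {s["name"]!r}: {len(s["data"])} values for {n} categories'
            )
        # Values outside the declared axis range would clip when plotted.
        if not all(lo <= v <= hi for v in s["data"]):
            problems.append(
                f'series {s["name"]!r}: value outside y-axis range [{lo}, {hi}]'
            )
    return problems

spec = json.loads("""{
  "type": "bar_chart",
  "x_axis": {"categories": ["North America", "Europe", "Asia", "Africa"]},
  "y_axis": {"min": 0, "max": 1000},
  "series": [
    {"name": "Solar", "data": [120, 150, 200, 80]},
    {"name": "Fossil Fuels", "data": [500, 400, 800, 350]}
  ]
}""")

print(validate_bar_chart(spec))  # → []
```

A check like this catches the most common extraction failures (dropped bars, hallucinated categories) before any pixels are drawn.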



&lt;h3&gt;
  
  
  Scatter Plot Example
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Original:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx14pzai1kazf0r9r2ae3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx14pzai1kazf0r9r2ae3.webp" alt="Chart Extraction: Example 4 (Original)" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plotted Prediction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqnq0w9ptoib8rpki68i.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqnq0w9ptoib8rpki68i.webp" alt="Chart Extraction: Example 4 (Plotted Prediction)" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Predicted JSON:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"scatter_plot"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Urban vs Rural: Income vs Spending"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"x_axis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Annual Income ($k)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"min"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"max"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;150&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"scale"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"linear"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"y_axis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Annual Spending ($k)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"min"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"max"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"scale"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"linear"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"series"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Urban"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"x_data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;33&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;56&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;67&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;69&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;72&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;75&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;82&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;91&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;91&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;91&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;93&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;94&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;96&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;96&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;110&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;111&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;125&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;130&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;134&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"y_data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;59&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;67&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;69&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;93&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;79&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#5da5da"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"marker"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"o"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"alpha"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.75&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Rural"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"x_data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;29&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;61&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;63&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;68&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;77&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;82&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;91&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;98&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;105&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;106&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;106&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;109&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;115&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;119&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;121&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;121&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;125&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;129&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;131&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;133&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;134&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"y_data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;33&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;29&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;29&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;63&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;63&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;62&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;78&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;77&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;78&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;91&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;89&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;81&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;78&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#faa43a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"marker"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"alpha"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.75&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"legend_position"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"upper right"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"grid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
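
&lt;p&gt;Before replotting a predicted chart JSON like the ones in this post, it helps to sanity-check that the prediction is internally consistent. This is a minimal sketch, not part of any SDK: &lt;code&gt;validate_chart_json&lt;/code&gt; is a hypothetical helper, and the key names simply mirror the predicted JSON shape shown here (&lt;code&gt;x_axis.values&lt;/code&gt;, &lt;code&gt;y_axis.min&lt;/code&gt;/&lt;code&gt;max&lt;/code&gt;, &lt;code&gt;series[].data&lt;/code&gt;).&lt;/p&gt;

```python
import json


def validate_chart_json(raw: str) -> dict:
    """Parse a predicted chart JSON and check internal consistency:
    every series must have one data point per x-axis value, and the
    declared y-axis bounds must actually contain the data."""
    chart = json.loads(raw)
    n = len(chart["x_axis"]["values"])
    lo, hi = chart["y_axis"]["min"], chart["y_axis"]["max"]
    for series in chart["series"]:
        name = series["name"]
        if len(series["data"]) != n:
            raise ValueError(f"{name}: {len(series['data'])} points for {n} x values")
        if not all(lo <= v <= hi for v in series["data"]):
            raise ValueError(f"{name}: data outside y-axis range [{lo}, {hi}]")
    return chart


# A tiny slice of the predicted line-chart JSON, just to exercise the checks.
predicted = json.dumps({
    "type": "line_chart",
    "x_axis": {"values": [0, 2, 4], "scale": "linear"},
    "y_axis": {"label": "Value", "min": 15, "max": 90, "scale": "linear"},
    "series": [{"name": "Room A (Stable)", "data": [56, 54, 48]}],
})
chart = validate_chart_json(predicted)
print(chart["type"])  # line_chart
```

&lt;p&gt;Once a prediction passes these checks, feeding the series into any plotting library yields the "Plotted Prediction" images used for the side-by-side comparisons below.&lt;/p&gt;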



&lt;h3&gt;
  
  
  Linear Chart Example
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Original:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7afxwoeufybqti08gio.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7afxwoeufybqti08gio.webp" alt="Chart Extraction: Example 5 (Original)" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plotted Prediction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gav0dqoi5lfwoi6iz2z.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gav0dqoi5lfwoi6iz2z.webp" alt="Chart Extraction: Example 5 (Plotted Prediction)" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Predicted JSON:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"line_chart"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Uncorrelated Remote Sensor Readings"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"x_axis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Observation Minute"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"values"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;54&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;56&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;62&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;68&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;72&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;74&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="mi"&gt;76&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;78&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;82&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;84&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;86&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;88&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;92&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;94&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;96&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;98&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;102&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;104&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;106&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;108&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="mi"&gt;110&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;112&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;114&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;116&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;118&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;122&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;124&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;126&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;130&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;132&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;134&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;136&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;138&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="mi"&gt;140&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;142&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;144&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;146&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;148&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;150&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"scale"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"linear"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"y_axis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"min"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"max"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"scale"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"linear"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"series"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Room A (Stable)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;56&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;54&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;54&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;62&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;56&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;61&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;54&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;56&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#F472B6"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"line_style"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Room B (Cooling)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;81&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;76&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;79&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;85&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;76&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;79&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;75&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;79&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;67&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;71&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;68&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;63&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;69&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;72&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;63&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;69&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;61&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;67&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;63&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;69&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;61&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;67&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;68&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;68&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;62&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;59&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;56&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;59&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#9CA3AF"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"line_style"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Room C (Cyclic)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;33&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;37&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;29&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;62&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#FDE047"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"line_style"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Outdoor (Variable)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;37&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;52&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#9CD9D3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"line_style"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"legend_position"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"upper right"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"grid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
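Because the extracted chart comes back as plain JSON, you can consume the recovered series directly in Python. Below is a minimal sketch that summarizes each series; the payload is trimmed to a few points from the output above, and the top-level "series" key is inferred from the fragment (the full response may nest it differently):

```python
import json
from statistics import mean

# A trimmed stand-in for the chart JSON shown above. The "series" key name
# is an assumption inferred from the fragment; values are taken from the output.
chart_json = """
{
  "series": [
    {"name": "Room C (Cyclic)", "data": [31, 32, 34, 33, 38], "color": "#FDE047", "line_style": "-"},
    {"name": "Outdoor (Variable)", "data": [40, 41, 41, 39, 41], "color": "#9CD9D3", "line_style": "-"}
  ],
  "legend_position": "upper right",
  "grid": true
}
"""

chart = json.loads(chart_json)
for s in chart["series"]:
    # Summarize each recovered data series.
    print(f'{s["name"]}: n={len(s["data"])}, '
          f'min={min(s["data"])}, max={max(s["data"])}, mean={mean(s["data"]):.1f}')
```

The same structure can be handed straight to a plotting library, since color and line style are preserved per series.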






&lt;h2&gt;
  
  
  SDK Usage
&lt;/h2&gt;

&lt;p&gt;Install or update to the latest version of &lt;code&gt;tensorlake&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; tensorlake
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can enable chart extraction in your parse &lt;a href="https://docs.tensorlake.ai/document-ingestion/parsing/read#charts" rel="noopener noreferrer"&gt;request&lt;/a&gt; by passing it as an &lt;code&gt;enrichment option&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorlake.documentai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DocumentAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorlake.documentai.models.options&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;EnrichmentOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;enrichment_options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;EnrichmentOptions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;chart_extraction&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;doc_ai&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DocumentAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;API_KEY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;parse_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc_ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;file_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;file_XXX&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your file ID or URL
&lt;/span&gt;  &lt;span class="n"&gt;enrichment_options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;enrichment_options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
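Since `read` kicks off an asynchronous parse and returns a `parse_id` immediately, you typically poll until the job reaches a terminal state before reading results. A minimal generic polling sketch follows; the terminal status strings and the result-fetching call are assumptions, so check the Tensorlake docs for the exact method and status values:

```python
import time

def poll_until_done(fetch_status, interval=2.0, max_attempts=30):
    """Poll fetch_status() until it reports a terminal state.

    fetch_status is any zero-argument callable returning a status string.
    In practice it would wrap the SDK call that fetches the parse result
    for your parse_id (the exact method name and the "successful"/"failure"
    status values here are assumptions; verify against the Tensorlake docs).
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status in ("successful", "failure"):
            return status
        time.sleep(interval)
    raise TimeoutError("parse did not finish within the polling budget")
```

You would pass something like `lambda: doc_ai.get_parse(parse_id).status` as `fetch_status` (that accessor is hypothetical), tuning `interval` to your document sizes.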






&lt;h2&gt;
  
  
  API Usage
&lt;/h2&gt;

&lt;p&gt;You can enable chart extraction in your parse &lt;a href="https://docs.tensorlake.ai/document-ingestion/parsing/read#charts" rel="noopener noreferrer"&gt;request&lt;/a&gt; by including it as an &lt;code&gt;enrichment option&lt;/code&gt; in the request body.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;POST&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/api/v&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="err"&gt;/parse&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"enrichment_options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"chart_extraction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>rag</category>
      <category>programming</category>
    </item>
    <item>
      <title>Claude Cowork: Architecture, Capabilities, and Usage Overview</title>
      <dc:creator>Arindam Majumder</dc:creator>
      <pubDate>Tue, 13 Jan 2026 20:50:51 +0000</pubDate>
      <link>https://forem.com/tensorlake/claude-cowork-architecture-capabilities-and-usage-overview-1ofn</link>
      <guid>https://forem.com/tensorlake/claude-cowork-architecture-capabilities-and-usage-overview-1ofn</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;TL;DR&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Understand what Cowork is and how it extends Claude Code’s agentic capabilities to non-coding, knowledge-based work.&lt;/li&gt;
&lt;li&gt;Learn how Cowork differs from standard chat-based AI interactions and why file access changes the workflow.&lt;/li&gt;
&lt;li&gt;Get a high-level view of Cowork’s execution model, safety boundaries, and supported use cases.&lt;/li&gt;
&lt;li&gt;Know who Cowork is for, what it can (and cannot) do today, and how to approach it responsibly as a beginner.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Over the past year, large language models have moved beyond simple question–answer interactions toward systems that can plan, act, and execute multi-step tasks. One of the earliest practical examples of this shift was Claude Code, which allowed developers to delegate real work, such as editing files, running commands, and managing projects, to an AI agent. While this capability was initially designed for programming workflows, users quickly began applying it to a much broader range of tasks.  &lt;/p&gt;

&lt;p&gt;

&lt;iframe class="tweet-embed" id="tweet-2010805682434666759-102" src="https://platform.twitter.com/embed/Tweet.html?id=2010805682434666759"&gt;
&lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;This led to the introduction of &lt;a href="https://claude.com/blog/cowork-research-preview" rel="noopener noreferrer"&gt;Claude Cowork&lt;/a&gt;, released as a research preview on January 12, 2026. Cowork brings the same agentic architecture that powers Claude Code into the Claude Desktop app, making it accessible for knowledge work that does not involve writing code. &lt;/p&gt;

&lt;p&gt;This article explains how Cowork works, how it differs from standard chat, and what practical limitations and safety considerations to understand before using it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Cowork?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff65kmey5tz4dqf4dg0iq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff65kmey5tz4dqf4dg0iq.png" alt="Image1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://support.claude.com/en/articles/13345190-getting-started-with-cowork" rel="noopener noreferrer"&gt;Claude Cowork&lt;/a&gt; is an agentic task execution mode in the Claude Desktop app that allows Claude to plan and complete multi-step work on your behalf. Instead of responding to individual prompts, Claude operates on a shared folder, where it can read, edit, and create files to produce complete outputs based on the outcome you describe.&lt;/p&gt;

&lt;p&gt;This differs from regular Claude conversations, which are optimized for short, interactive exchanges and require users to manage context and files manually. In Cowork, Claude maintains task context over time, breaks work into subtasks when needed, and progresses toward completion while keeping you informed and asking for approval before significant actions.&lt;/p&gt;

&lt;p&gt;Cowork is currently &lt;a href="https://t.co/kWnDq1psWQ" rel="noopener noreferrer"&gt;available as a research preview&lt;/a&gt; for Max plan subscribers using the Claude Desktop app on macOS. It is not supported on web or mobile, does not persist across sessions, and is still evolving based on user feedback. For early access, you can join the waitlist &lt;a href="https://forms.gle/mtoJrd8kfYny29jQ9" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Core Design Principles&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Below are the main principles that guide how Cowork is designed and how tasks are executed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Outcome-oriented task execution:&lt;/strong&gt; Cowork focuses on completing a clearly defined outcome rather than responding to individual prompts. Users describe what they want done, and Claude determines the steps needed to reach that result.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent task planning and execution:&lt;/strong&gt; Once a task starts, Claude maintains context throughout the execution. It creates a plan, breaks work into subtasks when required, and continues working without losing state or requiring repeated instructions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced back-and-forth interaction model:&lt;/strong&gt; Cowork minimizes conversational overhead. Users do not need to repeatedly guide each step; instead, Claude progresses independently and allows input or corrections when necessary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparency and user steering during execution:&lt;/strong&gt; Claude surfaces its plan and progress while working. Users can monitor actions, intervene mid-task, or adjust direction before significant changes are made.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;System Architecture Overview&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Cowork is built on the same agentic foundations as Claude Code, but designed to operate safely and transparently within a desktop environment. The following components define how Cowork executes tasks and interacts with your system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local execution on the user’s machine:&lt;/strong&gt; Cowork runs directly within the Claude Desktop app on your computer. This allows Claude to work with local files and deliver outputs straight to your file system without requiring manual uploads or downloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Machine (VM) isolation:&lt;/strong&gt; Task execution happens inside an isolated virtual machine environment. This separation limits the impact of errors or malicious behavior while still allowing Claude to perform real work on files you explicitly share.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File system access model:&lt;/strong&gt; Claude can read, write, create, modify, and delete files only within the folders you grant access to. All file access is explicit, and Claude requests approval before taking significant or potentially destructive actions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internet and connector usage:&lt;/strong&gt; Cowork can use Claude’s existing connectors to access external information. When paired with Claude in Chrome, it can also perform tasks that require browser access, with network permissions remaining under user control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared agentic architecture with Claude Code:&lt;/strong&gt; Cowork uses the same planning, task decomposition, and sub-agent coordination model as Claude Code. The key difference is accessibility: Cowork exposes these capabilities through the desktop interface for non-coding, knowledge-based workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How Cowork Executes a Task&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0z566sfhguqaw33y331.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0z566sfhguqaw33y331.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below are the steps Cowork follows once you assign a task, from understanding your request to delivering the final output:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Task description and intent analysis&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When you describe a task, Claude first analyzes the intended outcome rather than treating it as a single prompt. It identifies the scope of work, required resources, and any constraints based on the files and permissions you’ve provided.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Planning and decomposition into subtasks&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Claude creates a structured plan and breaks the work into smaller, manageable subtasks when needed. This planning phase allows Claude to sequence actions logically and handle complex workflows without losing context.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Sub-agent coordination and parallel execution&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For more complex tasks, Claude may coordinate multiple internal workstreams in parallel. This allows different parts of the task to progress simultaneously, improving efficiency while maintaining alignment with the overall plan.&lt;/p&gt;
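
&lt;p&gt;The pattern described here, independent workstreams progressing simultaneously and then rejoining the overall plan, is the classic fan-out/fan-in shape. A minimal sketch, not Cowork’s real internals (the subtask names and timings are invented):&lt;/p&gt;

```javascript
// Sketch of the fan-out/fan-in pattern (not Cowork's real internals):
// independent subtasks run concurrently, then results rejoin in order.
function runSubtask(name) {
  return new Promise(function (resolve) {
    // Simulate an asynchronous unit of work.
    setTimeout(function () { resolve("done: " + name); }, 10);
  });
}

function runSubtasksInParallel(names) {
  // Promise.all preserves input order even though the work overlaps in time.
  return Promise.all(names.map(runSubtask));
}

runSubtasksInParallel(["collect files", "summarize", "format output"])
  .then(function (results) { console.log(results); });
```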

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: Progress visibility and user intervention&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As work progresses, Cowork surfaces what Claude is doing and why. You can monitor progress, step in to provide clarification, or course-correct before significant actions are taken.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Output delivery to the local file system&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once the task is complete, Claude writes the final outputs directly to your local file system. Files are created, updated, or organized within the shared folders, ready for immediate use without additional formatting or manual handling.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Key Capabilities of Cowork&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://x.com/claudeai/status/2010805682434666759?s=46&amp;amp;t=ITFEhgS1UDCTOTk1CP2qIg" rel="noopener noreferrer"&gt;Cowork&lt;/a&gt; is designed to support complex, real-world knowledge work by combining agentic execution with direct access to files and external information. The capabilities below define what Cowork can reliably handle today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdn8x18peax7kt18wkgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdn8x18peax7kt18wkgc.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Local File Access&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Cowork allows Claude to read, write, create, and modify files within folders you explicitly share. This removes the need for manual uploads or downloads and enables end-to-end task execution. Access is controlled at the folder level, giving you precise control over what Claude can see and change.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Multi-Step and Long-Running Tasks&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Cowork can handle workflows that require multiple steps without losing context. Tasks can run for extended periods without conversational timeouts interrupting execution. This makes it suitable for work that would otherwise require many separate chat interactions.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Professional Output Generation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Claude can produce finished deliverables rather than raw text outputs. This includes structured documents, spreadsheets with working formulas, and presentation files that are properly formatted and ready for use. Outputs are written directly to your local file system.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Browser and Connector Integration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Cowork can use your existing connectors to access external information sources. When paired with Claude in Chrome, it can also perform tasks that require browser access, with network permissions remaining under your control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Typical Use Cases
&lt;/h2&gt;

&lt;p&gt;Cowork is best suited for tasks that involve multiple steps, file access, and sustained execution. Below are common categories of work where its agentic model is especially effective:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;File and document management:&lt;/strong&gt; Organizing folders, renaming large batches of files, and converting collections of receipts or screenshots into structured expense reports.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Research and knowledge synthesis:&lt;/strong&gt; Aggregating information from multiple sources, combining notes or documents, and extracting key themes or action items from transcripts and written material.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document and presentation creation:&lt;/strong&gt; Turning rough or unstructured inputs such as notes, voice memos, or drafts into well-formatted documents and presentation files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data processing and analysis:&lt;/strong&gt; Cleaning and transforming datasets, performing statistical analysis, and generating visualizations directly from local data files.&lt;/li&gt;
&lt;/ul&gt;
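
&lt;p&gt;As a concrete, hypothetical illustration of the data-processing category: a task like “summarize these expenses” reduces to a small clean-then-aggregate transformation of the kind shown below (the sample rows are invented).&lt;/p&gt;

```javascript
// Hypothetical example of the "data processing and analysis" category:
// drop invalid rows, then compute a per-category total.
const rows = [
  { category: "travel", amount: 120.5 },
  { category: "meals", amount: 32.0 },
  { category: "travel", amount: null },   // invalid: missing amount
  { category: "meals", amount: 18.25 },
];

function summarize(records) {
  const clean = records.filter(function (r) {
    return typeof r.amount === "number";
  });
  return clean.reduce(function (totals, r) {
    totals[r.category] = (totals[r.category] || 0) + r.amount;
    return totals;
  }, {});
}

console.log(summarize(rows)); // { travel: 120.5, meals: 50.25 }
```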

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hdj6j3h1caoobc30yqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hdj6j3h1caoobc30yqj.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Getting Started with Cowork&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This section outlines what you need to use Cowork and how to start your first task. Since Cowork operates directly on your computer and can take real actions, setup and access are intentionally controlled.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Requirements&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To use Cowork, the following conditions must be met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Desktop app on macOS:&lt;/strong&gt; Cowork is available only through the &lt;a href="https://support.claude.com/en/articles/10065433-installing-claude-desktop" rel="noopener noreferrer"&gt;Claude Desktop application&lt;/a&gt; and is not supported on web or mobile clients.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Max plan subscription:&lt;/strong&gt; Access to Cowork is currently limited to users on the Max plan as part of a research preview.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An active internet connection:&lt;/strong&gt; An internet connection is required throughout the session for task execution and coordination.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Accessing Cowork&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once the requirements are met, starting Cowork is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Claude Desktop app on macOS.&lt;/li&gt;
&lt;li&gt;Use the mode selector to switch from &lt;strong&gt;Chat&lt;/strong&gt; to &lt;strong&gt;Cowork&lt;/strong&gt;, which appears as &lt;strong&gt;Tasks&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Describe the task you want Claude to complete and specify the desired outcome.&lt;/li&gt;
&lt;li&gt;Review Claude’s proposed approach and approve it before execution begins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The desktop app must remain open while Cowork is running. Closing the app or allowing the system to sleep will end the session and stop the task.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Usage, Permissions, and Security Considerations&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Cowork is designed for heavier, more complex work than standard chat, which affects both usage limits and security responsibilities. Understanding these trade-offs is important for using it effectively and safely.&lt;/p&gt;

&lt;p&gt;Cowork tasks consume more usage than regular chat because they involve planning, sub-task coordination, and long-running execution. Complex workflows can reach usage limits faster than conversational interactions.&lt;/p&gt;

&lt;p&gt;Reserve Cowork for tasks that truly benefit from file access and multi-step execution. Batch related work into a single task when possible, and rely on standard chat for simpler or exploratory requests.&lt;/p&gt;

&lt;p&gt;Standard chat is better for quick questions, drafting text, or brainstorming. Cowork is more appropriate when the task requires persistent context, structured outputs, or direct interaction with local files.&lt;/p&gt;

&lt;p&gt;Cowork’s permission and security model breaks down as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Folder-level access control:&lt;/strong&gt; You explicitly choose which folders Claude can access. Claude cannot see or modify files outside of the permissions you grant, and access should be limited to only what the task requires.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internet access permissions:&lt;/strong&gt; Claude’s network access is restricted by default. If you extend access, such as through browser integration, you should limit it to trusted sources to reduce risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP (desktop extension) permissions:&lt;/strong&gt; Desktop extensions expand Claude’s capabilities but also increase the attack surface. Each extension should be evaluated carefully, and permissions should be granted only when necessary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User responsibility for granted access:&lt;/strong&gt; You remain responsible for all actions Claude performs on your behalf, including file changes, data access, and interactions with external systems. Monitoring tasks and reviewing plans before approval is essential.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Current Limitations&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Cowork is released as a research preview to better understand how agentic systems perform in real-world, non-coding workflows. Early access allows the team to observe usage patterns, identify failure modes, and improve safety mechanisms before wider adoption.&lt;/p&gt;

&lt;p&gt;Cowork currently has the following limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;macOS-only availability:&lt;/strong&gt; Cowork is available only through the Claude Desktop app on macOS and is not supported on web, mobile, or Windows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No cross-device sync:&lt;/strong&gt; Tasks and outputs do not sync across devices, even when using the same account.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No session persistence:&lt;/strong&gt; The Claude Desktop app must remain open during execution. Closing the app or putting the system to sleep ends the session.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No memory across sessions:&lt;/strong&gt; Cowork does not retain context or task history between sessions. Each task starts with a clean state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No project or sharing support:&lt;/strong&gt; Cowork cannot be used within projects, and sessions or artifacts cannot be shared with others.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Cowork represents a practical step beyond conversational AI by enabling Claude to plan and complete multi-step work with direct access to local files. It is best suited for tasks that require sustained context, structured outputs, and minimal manual coordination. &lt;/p&gt;

&lt;p&gt;At the same time, its agentic nature makes permissions, oversight, and task scoping essential. Used thoughtfully, Cowork can simplify complex knowledge work while keeping users in control of how and where actions are taken.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Gemini 3 vs GPT-5.2: Detailed Coding Comparison</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Tue, 16 Dec 2025 17:46:19 +0000</pubDate>
      <link>https://forem.com/tensorlake/gemini-3-vs-gpt-52-detailed-coding-comparison-idg</link>
      <guid>https://forem.com/tensorlake/gemini-3-vs-gpt-52-detailed-coding-comparison-idg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Gemini 3 Pro has strong multimodal capabilities but produces simpler, less structured coding outputs. GPT-5.2 delivers more reliable reasoning and polished, production-ready code with minimal cleanup needed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By the end of 2025, two major AI releases had reshaped how developers build and ship software: &lt;a href="https://openai.com/index/introducing-gpt-5-2/" rel="noopener noreferrer"&gt;OpenAI’s GPT-5.2&lt;/a&gt; and &lt;a href="https://deepmind.google/models/gemini/pro/" rel="noopener noreferrer"&gt;Google’s Gemini 3 Pro&lt;/a&gt;. Both bring improvements in reasoning, coding assistance, and multimodal capabilities, but they take different approaches to solving developer problems.&lt;/p&gt;

&lt;p&gt;Gemini 3 Pro was officially launched on November 18, 2025, as Google’s most advanced multimodal model yet, designed to handle complex text, image, audio, and video tasks alongside code and reasoning. &lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxt0c3n3hdgtbw44yk1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxt0c3n3hdgtbw44yk1u.png" alt="Image1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few weeks later, GPT-5.2 was released on December 11, 2025, with a focus on predictable reasoning, strong code generation, and reliable long-context performance. It’s built to support workflows where accuracy and consistency matter most.&lt;/p&gt;

&lt;p&gt;Because each model excels in different areas, the best choice often depends on the type of project you are building. To make that clearer, this article compares both models and evaluates them through a set of practical coding challenges to see how they perform in real development scenarios.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Comparing GPT-5.2 and Gemini 3 Pro&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Both models are fully multimodal and both support tool use, reasoning chains, and long-context inputs large enough to handle full repositories or multi-service architectures. They are built with safety controls and enterprise features, but they lean in different directions.&lt;/p&gt;

&lt;p&gt;Gemini 3 Pro fits naturally into Google’s ecosystem, with tighter integrations across Search, YouTube, and media-centric workflows. It tends to feel more “creative” out of the box, especially for interactive or multimodal builds.&lt;/p&gt;

&lt;p&gt;GPT-5.2, on the other hand, focuses on predictable reasoning and developer-grade code generation. Its step-by-step “Thinking” mode makes debugging, refactoring, and architecture planning straightforward, and its ChatGPT interface remains one of the most polished for technical work.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Benchmark Breakdown&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Using numbers reported at launch, GPT-5.2 generally leads in structured reasoning and software engineering, while Gemini 3 Pro maintains an edge in multimodal depth.&lt;/p&gt;

&lt;p&gt;Below is a streamlined comparison:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Benchmark&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;What It Measures&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Gemini 3 Pro&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;GPT-5.2&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Winner&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPQA Diamond&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Graduate-level science (no tools)&lt;/td&gt;
&lt;td&gt;91.9%&lt;/td&gt;
&lt;td&gt;92.4%&lt;/td&gt;
&lt;td&gt;GPT-5.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AIME 2025&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Advanced math (no tools)&lt;/td&gt;
&lt;td&gt;95.0%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;GPT-5.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ARC-AGI-2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Abstract visual puzzles (verified)&lt;/td&gt;
&lt;td&gt;31.1%&lt;/td&gt;
&lt;td&gt;52.9%&lt;/td&gt;
&lt;td&gt;GPT-5.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SWE-Bench Verified&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agentic software engineering&lt;/td&gt;
&lt;td&gt;76.2%&lt;/td&gt;
&lt;td&gt;80.0%&lt;/td&gt;
&lt;td&gt;GPT-5.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MMMU-Pro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Multimodal reasoning with Python tools&lt;/td&gt;
&lt;td&gt;81.0%&lt;/td&gt;
&lt;td&gt;80.4%&lt;/td&gt;
&lt;td&gt;Gemini 3 Pro&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LiveCodeBench Pro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Competitive coding (Elo rating)&lt;/td&gt;
&lt;td&gt;2439&lt;/td&gt;
&lt;td&gt;~2500+ (inferred from SWE performance)&lt;/td&gt;
&lt;td&gt;Tie&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Overall, GPT-5.2 pulls ahead on reasoning-heavy and engineering-focused benchmarks, making it stronger for complex coding workflows. Gemini 3 Pro counters with better multimodal depth, which shows up in tasks involving images, audio, and mixed-media reasoning.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;TL;DR&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GPT-5.2 consistently produced solutions that felt complete, polished, and ready for real use. Its outputs included stronger UI design, smoother interactions, and thoughtful features that went beyond the basic prompt.&lt;/li&gt;
&lt;li&gt;Gemini 3 Pro implemented the core logic across all challenges, but the results were simpler. Interfaces were limited, customization options were minimal, and key usability features were often missing.&lt;/li&gt;
&lt;li&gt;The ARC-AGI-2 leaderboard reflects this same pattern at scale, showing GPT-5.2 with a clear advantage in reasoning-heavy tasks and overall reliability.&lt;/li&gt;
&lt;li&gt;Based on both the coding challenges and broader benchmarks, GPT-5.2 currently offers a better experience for developers who need high-quality, production-oriented code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zgroc0m1me9hisssksv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zgroc0m1me9hisssksv.png" alt="Image2"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Why Coding Challenges?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Benchmarks show how a model performs in isolation, but coding challenges reveal how it behaves when you are actually building, debugging, structuring logic, handling state, and producing code that runs without fuss. They highlight the practical differences that matter in day-to-day development.&lt;/p&gt;

&lt;p&gt;So let’s put both models to work and see how they handle real coding scenarios.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Challenge 1: Build a Music Visualizer Using Audio Frequency Data&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Create a browser-based music visualizer that analyzes live audio frequency data using the Web Audio API and renders a real-time visualization on an HTML canvas. The visualizer should react dynamically to changes in amplitude and frequency, run smoothly at 60fps, and use efficient data processing suitable for continuous playback.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
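
&lt;p&gt;Whatever either model generated, the heart of a Web Audio visualizer is the per-frame mapping from the analyser’s byte frequency data (values 0-255) to bar heights. A minimal sketch of just that mapping, with the AnalyserNode and canvas wiring omitted so it stays testable outside a browser:&lt;/p&gt;

```javascript
// Map one frame of AnalyserNode byte data (values 0-255) to bar heights
// in pixels. Pure math, so it can run and be tested without a browser.
function toBarHeights(freqData, canvasHeight) {
  const heights = [];
  for (const value of freqData) {
    // Normalize 0-255 to 0..canvasHeight.
    heights.push(Math.round((value / 255) * canvasHeight));
  }
  return heights;
}

// In the browser this data would come from analyser.getByteFrequencyData(buf)
// inside a requestAnimationFrame loop; here we fake one frame.
const frame = Uint8Array.from([0, 64, 128, 255]);
console.log(toBarHeights(frame, 200)); // [ 0, 50, 100, 200 ]
```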



&lt;p&gt;&lt;strong&gt;Gemini 3 Pro Response:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find the generated code &lt;a href="https://github.com/Studio1HQ/Music-Visualizer" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
Output:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/BLMUMIIa7so"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT 5.2 Response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find the generated code &lt;a href="https://github.com/Studio1HQ/gpt-5.2-test/tree/main/test-1" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/nbYI_wOdv3k"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h3&gt;
  
  
  Outcome Summary:
&lt;/h3&gt;

&lt;p&gt;ChatGPT 5.2 delivered a more polished and feature-rich visualizer, with a better UI, customization options, and support for uploading and downloading audio files. &lt;/p&gt;

&lt;p&gt;Gemini 3 Pro produced a functional visualizer but with a plain UI, limited interactivity, and no option to upload existing audio. ChatGPT 5.2 clearly provided the stronger solution in both design and functionality.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Challenge 2: Collaborative Markdown Editor With Live Preview&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Build a web-based Markdown editor that supports real-time collaboration between multiple users. The interface should include a split-pane layout with a text editor on the left and a live preview on the right. As users type, the preview should update instantly without lag. Use WebSockets or a CRDT-based sync layer to merge edits safely, avoid conflicts, and keep all clients in sync. Add basic formatting shortcuts, keyboard navigation, and a clean UI suitable for embedding into a developer tool.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
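
&lt;p&gt;The hardest requirement in this prompt is the sync layer. A production editor would use a CRDT library such as Yjs or Automerge; as a much simpler illustration of deterministic conflict resolution, here is a toy last-writer-wins register with logical clocks. Unlike a real CRDT, it discards one side of a concurrent edit:&lt;/p&gt;

```javascript
// Toy last-writer-wins register: every edit carries a logical clock;
// ties break on clientId, so all replicas converge to the same value.
function newRegister() {
  return { value: "", clock: 0, clientId: "" };
}

function applyEdit(reg, edit) {
  const newer =
    edit.clock > reg.clock ||
    (edit.clock === reg.clock ? edit.clientId > reg.clientId : false);
  return newer ? { value: edit.value, clock: edit.clock, clientId: edit.clientId } : reg;
}

// Clients A and B edit concurrently (same clock); apply in either order
// and every replica ends up with the same value.
let replica = newRegister();
replica = applyEdit(replica, { value: "# Hi", clock: 1, clientId: "A" });
replica = applyEdit(replica, { value: "# Hello", clock: 1, clientId: "B" });
console.log(replica.value); // # Hello (B wins the tie on every replica)
```

&lt;p&gt;Real CRDTs keep both edits by merging at the character level, which is exactly what the prompt asks the models to wire up.&lt;/p&gt;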



&lt;p&gt;&lt;strong&gt;Gemini 3 Pro Response:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find the generated code in this &lt;a href="https://github.com/Studio1HQ/Collaborative-Markdown-Editor" rel="noopener noreferrer"&gt;repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/bH4jEF8QOb0"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT 5.2 Response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find the generated code &lt;a href="https://github.com/Arindam200/gpt-5.2-test/tree/main/test-2" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/ZG8YVBBbm-4"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h3&gt;
  
  
  Outcome Summary:
&lt;/h3&gt;

&lt;p&gt;Both models produced functional editors that met the core requirements, but ChatGPT 5.2 handled the collaborative aspect more effectively. It added features such as customizable environment names and shareable invite links, making the collaboration experience feel complete. &lt;/p&gt;

&lt;p&gt;Gemini 3 Pro implemented real-time syncing but did not present a clear collaborative environment, resulting in a less cohesive experience. ChatGPT 5.2 ultimately delivered the stronger solution for this challenge.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Challenge 3: WebAssembly Image Filter Engine&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Create a browser-friendly image processing engine powered by WebAssembly. Write core filters—grayscale, blur, sharpen, and invert—in C++, compile them to WASM, and expose the functions to JavaScript. The app should allow users to upload an image, apply filters with minimal latency, and preview the results instantly. Focus on efficient memory handling between WASM and JS, and design the engine so additional filters can be added without refactoring the entire pipeline.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
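
&lt;p&gt;Regardless of which model generated it, the grayscale filter itself is fixed arithmetic. A JavaScript reference version of that arithmetic is useful for checking a C++/WASM build, since both must produce identical pixel values:&lt;/p&gt;

```javascript
// Reference grayscale filter over an RGBA pixel buffer: the same
// luminance math a C++/WASM implementation would apply in place.
function grayscale(rgba) {
  const out = Uint8ClampedArray.from(rgba);
  // RGBA buffers always have a length divisible by 4.
  for (let i = 0; i !== out.length; i += 4) {
    // Rec. 601 luma weights; the alpha channel (i + 3) is left untouched.
    const y = Math.round(0.299 * out[i] + 0.587 * out[i + 1] + 0.114 * out[i + 2]);
    out[i] = y;
    out[i + 1] = y;
    out[i + 2] = y;
  }
  return out;
}

// One red pixel and one white pixel.
const pixels = Uint8ClampedArray.from([255, 0, 0, 255, 255, 255, 255, 255]);
console.log(Array.from(grayscale(pixels)));
// [ 76, 76, 76, 255, 255, 255, 255, 255 ]
```

&lt;p&gt;Comparing this output against the WASM module’s output pixel-for-pixel is a cheap correctness check whenever a new filter is added to the pipeline.&lt;/p&gt;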



&lt;p&gt;&lt;strong&gt;Gemini 3 Pro Response:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find the generated code here.&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/j7iWzsIYIg0"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT 5.2 Response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find the generated code &lt;a href="https://github.com/Arindam200/gpt-5.2-test/tree/main/test-3" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/78N_KxoQY6w"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h3&gt;
  
  
  Outcome Summary:
&lt;/h3&gt;

&lt;p&gt;Both models successfully delivered a working WebAssembly image filter engine, but ChatGPT 5.2 provided a noticeably more refined solution. It offered multiple filter controls, adjustable blur strength, an easy way to revert to the original image, and thoughtful handling of edge cases. &lt;/p&gt;

&lt;p&gt;Gemini 3 Pro produced functional filters, but the interface was limited, lacked fine-grained controls, and did not support combining multiple filters smoothly. Gemini accomplished the core task, but ChatGPT 5.2 delivered a far more complete, user-friendly, and flexible implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Final Verdict&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Across all three challenges, GPT-5.2 consistently delivered more complete, polished, and developer-ready solutions. Its interfaces were cleaner, its interactions smoother, and its features aligned closely with what real users expect. Gemini 3 Pro produced functional results, but they lacked refinement and felt limited in flexibility and usability.&lt;/p&gt;

&lt;p&gt;GPT-5.2 did not just finish the tasks. It improved them in ways that showed a deeper understanding of real development work, while Gemini 3 Pro focused mainly on core functionality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnn89aguzb1ppdzk5fm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnn89aguzb1ppdzk5fm0.png" alt="Image3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the tweet sums up perfectly, the gap is not closing. It is widening. At this point, GPT-5.2 stands out as the more reliable and capable choice for practical coding tasks.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>I Tried Letting Antigravity Build An Agent For Me. Here’s What Actually Happened</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Tue, 16 Dec 2025 09:56:21 +0000</pubDate>
      <link>https://forem.com/tensorlake/i-tried-letting-antigravity-build-an-agent-for-me-heres-what-actually-happened-54h2</link>
      <guid>https://forem.com/tensorlake/i-tried-letting-antigravity-build-an-agent-for-me-heres-what-actually-happened-54h2</guid>
      <description>&lt;p&gt;I have used a long list of AI coding tools over the past few years, most of them built around the same pattern, as inline suggestions, chat-style prompts, and occasional refactors. &lt;/p&gt;

&lt;p&gt;Then Antigravity appeared, powered by Gemini 3 Pro: an agent-driven IDE where background agents can read your repo, propose changes, run commands, and interact with your app’s runtime environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkrs8de0424frooxebvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkrs8de0424frooxebvm.png" alt="Image1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I first opened it, I saw those agents spin up automatically, parse the project structure, touch multiple files, run package scripts, and even click through the app in a built-in browser. It behaved less like an autocomplete layer and more like a set of automated workflows acting directly on the codebase.&lt;/p&gt;

&lt;p&gt;That made me curious. Could this system actually deliver a complete feature if I stepped back and let it run?&lt;/p&gt;

&lt;p&gt;To find out, I assigned Antigravity a real task from my own project and observed how far the agents could get with minimal intervention.&lt;/p&gt;

&lt;p&gt;What follows is a breakdown of that experiment, what worked, what didn’t, and where agent-driven development currently stands from a practical engineering perspective.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why I Did This Experiment&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I wanted to understand how well Antigravity’s agent-driven workflow performs on realistic engineering tasks. Most AI tools can handle small changes, but a real feature spans multiple layers of a codebase. That’s why I designed a focused experiment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Experiment:&lt;/strong&gt; Guest Checkout + Abandoned Cart Email Recovery using Resend/Nodemailer with a TensorLake analytics hook.&lt;/p&gt;

&lt;p&gt;This feature was ideal because it touches several parts of a typical project. It includes backend routes, database updates, UI work, an email flow, and a small analytics integration.&lt;/p&gt;

&lt;p&gt;I ran the experiment to answer a few practical questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can autonomous agents complete work that crosses backend, frontend, database, and external services?&lt;/li&gt;
&lt;li&gt;Do they keep context when modifying multiple files and systems?&lt;/li&gt;
&lt;li&gt;How much oversight does a developer actually need to provide?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was simple. I wanted to see how far the agents could take a meaningful feature with minimal intervention and whether agent-driven development can support real-world workflows.&lt;/p&gt;

&lt;p&gt;Here is the flow of the experiment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp30bytyemedfu5zi5kq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp30bytyemedfu5zi5kq0.png" alt="Image2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To keep the experiment grounded, I used a real project from my existing codebase. The stack is fairly standard and represents what many teams use today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; React with TypeScript&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node and Express&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth:&lt;/strong&gt; Basic JWT authentication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Emails:&lt;/strong&gt; Resend or Nodemailer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analytics:&lt;/strong&gt; Tensorlake&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because the feature involved sending recovery emails, I also needed a &lt;a href="https://resend.com/api-keys" rel="noopener noreferrer"&gt;Resend API key&lt;/a&gt;. Setting that up took about two minutes. It’s just a matter of creating a Resend account, grabbing the API key, and dropping it into your &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;
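&lt;p&gt;For context, here is the shape of trigger logic a feature like this needs. This is my own minimal sketch, not Antigravity’s output; the field names and the one-hour threshold are assumptions, not the repo’s actual schema:&lt;/p&gt;

```typescript
// Hypothetical sketch of an abandoned-cart trigger. The field names and the
// one-hour threshold are assumptions, not the schema Antigravity generated.
interface GuestCart {
  updatedAt: Date;            // last time the guest touched the cart
  checkedOut: boolean;        // true once an order completes
  email?: string;             // captured during guest checkout
  recoveryEmailSentAt?: Date; // set after the recovery email goes out
}

const ABANDON_AFTER_MS = 60 * 60 * 1000; // one hour of inactivity

function isAbandoned(cart: GuestCart, now: Date): boolean {
  return (
    !cart.checkedOut &&
    cart.email !== undefined &&               // need an address to recover
    cart.recoveryEmailSentAt === undefined && // never email the same cart twice
    now.getTime() - cart.updatedAt.getTime() >= ABANDON_AFTER_MS
  );
}
```

&lt;p&gt;A scheduled job would filter carts through a predicate like this and hand each match to Resend’s &lt;code&gt;emails.send&lt;/code&gt; (or a Nodemailer transport) to send the recovery email.&lt;/p&gt;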

&lt;p&gt;Antigravity runs inside a unified environment, and its agents have access to several built-in tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code Agent&lt;/strong&gt;, which edits and refactors files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terminal Agent&lt;/strong&gt;, which runs scripts, migrations, and tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Browser Agent&lt;/strong&gt;, which interacts with the running application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Research Agent&lt;/strong&gt;, which pulls patterns or references when needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent Manager&lt;/strong&gt;, which coordinates all of them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These agents share context and can act independently. They can open files, update logic, install packages, modify migrations, and verify behavior in the browser.&lt;/p&gt;

&lt;p&gt;For the experiment, I placed the project on a clean branch and provided a clear feature description. After that, I stepped back and let the agents decide how to approach the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How to Execute This Experiment&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is the exact flow I used so that another developer could reproduce the experiment using the same codebase. The project I worked with is available &lt;a href="https://github.com/Studio1HQ/Ecommerce-platform" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Prepare the Repo
&lt;/h3&gt;

&lt;p&gt;Before letting any agents touch the code, make sure the project is in a stable state.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone the repo and install dependencies

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;git clone https://github.com/Studio1HQ/Ecommerce-platform&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm install&lt;/code&gt; or &lt;code&gt;pnpm install&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Create a new branch from &lt;code&gt;main&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;feat/guest-checkout-ag-experiment&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Confirm the app runs locally without errors

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;npm run dev&lt;/code&gt; or your project's start script&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Run the existing test suite

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;npm test&lt;/code&gt; or the equivalent command&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The goal here is simple: give Antigravity a clean baseline so any failures that follow belong to the experiment, not leftover issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Open the Project in Antigravity
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Open Antigravity&lt;/li&gt;
&lt;li&gt;Load the existing repo for this experiment&lt;/li&gt;
&lt;li&gt;Wait for the initial indexing or analysis of the project to finish&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once indexing is complete, the agents have a basic understanding of the codebase's structure.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Send the Main Experiment Prompt
&lt;/h3&gt;

&lt;p&gt;This is the &lt;strong&gt;exact initial prompt&lt;/strong&gt; I would use inside Antigravity to kick off the experiment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are working inside my existing project.

Feature to implement:
- Add a "Guest checkout" flow for users who are not logged in.
- Implement abandoned cart email recovery using Resend or Nodemailer.
- Add a simple Tensorlake analytics hook for cart events.

Project context:
- Frontend: React with TypeScript.
- Backend: Node with Express.
- Database: PostgreSQL.
- Auth: JWT based.
- Emails can use either Resend or Nodemailer, pick one and wire it cleanly.
- Tensorlake should be used only for a minimal event tracking integration.

Constraints:
- Treat this as a real production feature, not a demo.
- Keep changes scoped and readable.
- Prefer small, focused commits and clear structure.
- You should break this feature into missions and execute them using the Code, Terminal, and Browser tools.
- I want you to handle as much as possible. I will only step in if you get stuck or break something repeatedly.

Deliverables:
- Guest checkout flow end to end.
- Abandoned cart recovery emails with a reasonable trigger condition.
- Tensorlake event tracking for at least "cart created" and "cart abandoned".
- Tests updated or added where appropriate.

First, respond with a clear mission plan. Then start executing it step by step.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets expectations, defines the stack, and tells the agent to create a mission plan first instead of editing files immediately.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Review and Adjust the Mission Plan
&lt;/h3&gt;

&lt;p&gt;Antigravity should reply with something like a breakdown of tasks. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Design the guest checkout data model&lt;/li&gt;
&lt;li&gt;Add database changes for guest carts and orders&lt;/li&gt;
&lt;li&gt;Implement backend routes for guest checkout&lt;/li&gt;
&lt;li&gt;Implement abandoned cart identification and email sending&lt;/li&gt;
&lt;li&gt;Add frontend components for guest checkout&lt;/li&gt;
&lt;li&gt;Insert Tensorlake tracking&lt;/li&gt;
&lt;li&gt;Add or update tests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You do not need this exact list. You just need it to be coherent.&lt;/p&gt;

&lt;p&gt;At this step, you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check that it understood the feature&lt;/li&gt;
&lt;li&gt;Ask for small adjustments if something is clearly missing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Let the Agents Execute Across the Stack
&lt;/h3&gt;

&lt;p&gt;Now you let Antigravity do the heavy lifting. Typical actions you will see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code Agent

&lt;ul&gt;
&lt;li&gt;Creates or updates Express routes for guest checkout&lt;/li&gt;
&lt;li&gt;Adds controllers or services for cart and order handling&lt;/li&gt;
&lt;li&gt;Writes or updates TypeScript types and interfaces&lt;/li&gt;
&lt;li&gt;Creates React components or pages for guest checkout&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Terminal Agent

&lt;ul&gt;
&lt;li&gt;Runs migrations for new tables or columns&lt;/li&gt;
&lt;li&gt;Executes test suites&lt;/li&gt;
&lt;li&gt;Installs dependencies such as Resend, Nodemailer, or the Tensorlake SDK&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Browser Agent

&lt;ul&gt;
&lt;li&gt;Opens the local app&lt;/li&gt;
&lt;li&gt;Walks through the guest checkout UI&lt;/li&gt;
&lt;li&gt;Verifies that the flow works end-to-end&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Your job at this stage is to observe.&lt;/p&gt;

&lt;p&gt;Only intervene when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It repeatedly breaks the same thing&lt;/li&gt;
&lt;li&gt;It introduces an obviously wrong design decision&lt;/li&gt;
&lt;li&gt;It gets stuck in a loop of failing tests and incremental fixes that do not converge&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. Verify the Flow with the Browser Agent
&lt;/h3&gt;

&lt;p&gt;Once the main missions complete, explicitly ask the agents to verify the behavior.&lt;/p&gt;

&lt;p&gt;Example prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Use the Browser and Terminal tools to verify the complete flow:
- Start a new cart as a guest user.
- Proceed through the guest checkout UI.
- Leave a cart in an abandoned state and trigger the recovery email logic.
- Confirm that Tensorlake tracking events are being sent where expected.

Report what you tested and what worked or failed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This forces a structured test pass rather than assuming the feature works.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Perform a Manual Review
&lt;/h3&gt;

&lt;p&gt;At the end, you should still review everything as a developer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Look at the diff for each mission&lt;/li&gt;
&lt;li&gt;Check routes, models, and migrations&lt;/li&gt;
&lt;li&gt;Check how email sending is wired&lt;/li&gt;
&lt;li&gt;Verify the Tensorlake calls do not leak sensitive data&lt;/li&gt;
&lt;li&gt;Run tests yourself&lt;/li&gt;
&lt;li&gt;Try the flow manually in the browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If needed, you can ask Antigravity to clean up small issues or style problems with targeted prompts.&lt;/p&gt;
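&lt;p&gt;For the privacy item in particular, a small scrubbing helper makes the review concrete. This is a hypothetical sketch, not code from the repo; the list of sensitive keys is an assumption:&lt;/p&gt;

```typescript
// Hypothetical scrubbing helper for the review step: drop obvious PII from a
// cart event before it reaches the analytics sink. The key list is an
// assumption, not something the agents produced.
const SENSITIVE_KEYS = new Set(["email", "name", "address", "phone", "cardLast4"]);

function scrubEvent(event: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(event)) {
    if (!SENSITIVE_KEYS.has(key)) safe[key] = value; // keep only non-sensitive fields
  }
  return safe;
}
```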

&lt;blockquote&gt;
&lt;p&gt;You can find the execution of Antigravity in this &lt;a href="https://github.com/Studio1HQ/Antigravity-experiment" rel="noopener noreferrer"&gt;repo&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Outcome
&lt;/h3&gt;

&lt;p&gt;Here is the entire execution:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/IpEc_V4nyG0"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;Here is the email:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuewven3jtsyiah6hqf1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuewven3jtsyiah6hqf1v.png" alt="Image3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Antigravity Did Surprisingly Well&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A few things stood out once the agents started working through the feature.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The task breakdown actually made sense:&lt;/strong&gt; The mission plan looked like something a real dev would sketch out before starting. Nothing over-engineered, nothing missing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The agents handed work off cleanly:&lt;/strong&gt; Code edits, migrations, test runs, and browser checks happened in a reasonable order. It felt coordinated rather than chaotic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Boilerplate wasn’t a mess:&lt;/strong&gt; The generated routes, controllers, and React components were straightforward. I didn’t have to untangle odd patterns or rewrite everything.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data stayed consistent across the stack:&lt;/strong&gt; Field names, types, and payload shapes lined up. I didn’t see the usual “backend calls it one thing, frontend calls it another” issue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The email flow was wired up correctly:&lt;/strong&gt; Resend/Nodemailer setup usually gets messy, but the structure here was clear and easy to follow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tensorlake integration was small and sensible:&lt;/strong&gt; It added a couple of event hooks without turning the code into an analytics playground.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quick feedback loops:&lt;/strong&gt; When something broke, the agents patched it fast without spiraling into nonsense fixes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Where It Still Stumbled (Minor, but Honest)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The backend work was mostly solid, but the UI side definitely exposed some weak spots. The agents could generate components and wiring fast, but getting everything to actually behave the way I wanted took multiple iterations.&lt;/p&gt;

&lt;p&gt;A few things stood out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend was the hardest part for the agents:&lt;/strong&gt; They could scaffold React components quickly, but the details were often off. State handling, validation, and edge cases needed several rounds of fixes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connecting frontend and backend wasn’t always smooth:&lt;/strong&gt; Endpoints existed, components existed, but stitching them together required back-and-forth corrections. The agents didn’t always keep both sides in sync on the first pass.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging took way longer than generation:&lt;/strong&gt; The feature would “finish” in minutes, but debugging the UI and flow took a few hours. The agents helped, but they didn’t magically remove the pain points of front-end troubleshooting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even with the debugging overhead, this is a feature that would normally take me two to four days end-to-end. With Antigravity, it landed in a few hours. Not perfect, but undeniably faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Final Verdict&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;By the end of the experiment, the feature shipped. It needed a few rounds of debugging, mostly on the frontend, but the entire flow was working: guest checkout, abandoned cart recovery emails, and the Tensorlake event hook.&lt;/p&gt;

&lt;p&gt;The time savings were real. What would normally take two to four days of manual work ended up condensed into a few hours of guiding and debugging the agents. The system is not flawless, but it moves fast enough that the rough edges still net out in your favor.&lt;/p&gt;

&lt;p&gt;Antigravity makes the most sense when you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multi-file scaffolding&lt;/li&gt;
&lt;li&gt;routine backend wiring&lt;/li&gt;
&lt;li&gt;repetitive refactors&lt;/li&gt;
&lt;li&gt;quick prototyping of end-to-end flows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s less ideal when you need tight UI polish, careful architectural decisions, or detailed business logic that isn’t spelled out clearly.&lt;/p&gt;

&lt;p&gt;Overall, it feels less like a replacement for a developer and more like an accelerator for one. When the agents stay within context, the speed boost is dramatic. When they drift, it still takes less time to correct them than to write everything yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Checklist for Your Own Antigravity Experiment&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Start on a &lt;strong&gt;clean branch&lt;/strong&gt; and confirm your project runs locally with all tests passing.&lt;/li&gt;
&lt;li&gt;Set up required &lt;strong&gt;environment variables&lt;/strong&gt;, including your &lt;strong&gt;Resend API key&lt;/strong&gt;, and install the &lt;strong&gt;Antigravity browser extension&lt;/strong&gt; so the Browser Agent can interact with your app.&lt;/li&gt;
&lt;li&gt;Write a &lt;strong&gt;clear, scoped feature request&lt;/strong&gt; that spans multiple areas of the stack but isn’t overly complex.&lt;/li&gt;
&lt;li&gt;Have Antigravity &lt;strong&gt;generate a mission plan first&lt;/strong&gt;, then review and adjust it before execution begins.&lt;/li&gt;
&lt;li&gt;Let the Code, Terminal, and Browser agents &lt;strong&gt;run the workflow end-to-end&lt;/strong&gt; with minimal intervention.&lt;/li&gt;
&lt;li&gt;Step in only when the agents &lt;strong&gt;lose context, loop, or consistently misinterpret something&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Use the Browser Agent to &lt;strong&gt;verify the complete flow&lt;/strong&gt; once the missions finish.&lt;/li&gt;
&lt;li&gt;Do a final &lt;strong&gt;manual review&lt;/strong&gt; to check migrations, API wiring, UI behavior, and any email or analytics logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best way to understand agent-driven development is to see it operate inside your own project. Try a contained feature, let the agents run, and watch what happens. (&lt;a href="https://antigravity.google/download" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>ai</category>
      <category>javascript</category>
      <category>beginners</category>
      <category>productivity</category>
    </item>
    <item>
      <title>TOON vs JSON: A Token-Optimized Data Format for Reducing LLM Costs</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Thu, 11 Dec 2025 15:33:22 +0000</pubDate>
      <link>https://forem.com/tensorlake/toon-vs-json-a-token-optimized-data-format-for-reducing-llm-costs-5pl</link>
      <guid>https://forem.com/tensorlake/toon-vs-json-a-token-optimized-data-format-for-reducing-llm-costs-5pl</guid>
      <description>&lt;p&gt;Last month, I watched a production RAG pipeline burn $1,940 in one weekend. A single 500-row customer table, encoded the usual way in classic JSON, did the damage. The exact same data would have cost $760 in TOON. Same model. Same answers. Same latency. 61% fewer tokens.&lt;/p&gt;

&lt;p&gt;You might have felt it yourself. You add one extra field to your context payload. The token counter spikes by hundreds. Suddenly, you trim keys or pray the model reads the structure right. We all patch around it because JSON has been the default for twenty years.&lt;/p&gt;

&lt;p&gt;Most developers forget one detail. JSON landed in 2001. Six years before the iPhone. Seventeen years before GPT-1. &lt;a href="https://www.crockford.com/about.html" rel="noopener noreferrer"&gt;Douglas Crockford&lt;/a&gt; built JSON for Ajax round-trips between browsers and servers, not for trillion-parameter models that bill you per token. Every quoted key. Every repeated field name in an array. Every curly brace made perfect sense in a world without inference pricing.&lt;/p&gt;

&lt;p&gt;In 2025, those symbols cost real money.&lt;/p&gt;

&lt;p&gt;TOON kills that cost. It preserves every piece of the JSON data model (objects, arrays, numbers, and nulls), but rewrites the text for the one reader you actually pay for: the LLM itself. It replaces repeated per-row keys with a single header row. Drops unnecessary quotes. Uses indentation instead of braces. Adds explicit length guards to prevent the model from guessing array sizes.&lt;/p&gt;

&lt;p&gt;This article shows exactly why JSON became an accidental tax on AI work, how TOON removes that tax at the syntax level, and how you add it to your code today without rewriting your stack.&lt;/p&gt;

&lt;p&gt;If you pay for tokens, keep reading. Your next bill depends on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  JSON’s Legacy: A Web Standard, Not an AI One
&lt;/h2&gt;

&lt;p&gt;JSON remains the gold standard for general-purpose data interchange. Its quoted keys, braces, brackets, and commas guarantee unambiguous parsing across every programming language and make payloads easy to inspect in browser consoles. &lt;/p&gt;

&lt;p&gt;When JSON was created, those properties solved real problems. Bandwidth was the primary constraint, and token-based pricing did not exist.&lt;/p&gt;

&lt;p&gt;Today, the constraint has changed. Take a single object&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It uses ~26 tokens instead of the 6–8 that a human would count. Quotes, colons, commas, and braces each become separate subwords in modern BPE tokenizers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgtp8980k5obfu8dfu2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgtp8980k5obfu8dfu2r.png" alt="Image1" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When that object appears in a 500-row array, the key strings and surrounding punctuation repeat hundreds of times. Real-world benchmarks record 11,842 tokens for pretty-printed JSON and 4,617 tokens for the minified version. The language model receives no additional information from those repetitions; they exist solely for syntactic correctness in traditional parsers.&lt;/p&gt;

&lt;p&gt;JSON remains the best choice for REST APIs, configuration files, and any system where token counting is irrelevant. Inside LLM prompts, however, the same syntax becomes unnecessary overhead, directly increasing costs and reducing available context.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is TOON?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://toonformat.dev/" rel="noopener noreferrer"&gt;TOON&lt;/a&gt; (Token-Optimized Object Notation) is a drop-in text representation for structured data that preserves the full JSON data model, including objects, arrays, strings, numbers, booleans, and nulls. Still, it removes the punctuation and repetition that inflate token counts inside LLM prompts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqwkelrvf83bxs20gki5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqwkelrvf83bxs20gki5.png" alt="Image2" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rather than wrapping every object in braces and repeating keys on every row, TOON:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses indentation instead of &lt;code&gt;{}&lt;/code&gt; and &lt;code&gt;,&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Declares array structure up front so fields don’t repeat&lt;/li&gt;
&lt;li&gt;Preserves ordering and schema explicitly&lt;/li&gt;
&lt;li&gt;Streams cleanly in line-based form for RAG pipelines&lt;/li&gt;
&lt;li&gt;Round-trips losslessly back to JSON&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is not a new database standard. It is not a compression algorithm. TOON gives models the data they need in the form they prefer: less syntax, more signal, fewer tokens.&lt;/p&gt;

&lt;h2&gt;
  
  
  How TOON Reduces Token Load Without Changing the Data
&lt;/h2&gt;

&lt;p&gt;When JSON is used as model input, its syntax becomes a tax; the characters required for parsing increase the token count and reduce the available reasoning space. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/toon-format/toon" rel="noopener noreferrer"&gt;TOON’s&lt;/a&gt; approach is to keep the full expressiveness of JSON while changing how the structure appears on the page. It focuses on the tokenizer as the primary consumer instead of the runtime environment.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; TOON optimizes repeated structure extremely well, but it isn’t a universal compressor. Highly nested or schema-less data will see smaller savings.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk1qsn0ui2mipomqmv4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk1qsn0ui2mipomqmv4j.png" alt="TOON vs JSON" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is a closer look at the mechanisms behind that change.&lt;/p&gt;

&lt;h3&gt;
  
  
  Indentation-Based Hierarchy Instead of Symbol-Based Delimiters
&lt;/h3&gt;

&lt;p&gt;JSON depends on punctuation to express scope. Braces define objects. Brackets define arrays. Commas separate members. Tokenizers break each of these into its own subwords.&lt;/p&gt;

&lt;p&gt;TOON moves this structural meaning into whitespace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two spaces represent one nesting level&lt;/li&gt;
&lt;li&gt;Each key begins a new line when introducing a child object&lt;/li&gt;
&lt;li&gt;Context defines interpretation, not braces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example translation of nested objects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"profile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"city"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Paris"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;becomes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user:
  profile:
    city: Paris
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This reduces syntactic characters while preserving deterministic parseability. The parser tracks indentation levels instead of punctuation. This is a simpler signal for models to learn.&lt;/p&gt;
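&lt;p&gt;The translation above can be sketched in a few lines of TypeScript. This is an illustration of the indentation rule only, not the reference encoder (which also handles arrays, quoting, and escaping):&lt;/p&gt;

```typescript
// Illustrative sketch of TOON's indentation rule for nested objects.
// Not the reference encoder: it ignores arrays, quoting, and escaping.
function encodeNested(value: unknown, depth = 0): string {
  const pad = "  ".repeat(depth); // two spaces per nesting level
  if (typeof value === "object" && value !== null) {
    return Object.entries(value as Record<string, unknown>)
      .map(([key, child]) =>
        typeof child === "object" && child !== null
          ? `${pad}${key}:\n${encodeNested(child, depth + 1)}` // key opens a block
          : `${pad}${key}: ${String(child)}`                   // leaf on one line
      )
      .join("\n");
  }
  return `${pad}${String(value)}`;
}
```

&lt;p&gt;Calling &lt;code&gt;encodeNested({ user: { profile: { city: "Paris" } } })&lt;/code&gt; yields the three-line TOON form shown above.&lt;/p&gt;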

&lt;h3&gt;
  
  
  Header-Driven Arrays Replace Repetition With Declarative Structure
&lt;/h3&gt;

&lt;p&gt;Uniform arrays are common in real data. JSON must repeat every field name and punctuation for every element. TOON compresses this by extracting shape into a single declaration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;items[&amp;lt;row count&amp;gt;]{&amp;lt;field order&amp;gt;}:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then comes only the values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;items[3]{sku,qty,price}:
  A12,4,19.99
  B18,1,12.50
  C22,3,9.25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Under the hood:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keys appear once&lt;/li&gt;
&lt;li&gt;Column order is guaranteed&lt;/li&gt;
&lt;li&gt;Rows are fixed-width logical tuples&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On 500-row datasets, this structure often cuts the token count by more than half. The improvement scales linearly with array length.&lt;/p&gt;
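&lt;p&gt;The header-plus-rows layout is simple enough to sketch. This illustrative encoder assumes values need no quoting or escaping, both of which the real library does handle:&lt;/p&gt;

```typescript
// Illustrative sketch of TOON's tabular form for uniform object arrays.
// Assumes values need no quoting or escaping; the real encoder handles both.
function encodeTabular(name: string, rows: Array<Record<string, unknown>>): string {
  const fields = Object.keys(rows[0]);          // keys appear once, in the header
  const header = `${name}[${rows.length}]{${fields.join(",")}}:`;
  const lines = rows.map(row => "  " + fields.map(f => String(row[f])).join(","));
  return [header, ...lines].join("\n");
}
```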

&lt;h3&gt;
  
  
  Technical detection logic
&lt;/h3&gt;

&lt;p&gt;The encoder collapses an array when:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;All elements are objects&lt;/li&gt;
&lt;li&gt;They share an identical key set&lt;/li&gt;
&lt;li&gt;Order of keys is stable&lt;/li&gt;
&lt;li&gt;Null fields remain valid inline values&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Otherwise, TOON falls back to object-by-object expansion. No ambiguity or silent corruption.&lt;/p&gt;
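&lt;p&gt;The detection rules above amount to a small predicate. This sketch mirrors the list, though the reference encoder’s exact rules may differ in detail:&lt;/p&gt;

```typescript
// Sketch of the eligibility check: collapse an array into tabular form only
// when every element is an object with the same keys in the same order.
function canTabularize(arr: unknown[]): boolean {
  if (arr.length === 0) return false;
  const allObjects = arr.every(
    el => typeof el === "object" && el !== null && !Array.isArray(el)
  );
  if (!allObjects) return false;                         // rule 1: objects only
  const shape = Object.keys(arr[0] as object).join(",");
  // rules 2-3: identical key sets in a stable order
  return arr.every(el => Object.keys(el as object).join(",") === shape);
}
```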

&lt;h3&gt;
  
  
  Schema and Cardinality Propagated Into the Prompt
&lt;/h3&gt;

&lt;p&gt;JSON implies structure. TOON exposes it. Models benefit from clearly defined boundaries.&lt;/p&gt;

&lt;p&gt;Two design choices matter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;[N]&lt;/code&gt; explicitly sets expected row count&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;{field1,field2,…}&lt;/code&gt; statically enforces column order&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These guide extraction tasks in a way punctuation cannot. A model that invents an extra row contradicts the declared cardinality. A misplaced field becomes visibly misaligned.&lt;/p&gt;

&lt;p&gt;This reduces hallucination in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Table reconstruction&lt;/li&gt;
&lt;li&gt;RAG answer grounding&lt;/li&gt;
&lt;li&gt;Tool responses requiring valid JSON output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Benchmarks show improvements in exact match metrics and fewer malformed outputs when LLMs decode TOON vs JSON.&lt;/p&gt;
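&lt;p&gt;The declared cardinality is also easy to check programmatically. Here is a sketch of a validator for the tabular header shown earlier; the header grammar is simplified for illustration:&lt;/p&gt;

```typescript
// Sketch: use the declared [N] to catch a model that invents or drops rows.
// The header grammar here is simplified from the examples above.
function checkCardinality(toon: string): boolean {
  const lines = toon.trim().split("\n");
  const header = /^\w+\[(\d+)\]\{[^}]*\}:$/.exec(lines[0]);
  if (!header) return false;
  return lines.length - 1 === Number(header[1]); // data rows vs declared count
}
```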

&lt;h3&gt;
  
  
  Optimized for Tokenizers Rather Than Parsers
&lt;/h3&gt;

&lt;p&gt;BPE and unigram tokenizers do not treat structural characters atomically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quotes often tokenize as &lt;code&gt;"&lt;/code&gt;, plus the first 1–2 characters of the key&lt;/li&gt;
&lt;li&gt;Braces become unique token fragments not reused elsewhere&lt;/li&gt;
&lt;li&gt;Repeated key names are repeatedly segmented across the prompt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TOON leverages linguistic token merging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alphanumeric keys tend to map to single tokens&lt;/li&gt;
&lt;li&gt;Indentation and line breaks fall into low-cost whitespace categories&lt;/li&gt;
&lt;li&gt;CSV-like patterns trigger high tokenizer reuse&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example token comparison for a 100-row table:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JSON minified:&lt;/strong&gt; ~2,540 tokens&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TOON equivalent:&lt;/strong&gt; ~1,020 tokens&lt;/p&gt;

&lt;p&gt;Same semantics, radically different tokenization behavior.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deterministic Round-Trip and Streaming Support
&lt;/h3&gt;

&lt;p&gt;The encoder is a pure transformation layer. It does not compress or interpret values. Decoding restores the original JSON, up to whitespace, number formatting, and optional quoting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp41ozj6sld1m9z1rcxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp41ozj6sld1m9z1rcxb.png" alt="Image4" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two primary APIs matter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;encodeLines&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@toon-format/toon&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;      &lt;span class="c1"&gt;// Buffer → TOON text&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;       &lt;span class="c1"&gt;// TOON text → JSON structure&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nf"&gt;encodeLines&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;largeData&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Suitable for incremental context injection in RAG&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Large structured payloads can stream without materializing entire documents in memory. This benefits contexts where prompts change on the fly, such as agent pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Designed to Fail Loudly, Not Silently
&lt;/h3&gt;

&lt;p&gt;Model-generated JSON can drift in ways that are easy to miss: missing commas, out-of-order fields, and trailing structure. Output sometimes looks correct while being semantically broken JSON.&lt;/p&gt;

&lt;p&gt;TOON’s strict format makes deviations more observable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Misindentation breaks structural parse&lt;/li&gt;
&lt;li&gt;Mismatched row counts surface immediately&lt;/li&gt;
&lt;li&gt;Field order mismatch is an error, not a tolerated reordering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of debugging the LLM, the format itself catches the drift.&lt;/p&gt;
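&lt;p&gt;As a minimal sketch of that fail-loudly property, validating a tabular block against its own header needs nothing more than string checks (the helper below is illustrative, not part of the TOON library):&lt;/p&gt;

```python
import re

def check_toon_table(text):
    """Validate a TOON-style tabular block against its own header.

    Illustrative helper, not part of the TOON library: the declared
    row count and field list make drift detectable with string checks.
    """
    lines = text.splitlines()
    m = re.match(r"^\w+\[(\d+)\]\{([^}]*)\}:$", lines[0])
    if not m:
        raise ValueError("malformed header")
    declared, n_fields = int(m.group(1)), len(m.group(2).split(","))
    rows = lines[1:]
    if len(rows) != declared:
        raise ValueError(f"declared {declared} rows, found {len(rows)}")
    for i, row in enumerate(rows, start=1):
        if len(row.split(",")) != n_fields:
            raise ValueError(f"row {i} has the wrong field count")

check_toon_table("users[2]{id,name}:\n  1,Ada\n  2,Bo")  # passes silently
```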

&lt;h3&gt;
  
  
  Why These Choices Matter
&lt;/h3&gt;

&lt;p&gt;LLMs are probability engines, not parsers. They work best when the signal is strong and the requirements are explicit. TOON’s encoding strategy reduces the number of possible interpretations at every structural boundary, while reducing the token cost at the same time.&lt;/p&gt;

&lt;p&gt;It is not a new data model. It is simply a more model-literate representation of the one we already use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmarks From Real Data
&lt;/h2&gt;

&lt;p&gt;The most honest way to judge a data format is to see how it performs when real pipelines and real models are involved. The TOON benchmark suite focuses on everyday workloads that developers already push into prompts: employee directories, order histories, analytics logs, configuration objects, and nested product catalogs. &lt;/p&gt;

&lt;p&gt;There are 209 structured extraction tasks in total. Testing covers four current model families: GPT-5 Nano, Gemini Flash, Claude Haiku, and Grok 4. Token counts are measured using the o200k_base tokenizer, so the results match real billing.&lt;/p&gt;

&lt;p&gt;Here is the average outcome across mixed data shapes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Format&lt;/th&gt;
&lt;th&gt;Accuracy&lt;/th&gt;
&lt;th&gt;Tokens&lt;/th&gt;
&lt;th&gt;Score*&lt;/th&gt;
&lt;th&gt;Savings vs JSON&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;TOON&lt;/td&gt;
&lt;td&gt;73.9%&lt;/td&gt;
&lt;td&gt;2,744&lt;/td&gt;
&lt;td&gt;26.9&lt;/td&gt;
&lt;td&gt;39.6% fewer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JSON compact&lt;/td&gt;
&lt;td&gt;70.7%&lt;/td&gt;
&lt;td&gt;3,081&lt;/td&gt;
&lt;td&gt;22.9&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;YAML&lt;/td&gt;
&lt;td&gt;69.0%&lt;/td&gt;
&lt;td&gt;3,719&lt;/td&gt;
&lt;td&gt;18.6&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JSON&lt;/td&gt;
&lt;td&gt;69.7%&lt;/td&gt;
&lt;td&gt;4,545&lt;/td&gt;
&lt;td&gt;15.3&lt;/td&gt;
&lt;td&gt;baseline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;XML&lt;/td&gt;
&lt;td&gt;67.1%&lt;/td&gt;
&lt;td&gt;5,167&lt;/td&gt;
&lt;td&gt;13.0&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Score shows correct extractions per 1,000 input tokens. It is a direct value-for-cost metric.&lt;/p&gt;
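&lt;p&gt;The score column can be recomputed from the table itself, assuming it is accuracy (expressed as a percentage) per 1,000 input tokens:&lt;/p&gt;

```python
def score(accuracy_pct, tokens):
    # Correct extractions per 1,000 input tokens, with accuracy
    # expressed as a percentage (matching the table's Score column).
    return round(accuracy_pct * 1000 / tokens, 1)

print(score(73.9, 2744))  # 26.9 (TOON)
print(score(69.7, 4545))  # 15.3 (JSON baseline)
```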

&lt;p&gt;Uniform arrays show the biggest advantage. A 500-row e-commerce orders dataset that required 11,842 tokens in JSON needed only 4,617 tokens in TOON, a 61 percent reduction. At 1,000 GPT-4o prompts per day, that single workload saves roughly $1,740 every month.&lt;/p&gt;
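&lt;p&gt;The reduction is easy to verify; the dollar figure depends on the per-token price, which is an assumed rate here, not a quoted one:&lt;/p&gt;

```python
json_tokens, toon_tokens = 11_842, 4_617

reduction = 1 - toon_tokens / json_tokens
print(f"{reduction:.0%}")  # 61%

# Monthly saving at 1,000 prompts/day over 30 days. The per-million-token
# price below is an assumed figure for illustration, not a quoted rate.
price_per_million_usd = 8.0
saved_tokens_per_month = (json_tokens - toon_tokens) * 1000 * 30
print(round(saved_tokens_per_month / 1e6 * price_per_million_usd))  # ~1734
```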

&lt;p&gt;Accuracy improves as well. GPT-5 Nano reconstruction tests rose from 92.5 percent to 99.4 percent. The explicit field alignment and declared row counts help the model avoid dropped or invented entries. Nothing about the underlying information changes. The model simply has less noise to interpret and more room in the context window for data that matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Teams Use TOON in Production
&lt;/h2&gt;

&lt;p&gt;Adopting TOON rarely requires major changes. JSON remains the source of truth in databases and services. The only difference is that data is converted to TOON at the moment it becomes model input. This removes the token overhead that appears only in prompts, not in storage or APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceyn4b19n8lymrn9ufjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceyn4b19n8lymrn9ufjg.png" alt="Image5" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A typical retrieval augmented workflow looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;toon&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;encode&lt;/span&gt;
&lt;span class="n"&gt;records&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch_customers&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Answer using this context:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;records&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model reads TOON as structured text without special instruction. If the response needs to return to typed objects, the same library converts it back into JSON. This keeps the rest of the stack untouched.&lt;/p&gt;

&lt;p&gt;Agent systems also gain stability. When a tool returns a list of results, TOON’s explicit row counts and column order help the model avoid misalignment errors that would otherwise break the next step in the loop.&lt;/p&gt;

&lt;p&gt;Streaming pipelines benefit, too. Because TOON is line-oriented, prompts can be built incrementally without waiting for closing braces or brackets. The result is faster handoffs from retrieval to inference.&lt;/p&gt;

&lt;h2&gt;
  
  
  When TOON Helps and When JSON Still Makes Sense
&lt;/h2&gt;

&lt;p&gt;TOON shows its strengths when models read large collections of records. In those prompts, much of the length comes from formatting rather than data. Removing that formatting gives the model the same information in a smaller space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuwrhwqjfmdbjn9a4f25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuwrhwqjfmdbjn9a4f25.png" alt="Image6" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some data does not benefit in the same way. Complex, irregular objects leave little structure that can be simplified, so token totals remain close to JSON. And outside of prompts, JSON continues to be a dependable standard for storage, APIs, and logging where token costs do not apply.&lt;/p&gt;

&lt;p&gt;The right approach is to test with your own payloads. Measure how many tokens the model actually sees and how reliably it can reconstruct results. TOON is most helpful where structure repeats predictably and cost pressure is high.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Formats usually reflect the problems they were built to solve. JSON was created when the goal was to move data between browsers and servers with as little friction as possible. Its punctuation and repetition are part of that success story.&lt;/p&gt;

&lt;p&gt;When that same format is aimed at a language model, the context changes. Models treat every character as a unit of computation, and punctuation becomes something they must process before they can reason about the information it describes. The result is more tokens consumed and less room for the details that matter.&lt;/p&gt;

&lt;p&gt;TOON takes the data we already rely on and presents it in a way that models can read with less effort. It removes structure that exists only for traditional parsers while keeping the meaning intact. That difference shows up quickly in token use, in latency, and in the accuracy of structured extraction.&lt;/p&gt;

&lt;p&gt;Better results without changing the data itself. That is the practical opportunity now in front of developers.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Gemini 3 is Now Available as an OCR Model in Tensorlake</title>
      <dc:creator>Diptanu Gon Choudhury</dc:creator>
      <pubDate>Mon, 08 Dec 2025 05:57:26 +0000</pubDate>
      <link>https://forem.com/tensorlake/gemini-3-is-now-available-as-an-ocr-model-in-tensorlake-4kfh</link>
      <guid>https://forem.com/tensorlake/gemini-3-is-now-available-as-an-ocr-model-in-tensorlake-4kfh</guid>
      <description>&lt;h2&gt;
  
  
  Gemini 3 is now available within Tensorlake
&lt;/h2&gt;

&lt;p&gt;Google’s Gemini models have been great at document parsing since 2.5 Flash, and the latest Gemini 3 pushes the envelope even further. It has the lowest edit distance (0.115) on OmniDocBench, compared to GPT-5.1 (0.147) and Claude Sonnet 4.5.&lt;/p&gt;

&lt;p&gt;Starting today, you can use Gemini as an OCR engine with Tensorlake’s Document Ingestion API. You can ingest documents in bulk and convert them into Markdown, classify pages, or extract structured data using a JSON schema. Tensorlake takes care of queuing, working with rate limits, and sending you webhooks as documents are processed.&lt;/p&gt;

&lt;p&gt;We put Gemini 3 to the test inside Tensorlake, and the results on "hostile" document layouts were immediate.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/F-xH1Sd0lNY"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  Case Study 1: Table Structure Recognition
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Document:&lt;/strong&gt; Google 2024 Environmental Report&lt;/p&gt;

&lt;p&gt;Financial and scientific reports use visual cues, like indentation, floating columns, and symbols, to convey meaning. To test this, we fed the complex "Water Use" table from the Appendix into Gemini 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsbowwtpq1flz0wbvt9i.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsbowwtpq1flz0wbvt9i.webp" alt="Google environment report"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;The table is only partially ruled: lines separate some of the rows, while the columns have no boundaries, and the column on the right is disconnected from the main block.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Gemini 3 Result: Visual Understanding
&lt;/h3&gt;

&lt;p&gt;Gemini 3 does a perfect job of understanding this table. This is a screenshot from the Tensorlake Cloud Dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsf42r5r3g4q9hafavqk1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsf42r5r3g4q9hafavqk1.webp" alt="Google environment result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Case Study 2: VQA + Structured Output
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Document:&lt;/strong&gt; House Floor Plans&lt;/p&gt;

&lt;p&gt;We wanted to test whether Gemini 3 could parse visual symbols on construction documents, so we fit it into Tensorlake’s Structured Extraction pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Input:&lt;/strong&gt; A raw PDF of a house plan and a Pydantic schema defining the exact fields we needed (e.g., kitchen_outlets: int, description: Number of standard and GFI electrical outlets, as noted by the legend icon labeled "outlet", that are found in the kitchen and dining nook.).&lt;/p&gt;
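&lt;p&gt;In JSON Schema terms (the form Tensorlake’s structured extraction accepts), a minimal sketch of that field looks like the following; only &lt;code&gt;kitchen_outlets&lt;/code&gt; is quoted above, and the surrounding structure is illustrative:&lt;/p&gt;

```python
# JSON Schema sketch of the extraction target. Only kitchen_outlets is
# quoted in the article; the surrounding structure is illustrative.
floor_plan_schema = {
    "type": "object",
    "properties": {
        "kitchen_outlets": {
            "type": "integer",
            "description": (
                "Number of standard and GFI electrical outlets, as noted "
                "by the legend icon labeled 'outlet', that are found in "
                "the kitchen and dining nook."
            ),
        },
    },
    "required": ["kitchen_outlets"],
}
```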

&lt;p&gt;For reference, here is the kitchen+dining nook area.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw7cl55ylwa4k1s5hg9q.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw7cl55ylwa4k1s5hg9q.webp" alt="Kitchen Dining diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The circle with two lines are the outlets, as per the legend on the same page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxco46ha1mdqno3h6ijv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxco46ha1mdqno3h6ijv.webp" alt="Kitchen dining legend"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;There is no text label saying "Outlet" on the diagram; the meaning is carried only by the symbol in the legend. The model must identify the specific circle-and-line icon defined in the legend, spatially constrain its search to the visual boundaries of the "Kitchen," and aggregate the count into our JSON structure.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Result
&lt;/h3&gt;

&lt;p&gt;Gemini 3 successfully understood the visual diagram. It returned a valid JSON object with 6 outlets, correctly distinguishing them from nearby data ports and switches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6khfvkhypkheayphxnkk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6khfvkhypkheayphxnkk.webp" alt="Kitchen dining result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tensorlake blends specialized OCR models and VLMs into a set of convenient APIs. While you could call the Gemini API directly, you would be rebuilding many undifferentiated aspects of a production pipeline. Gemini 3 is now fully integrated with Tensorlake DocAI APIs to read, classify, and extract information from documents.&lt;/p&gt;

&lt;p&gt;Tensorlake solves the two biggest headaches of building Document Ingestion APIs using VLMs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bulk Ingestion &amp;amp; Rate Limits:&lt;/strong&gt; From our observation, Gemini 3 doesn’t handle spiky traffic very well. Throwing 10,000 documents at it will trigger errors due to strict quotas. Tensorlake manages the queue, handling back-off and retries automatically so you can ingest massive datasets without hitting 429 errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Chunking Large Files:&lt;/strong&gt; Tensorlake automatically splits large documents into 25-page chunks to make sure Gemini can extract even the densest pages, and ensures the 64k output token limit is not exceeded.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
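&lt;p&gt;The chunking step can be pictured as a simple page-range split (an illustrative sketch, not Tensorlake’s internal implementation):&lt;/p&gt;

```python
def page_chunks(total_pages, chunk_size=25):
    """Split a document's 1-indexed page range into fixed-size chunks.

    Illustrative sketch of the 25-page split described above; the real
    chunking is handled inside the Tensorlake service.
    """
    return [
        (start, min(start + chunk_size - 1, total_pages))
        for start in range(1, total_pages + 1, chunk_size)
    ]

print(page_chunks(60))  # [(1, 25), (26, 50), (51, 60)]
```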

&lt;h2&gt;
  
  
  When to use (and NOT use) Gemini 3
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Gemini 3 when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complex Visual Reasoning is required:&lt;/strong&gt; You need to correlate a chart's color legend to a data table, or count symbols on a blueprint (as shown in the house plan example).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Do NOT use Gemini 3 when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You need bounding boxes for citation:&lt;/strong&gt; Gemini 3 does not perform layout detection of objects in documents, so it cannot provide the strict bounding boxes needed to highlight exactly where a specific paragraph or number came from.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You need strict text style and font detection:&lt;/strong&gt; Visual nuances like strikethroughs, underlines, or specific font colors are often ignored by VLMs, which focus on the "content" rather than the style.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For these tasks, you should use one of Tensorlake’s specialized models, like Model03.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use Gemini 3 with Tensorlake
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Playground
&lt;/h3&gt;

&lt;p&gt;Gemini 3 is available today in the Tensorlake Playground for experimentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn5ocrfltuhczgjkc99b.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn5ocrfltuhczgjkc99b.webp" alt="Playground settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you can select it with our HTTP API or SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorlake.documentai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DocumentAI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ParsingOptions&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DocumentAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;parse_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;file_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://tlake.link/docs/real-estate-agreement&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;parsing_options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ParsingOptions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;ocr_model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;result&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parse_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Document Ingestion has a lot of edge cases. We want our users to always have access to state-of-the-art models so that they can solve their use cases quickly, adjusting various aspects of their OCR pipelines with minimal code changes.&lt;/p&gt;

&lt;p&gt;We will add more Foundation Models as OCR model options in Tensorlake’s Document Ingestion API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tlake.link/cloud" rel="noopener noreferrer"&gt;Try Tensorlake free&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to discuss your specific use case?&lt;br&gt;
&lt;a href="https://tlake.link/chat" rel="noopener noreferrer"&gt;Schedule a technical demo&lt;/a&gt; with our team.&lt;/p&gt;

&lt;p&gt;Questions about the benchmark?&lt;br&gt;
&lt;a href="https://tlake.link/slack" rel="noopener noreferrer"&gt;Join our Slack community&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__3622204"&gt;
    &lt;a href="/diptanu_gonchoudhury_23e" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3622204%2F6e0be19c-56e6-49c4-a63f-8ab8cda7fe93.jpg" alt="diptanu_gonchoudhury_23e image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/diptanu_gonchoudhury_23e"&gt;Diptanu Gon Choudhury&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/diptanu_gonchoudhury_23e"&gt;/diptanu_gonchoudhury_23e&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





</description>
      <category>ai</category>
      <category>programming</category>
      <category>rag</category>
    </item>
    <item>
      <title>Benchmarking the Most Reliable Document Parsing API</title>
      <dc:creator>Sarah Guthals, PhD</dc:creator>
      <pubDate>Wed, 05 Nov 2025 17:37:05 +0000</pubDate>
      <link>https://forem.com/tensorlake/benchmarking-the-most-reliable-document-parsing-api-1mln</link>
      <guid>https://forem.com/tensorlake/benchmarking-the-most-reliable-document-parsing-api-1mln</guid>
      <description>&lt;p&gt;Document parsing is the foundation of enterprise AI applications. Whether you're building RAG pipelines, automating insurance claims, or extracting data from financial reports, everything starts with one question: &lt;strong&gt;Can you consistently transform messy, real-world documents into structured, machine-readable data?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our customers need the best document ingestion API for their use cases. They're comparing Azure, AWS Textract, and popular open-source models like Docling and Marker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We built a benchmark that measures what matters: Can downstream systems actually use this output?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Measuring What Actually Matters
&lt;/h2&gt;

&lt;p&gt;Tensorlake both reads documents and extracts structured data, so when choosing what to measure accuracy with, we wanted to ensure we were measuring both document parsing with structural preservation and structured extraction for downstream usability.&lt;/p&gt;

&lt;p&gt;The aspects of Document Parsing that we wanted to measure were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tables:&lt;/strong&gt; Ensuring we can parse and measure accuracy of complex tables with merged cells and multi-row headers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reading Order:&lt;/strong&gt; In multi-column documents, and documents with complex layouts, we measure whether the reading order is preserved while parsing. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured Extraction Accuracy:&lt;/strong&gt; Measuring direct downstream usability of extracted data. A small OCR error in parsing a table cell can cause failure in achieving the downstream task, while the overall accuracy of the OCR on the document may be high.&lt;/li&gt;
&lt;li&gt;Extraction of footnotes, formulas, figures and other non-textual content.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Our Evaluation Methodology
&lt;/h2&gt;

&lt;p&gt;We employ two metrics that better capture these features with real-world reliability:&lt;/p&gt;

&lt;h3&gt;
  
  
  TEDS (Tree Edit Distance Similarity)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Compares predicted and ground-truth Markdown/HTML tree structures&lt;/li&gt;
&lt;li&gt;Captures structural fidelity in tables and complex layouts&lt;/li&gt;
&lt;li&gt;Widely adopted in OCRBench v2 and OmniDocBench evaluations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Measures whether the document's logical structure and textual alignment remains intact&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TEDS answers: "Is this table still a table?" Not just "Is the text similar?"&lt;/p&gt;

&lt;h3&gt;
  
  
  JSON F1 (Field-Level Precision and Recall)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Compares extracted JSON against schema-based ground truth&lt;/li&gt;
&lt;li&gt;Precision measures correctness of extracted fields&lt;/li&gt;
&lt;li&gt;Recall measures completeness of required field capture&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;F1 score balances both for overall reliability assessment&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;JSON F1 answers: "Can downstream automation actually use this data?" Not just "Is some text present?"&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Together, these metrics answer the essential question: &lt;strong&gt;"Can downstream systems use this output?"&lt;/strong&gt; rather than simply "Is the text similar?"&lt;/p&gt;
&lt;/blockquote&gt;
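&lt;p&gt;A simplified sketch of the field-level computation (real evaluation also handles nested fields and value normalization, but the metric has this shape):&lt;/p&gt;

```python
def json_f1(predicted, gold):
    """Field-level precision, recall, and F1 over flat key-value pairs.

    Simplified sketch: real evaluation also handles nested fields and
    value normalization, but the metric has this shape.
    """
    pred_items = set(predicted.items())
    gold_items = set(gold.items())
    tp = len(pred_items.intersection(gold_items))
    precision = tp / len(pred_items) if pred_items else 0.0
    recall = tp / len(gold_items) if gold_items else 0.0
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

# One correct field, one wrong value, one missing field:
print(json_f1(
    {"total": "42", "date": "2024-01-01"},
    {"total": "42", "date": "2024-01-02", "vendor": "Acme"},
))
```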

&lt;p&gt;&lt;strong&gt;Stage 1: Document Reading Ability (OCR and Structural Preservation)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Each parsing model generates Markdown/HTML output. We evaluate using TEDS to measure how well structure is preserved: reading order, table integrity, and layout coherence. You can find our &lt;a href="https://tlake.link/benchmark-dataset" rel="noopener noreferrer"&gt;updated dataset published here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We use the public OCRBench v2 and OmniDocBench datasets. However, upon review, we identified inconsistencies in the published ground truth of OCRBench v2. We conducted a comprehensive audit and correction to ensure evaluation accuracy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 2: Structured Extraction Accuracy (Downstream Usability)&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;We pass the Markdown through a standardized LLM (GPT-4o) with predefined JSON schemas, measuring JSON F1. This isolates how OCR quality impacts real extraction workflows, where an LLM interprets the parsed text.&lt;/p&gt;

&lt;p&gt;Initial JSON schemas and reference answers are generated using Gemini Pro 2.5, then human reviewers audit and correct them to ensure high-quality gold standards.&lt;/p&gt;

&lt;p&gt;This methodology ensures fair, reproducible comparisons by varying only the OCR models (Stage 1) while keeping the extraction model constant (Stage 2).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Results: Public Dataset Performance
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Document Parsing Performance
&lt;/h3&gt;

&lt;p&gt;We evaluated leading open-source and proprietary models:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2iklmdxawivl0t9hebc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2iklmdxawivl0t9hebc.png" alt="Table showing table parsing accuracy on OmniDocBench dataset. Five models compared with TEDS scores and TEDS-Structure only scores: Docling (63.84%, 77.68%), Marker (57.88%, 71.17%), Azure (78.14%, 83.61%), Textract (80.75%, 88.78%), and Tensorlake highlighted in green (86.79%, 90.62%). Tensorlake achieves the highest scores in both categories." width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Findings:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tensorlake achieves the highest TEDS score, indicating superior structural preservation&lt;/li&gt;
&lt;li&gt;The gap between Docling and production-grade systems is substantial&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Table Parsing Performance
&lt;/h3&gt;

&lt;p&gt;We evaluated Tensorlake’s table parsing accuracy using the OmniDocBench dataset — a CVPR-accepted benchmark for comprehensive document understanding tasks (&lt;a href="https://github.com/opendatalab/OmniDocBench" rel="noopener noreferrer"&gt;GitHub link&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Table accuracy in OmniDocBench is quantified using a combination of tree-based and string-based metrics. In particular, we measured TEDS (Tree Edit Distance Similarity), which assesses both the structural and textual alignment between predicted and ground-truth HTML tables.&lt;/p&gt;
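&lt;p&gt;To make the metric concrete, here is a toy TEDS computation on small labeled ordered trees. This is an illustrative sketch with unit edit costs, not the official OmniDocBench implementation, which parses full HTML tables and applies specialized node costs.&lt;/p&gt;

```python
from functools import lru_cache

# Toy TEDS: an ordered-tree edit distance normalized by tree size.
# A tree is (label, (child, child, ...)); all edit operations cost 1.

def size(tree):
    label, children = tree
    return 1 + sum(size(c) for c in children)

@lru_cache(maxsize=None)
def forest_dist(f1, f2):
    """Edit distance between two ordered forests (tuples of trees)."""
    if not f1 and not f2:
        return 0
    if not f1:
        return sum(size(t) for t in f2)
    if not f2:
        return sum(size(t) for t in f1)
    (l1, c1), (l2, c2) = f1[-1], f2[-1]
    return min(
        forest_dist(f1[:-1] + c1, f2) + 1,   # delete rightmost root of f1
        forest_dist(f1, f2[:-1] + c2) + 1,   # insert rightmost root of f2
        forest_dist(f1[:-1], f2[:-1])        # match the two rightmost trees
        + forest_dist(c1, c2) + (l1 != l2),
    )

def teds(t1, t2):
    """Tree Edit Distance Similarity: 1 - TED / max tree size."""
    return 1 - forest_dist((t1,), (t2,)) / max(size(t1), size(t2))

row = ("tr", (("td", ()),))
two_rows = ("table", (row, row))
one_row = ("table", (row,))
# Dropping a row deletes 2 of 5 nodes: TEDS = 1 - 2/5 = 0.6
```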

&lt;p&gt;To reproduce our results, generate Markdown outputs with the models listed below, then run the evaluation method provided in the OmniDocBench repository. We used 512 document images containing tables and v1.5 of the evaluation code. Evaluation outputs are released on Hugging Face (&lt;a href="https://huggingface.co/datasets/tensorlake/OmniDocBench-eval-outputs" rel="noopener noreferrer"&gt;link&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6byj9xutcea6ndczncau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6byj9xutcea6ndczncau.png" alt="Bar chart showing Table Parsing Task performance on OmniDocBench dataset measured by TEDS (Tree Edit Distance Similarity) score, where higher is better. Five models compared from left to right: Marker (57.88%), Docling (63.84%), Azure (78.14%), Textract (80.75%), and Tensorlake highlighted in green (86.79%). Tensorlake achieves the highest TEDS score, outperforming the next best competitor (Textract) by approximately 6 percentage points and leading open-source alternatives by over 20 percentage points." width="800" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;¹ &lt;em&gt;Marker's number is taken from the officially published OmniDocBench repository.&lt;/em&gt;  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Findings:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On OmniDocBench's challenging tables, Tensorlake leads with 86.79% TEDS&lt;/li&gt;
&lt;li&gt;Open-source solutions struggle with table extraction (sub-70% TEDS)&lt;/li&gt;
&lt;li&gt;Tensorlake maintains table structure even on complex, multi-page tables&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Performance on Real World Enterprise Documents
&lt;/h2&gt;

&lt;p&gt;OCR models are rarely trained on enterprise documents, because such documents are not publicly available. We wanted to test how well our model and others perform on them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enterprise Document Performance (100 pages)
&lt;/h3&gt;

&lt;p&gt;We curated 100 document pages spanning banking, retail, and insurance sectors. This represents real production workloads: invoices with water damage, scanned contracts with skewed text, bank statements with multi-level tables.&lt;/p&gt;
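&lt;p&gt;The F1 scores reported here are field-level. As a rough sketch of the mechanics (assuming flat key-value JSON; the real evaluation also handles nested schemas and value normalization), a field counts as a true positive only when its extracted value matches the gold value:&lt;/p&gt;

```python
# Minimal sketch of field-level F1 for structured extraction,
# assuming flat key-value JSON. Precision is taken over predicted
# fields, recall over gold fields.

def field_f1(predicted: dict, gold: dict) -> float:
    """F1 over (field, value) pairs with exact value matching."""
    if not predicted or not gold:
        return 0.0
    true_pos = sum(1 for k, v in predicted.items() if gold.get(k) == v)
    precision = true_pos / len(predicted)
    recall = true_pos / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```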

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhoytm5zaibro9wadqpd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhoytm5zaibro9wadqpd8.png" alt="Bar chart showing Enterprise Document JSON Accuracy F1 scores. Six models compared: Docling (68.90%), Marker (83.30%), Azure (88.10%), Textract (88.40%), Gemini (89.00%), and Tensorlake highlighted in green (91.70%). Tensorlake achieves the highest accuracy, with approximately 5 more correctly extracted fields per 20 documents compared to the next best competitor." width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Findings:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tensorlake achieves 91.7% F1 with standard extraction, beating all competitors&lt;/li&gt;
&lt;li&gt;The difference between 91.7% and 68.9% F1 is massive: it’s roughly &lt;strong&gt;5 extra&lt;/strong&gt; fields correctly extracted out of every 20&lt;/li&gt;
&lt;li&gt;In production workflows processing thousands of documents daily, this accuracy gap compounds into significant error reduction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But even among the models with higher F1 scores, parsing a standard form shows the gap: Azure and Textract jumble the reading order and skip data entirely, whereas Tensorlake preserves the complex reading order and groups data correctly and accurately:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyusjypydmgusuwc2ivr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyusjypydmgusuwc2ivr.png" alt="Comparison showing how different document parsing APIs handle a contract notice section. Original document at top shows buyer and seller information with addresses, phone numbers, and email addresses. Three parsed outputs below demonstrate failures: Textract (labeled with coral background) shows jumbled addresses and missing buyer information; Azure (labeled with blue background) shows jumbled addresses and missing parenthesis; Tensorlake (labeled with green background) preserves complex reading order with no missing data and accurate information. Key differences highlighted: competitors lose structure and omit critical fields, while Tensorlake maintains logical reading order and captures all information correctly." width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Delivering the Best Performance/Price Ratio
&lt;/h2&gt;

&lt;p&gt;Accuracy without affordability isn't practical. Here's how Tensorlake compares to other Document Ingestion APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.tensorlake.ai/pricing" rel="noopener noreferrer"&gt;Tensorlake&lt;/a&gt;: $10 per 1k pages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TEDS Score: &lt;strong&gt;86.79&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;F1 Score: &lt;strong&gt;91.7&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/pricing/details/ai-document-intelligence/" rel="noopener noreferrer"&gt;Azure&lt;/a&gt;: $10 per 1k pages&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TEDS Score: 78.14&lt;/li&gt;
&lt;li&gt;F1 Score: 88.1&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/textract/pricing/" rel="noopener noreferrer"&gt;AWS Textract&lt;/a&gt;: $15 per 1k pages&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TEDS Score: 80.75&lt;/li&gt;
&lt;li&gt;F1 Score: 88.4&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tensorlake delivers higher accuracy than both Azure and AWS Textract, matching Azure's cost, while AWS Textract is 50% more expensive.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Take the Next Step
&lt;/h2&gt;

&lt;p&gt;When your business depends on accurate document processing, you can't afford to use anything less.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tlake.link/cloud" rel="noopener noreferrer"&gt;Try Tensorlake free&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to discuss your specific use case?  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://tlake.link/chat" rel="noopener noreferrer"&gt;Schedule a technical demo&lt;/a&gt; with our team.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Questions about the benchmark?  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://tlake.link/slack" rel="noopener noreferrer"&gt;Join our Slack community&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>api</category>
      <category>rag</category>
      <category>ai</category>
      <category>performance</category>
    </item>
    <item>
      <title>New: Vision Language Models for Document Processing</title>
      <dc:creator>Sarah Guthals, PhD</dc:creator>
      <pubDate>Thu, 16 Oct 2025 20:52:08 +0000</pubDate>
      <link>https://forem.com/tensorlake/new-vision-language-models-for-document-processing-3fdm</link>
      <guid>https://forem.com/tensorlake/new-vision-language-models-for-document-processing-3fdm</guid>
      <description>&lt;p&gt;We've expanded our use of Vision Language Models (VLMs) across multiple DocumentAI features for faster and more accurate document processing on documents with hundreds of pages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Page Classification&lt;/strong&gt;: Identify relevant pages in large documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Figure and Table Summarization&lt;/strong&gt;: Extract insights from visual elements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured Extraction (with &lt;code&gt;skip_ocr&lt;/code&gt;)&lt;/strong&gt;: Direct visual understanding for more accurate extraction on harder-to-parse documents (e.g., scanned documents, engineering diagrams, or documents with complex reading order)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post focuses on our enhanced page classification capabilities for demonstration. With VLM support, you can quickly process large documents by identifying and extracting from only relevant pages.&lt;/p&gt;

&lt;p&gt;Try it in this &lt;a href="https://tlake.link/notebooks/vlm-parsing" rel="noopener noreferrer"&gt;Colab Notebook&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Improvements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Scale &amp;amp; Performance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Handle Large Documents&lt;/strong&gt;: Classify documents with hundreds of pages without performance degradation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VLM-Powered Classification&lt;/strong&gt;: Replaced OCR with Vision Language Models for faster, more accurate classification&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Selective Processing&lt;/strong&gt;: Only parse pages that matter, reducing processing time and costs&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Recommended Workflow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Classify First&lt;/strong&gt;: Use the &lt;code&gt;classify&lt;/code&gt; endpoint to identify relevant pages based on your criteria&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parse Selectively&lt;/strong&gt;: Set &lt;code&gt;page_range&lt;/code&gt; to only process the classified relevant pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extract Efficiently&lt;/strong&gt;: Apply structured extraction only to pages containing the information you need&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Use Case Example: SEC Filings Analysis
&lt;/h2&gt;

&lt;p&gt;This approach is particularly powerful for extracting specific information from lengthy documents like SEC filings. For example, when analyzing cryptocurrency holdings across multiple companies' 10-K and 10-Q reports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Challenge&lt;/strong&gt;: Each filing can be 100-200+ pages, but crypto-related information might only appear on 10-20 pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: First classify pages containing "digital assets holdings", then extract structured data only from those pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result&lt;/strong&gt;: 80-90% reduction in processing time and more focused, accurate extractions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Code Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensorlake.documentai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DocumentAI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;PageClassConfig&lt;/span&gt;

&lt;span class="n"&gt;doc_ai&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DocumentAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Step 1: Classify pages
&lt;/span&gt;&lt;span class="n"&gt;page_classifications&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nc"&gt;PageClassConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;digital_assets_holdings&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Pages showing cryptocurrency holdings on balance sheet...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;parse_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc_ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;classify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;file_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;filing_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;page_classifications&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;page_classifications&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc_ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait_for_completion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parse_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;parse_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 2: Parse only relevant pages
&lt;/span&gt;&lt;span class="n"&gt;relevant_pages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_classes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;page_numbers&lt;/span&gt;
&lt;span class="n"&gt;page_range&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;relevant_pages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;final_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc_ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse_and_wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;filing_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;page_range&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;page_range&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;structured_extraction_options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[...]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost Efficiency&lt;/strong&gt;: Process only what you need&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt;: Reduce processing time by focusing on relevant content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy&lt;/strong&gt;: VLM classification provides better understanding of page content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Handle large document sets without compromising performance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;Check out our &lt;a href="https://tlake.link/notebooks/vlm-parsing" rel="noopener noreferrer"&gt;example notebook&lt;/a&gt; demonstrating how to extract cryptocurrency metrics from SEC filings using the new classification approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Update to the latest version of Tensorlake:&lt;br&gt;
&lt;code&gt;pip install --upgrade tensorlake&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then start classifying, summarizing, and extracting with improved efficiency!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
