<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: John</title>
    <description>The latest articles on Forem by John (@johns23424234324234).</description>
    <link>https://forem.com/johns23424234324234</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3804725%2F1a454157-2ebf-4e8a-a57a-f58d812f1cdc.png</url>
      <title>Forem: John</title>
      <link>https://forem.com/johns23424234324234</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/johns23424234324234"/>
    <language>en</language>
    <item>
      <title>Before I blame the model, I check the token trail</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 23:00:26 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/before-i-blame-the-model-i-check-the-token-trail-5f9p</link>
      <guid>https://forem.com/johns23424234324234/before-i-blame-the-model-i-check-the-token-trail-5f9p</guid>
      <description>&lt;h1&gt;Before I blame the model, I check the token trail&lt;/h1&gt;

&lt;p&gt;One thing I have learned from building with LLMs every day is that a bad session usually does not start with a bad answer.&lt;/p&gt;

&lt;p&gt;It starts with drift.&lt;/p&gt;

&lt;p&gt;A little too much old context.&lt;br&gt;
A little too much tool output carried forward.&lt;br&gt;
A little too much reluctance to restart a chat that is already getting muddy.&lt;/p&gt;

&lt;p&gt;When that happens, I usually feel the workflow get worse before I notice the cost.&lt;/p&gt;

&lt;p&gt;The model feels slower.&lt;br&gt;
The answers get less sharp.&lt;br&gt;
I start rewriting prompts that were not really the problem.&lt;/p&gt;

&lt;p&gt;For a while I treated this as a prompting problem.&lt;br&gt;
Then I realized I was missing a live signal.&lt;/p&gt;

&lt;p&gt;Most token dashboards are useful after the fact. They tell you what happened once the session is already over.&lt;br&gt;
That is helpful for reporting, but not for changing behavior.&lt;/p&gt;

&lt;p&gt;I wanted something visible while I was actually working.&lt;br&gt;
So I built &lt;strong&gt;TokenBar&lt;/strong&gt;, a macOS menu bar app that shows live token usage during LLM sessions.&lt;/p&gt;

&lt;p&gt;That one change made a few habits much clearer for me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;restart chats sooner when context starts dragging&lt;/li&gt;
&lt;li&gt;summarize instead of carrying full tool traces forward&lt;/li&gt;
&lt;li&gt;stay on smaller models longer when the task does not need more&lt;/li&gt;
&lt;li&gt;notice when a workflow is getting sloppy before the bill shows up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is not a magic optimizer.&lt;br&gt;
It just makes token usage hard to ignore in the moment.&lt;/p&gt;

&lt;p&gt;That has been more useful for me than any postmortem chart.&lt;br&gt;
Because by the time I am looking at a chart, the messy workflow already happened.&lt;/p&gt;

&lt;p&gt;If you are building with AI all day, I think live visibility changes behavior faster than after-the-fact spend reports.&lt;/p&gt;

&lt;p&gt;TokenBar is here if you want to try it: &lt;a href="https://tokenbar.site/" rel="noopener noreferrer"&gt;https://tokenbar.site/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I built TokenBar after realizing prompt bloat is easier to ignore than fix</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 20:36:08 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/i-built-tokenbar-after-realizing-prompt-bloat-is-easier-to-ignore-than-fix-4a48</link>
      <guid>https://forem.com/johns23424234324234/i-built-tokenbar-after-realizing-prompt-bloat-is-easier-to-ignore-than-fix-4a48</guid>
      <description>&lt;p&gt;A lot of AI cost advice starts too late.&lt;/p&gt;

&lt;p&gt;People notice the bill.&lt;br&gt;
Then they start asking what went wrong.&lt;/p&gt;

&lt;p&gt;While building with AI tools every day, I kept running into a more annoying reality.&lt;br&gt;
The expensive part usually happened earlier, when nothing looked obviously broken.&lt;/p&gt;

&lt;p&gt;The prompt still worked.&lt;br&gt;
The response still came back.&lt;br&gt;
The tool still felt productive.&lt;/p&gt;

&lt;p&gt;But the context had quietly gotten fatter.&lt;br&gt;
The retries had started piling up.&lt;br&gt;
The lazy copy-paste habit had turned one reasonable workflow into a noisy, expensive one.&lt;/p&gt;

&lt;p&gt;That was the moment I started caring less about dashboards and more about live visibility.&lt;/p&gt;

&lt;h2&gt;Prompt bloat does not feel urgent in the moment&lt;/h2&gt;

&lt;p&gt;That is the trap.&lt;/p&gt;

&lt;p&gt;Bad AI spend rarely arrives like a dramatic production outage.&lt;br&gt;
It usually shows up as a hundred tiny decisions that all feel harmless:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keep the old context in case it helps&lt;/li&gt;
&lt;li&gt;paste one more block of docs&lt;/li&gt;
&lt;li&gt;retry without changing much&lt;/li&gt;
&lt;li&gt;switch to a bigger model because it is faster&lt;/li&gt;
&lt;li&gt;leave a long session running because cleaning it up feels annoying&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of those decisions feels serious on its own.&lt;br&gt;
Together, they create a workflow that gets slower, messier, and more expensive without sending a strong enough signal to stop.&lt;/p&gt;

&lt;h2&gt;Why I built TokenBar&lt;/h2&gt;

&lt;p&gt;I wanted live token visibility in the macOS menu bar.&lt;/p&gt;

&lt;p&gt;Not another full analytics ritual.&lt;br&gt;
Just a constant honest read on usage, reset windows, credits, and pace across the tools I actually use.&lt;/p&gt;

&lt;p&gt;The goal was simple:&lt;br&gt;
make it harder to stay blind while a workflow gets more expensive than it should be.&lt;/p&gt;

&lt;p&gt;If you want to check it out, TokenBar is here:&lt;br&gt;
&lt;a href="https://tokenbar.site/" rel="noopener noreferrer"&gt;https://tokenbar.site/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is $5 lifetime.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>showdev</category>
      <category>discuss</category>
    </item>
    <item>
      <title>The hardest food log is not the wrong one. It is the mixed one.</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 20:06:44 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/the-hardest-food-log-is-not-the-wrong-one-it-is-the-mixed-one-9a</link>
      <guid>https://forem.com/johns23424234324234/the-hardest-food-log-is-not-the-wrong-one-it-is-the-mixed-one-9a</guid>
      <description>&lt;p&gt;When people demo AI food logging apps, they usually test the easy meal.&lt;/p&gt;

&lt;p&gt;A clean plate.&lt;br&gt;
A simple label.&lt;br&gt;
A perfect photo.&lt;/p&gt;

&lt;p&gt;That is not where the real product gets judged.&lt;/p&gt;

&lt;p&gt;The real test is the messy meal with three things on one plate.&lt;br&gt;
The rice is obvious.&lt;br&gt;
The chicken is close.&lt;br&gt;
The sauce throws everything off.&lt;br&gt;
Now the user has to decide whether fixing it is worth the effort.&lt;/p&gt;

&lt;p&gt;That moment matters more than a polished demo.&lt;/p&gt;

&lt;p&gt;While building MetricSync, I stopped treating first-pass accuracy like the whole product.&lt;br&gt;
If the first estimate is a little off, the product still works if correction is fast.&lt;br&gt;
If correction is annoying, the meal is gone.&lt;/p&gt;

&lt;p&gt;That changed how I built it.&lt;/p&gt;

&lt;p&gt;MetricSync lets people log food by photo, barcode, or text because real life is inconsistent.&lt;br&gt;
Then the correction loop has to be quick, because AI will not nail every restaurant plate or mixed meal on the first shot.&lt;/p&gt;

&lt;p&gt;I also priced it at $5/month with a 3-day free trial.&lt;br&gt;
That should make it easier to try and easier to keep using than a heavy subscription, especially when other apps like CalAI cost more.&lt;/p&gt;

&lt;p&gt;The thing I care about most is not showing off one perfect estimate.&lt;br&gt;
It is helping someone keep logging when lunch is rushed, dinner is messy, and the first answer needs a quick fix.&lt;/p&gt;

&lt;p&gt;That feels a lot closer to how people actually eat.&lt;/p&gt;

&lt;p&gt;If you are building habit software, I think this is an underrated question:&lt;br&gt;
What happens right after the product is slightly wrong?&lt;/p&gt;

&lt;p&gt;That recovery moment is where trust usually gets won or lost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.metricsync.download/" rel="noopener noreferrer"&gt;https://www.metricsync.download/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>If an AI food log takes longer than 10 seconds, I lose the meal</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 16:06:07 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/if-an-ai-food-log-takes-longer-than-10-seconds-i-lose-the-meal-2eli</link>
      <guid>https://forem.com/johns23424234324234/if-an-ai-food-log-takes-longer-than-10-seconds-i-lose-the-meal-2eli</guid>
      <description>&lt;p&gt;I keep seeing AI nutrition apps talk about accuracy like users are sitting down to audit every plate.&lt;/p&gt;

&lt;p&gt;That is not how I eat, and it is not how I am building MetricSync.&lt;/p&gt;

&lt;p&gt;The real deadline is attention.&lt;/p&gt;

&lt;p&gt;If logging lunch takes more than about 10 seconds, I already know what happens next. I tell myself I will fix it later. Later usually means never.&lt;/p&gt;

&lt;p&gt;That changed what I prioritize in MetricSync:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;photo input when that is fastest&lt;/li&gt;
&lt;li&gt;barcode when the package is right there&lt;/li&gt;
&lt;li&gt;text when typing beats taking a picture&lt;/li&gt;
&lt;li&gt;quick correction when the first estimate is close but not perfect&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am not trying to make food logging feel impressive.&lt;br&gt;
I am trying to make it easy to do again at 2 PM, 7 PM, and on the messy days.&lt;/p&gt;

&lt;p&gt;That is also why I keep the pricing simple: $5/month with a 3-day free trial. If this is going to become a habit, it should feel more like a utility than a commitment you debate every week.&lt;/p&gt;

&lt;p&gt;MetricSync is cheaper than CalAI, but the bigger point is this:&lt;/p&gt;

&lt;p&gt;An AI food logger does not earn trust when it wins a demo.&lt;br&gt;
It earns trust when it helps you log the meal you almost skipped.&lt;/p&gt;

&lt;p&gt;If you want to try it:&lt;br&gt;
&lt;a href="https://www.metricsync.download/" rel="noopener noreferrer"&gt;https://www.metricsync.download/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The hidden token leak in AI workflows is not your prompt</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 14:58:44 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/the-hidden-token-leak-in-ai-workflows-is-not-your-prompt-4l9d</link>
      <guid>https://forem.com/johns23424234324234/the-hidden-token-leak-in-ai-workflows-is-not-your-prompt-4l9d</guid>
      <description>&lt;p&gt;Most AI cost talk still focuses on the prompt.&lt;/p&gt;

&lt;p&gt;That is only part of the bill.&lt;/p&gt;

&lt;p&gt;What kept burning tokens for me was everything around the prompt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;old context I should have trimmed&lt;/li&gt;
&lt;li&gt;tool output I kept dragging forward&lt;/li&gt;
&lt;li&gt;retrieval chunks that stopped being useful 20 minutes ago&lt;/li&gt;
&lt;li&gt;switching to a bigger model before the task actually needed it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The annoying part is that none of this feels expensive in the moment.&lt;/p&gt;

&lt;p&gt;A session just gets a little messier.&lt;br&gt;
A little slower.&lt;br&gt;
A little harder to reason about.&lt;br&gt;
Then the token count quietly runs up.&lt;/p&gt;

&lt;p&gt;That is why I built TokenBar.&lt;/p&gt;

&lt;p&gt;It sits in the macOS menu bar and shows live token usage while I work with LLMs. Not after the session. During it.&lt;/p&gt;

&lt;p&gt;That changes behavior faster than a dashboard ever did for me.&lt;/p&gt;

&lt;p&gt;When I can see token usage climbing in real time, I am more likely to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cut dead context&lt;/li&gt;
&lt;li&gt;restart a bloated thread&lt;/li&gt;
&lt;li&gt;stay on a smaller model longer&lt;/li&gt;
&lt;li&gt;stop carrying tool traces that are no longer helping&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For me, the first win was not even cost.&lt;br&gt;
It was cleaner AI workflows.&lt;/p&gt;

&lt;p&gt;If you are building with LLMs all day, that live feedback loop matters more than another after-the-fact report.&lt;/p&gt;

&lt;p&gt;TokenBar: &lt;a href="https://tokenbar.site/" rel="noopener noreferrer"&gt;https://tokenbar.site/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The real problem with AI food logging is the skipped meal</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 12:06:37 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/the-real-problem-with-ai-food-logging-is-the-skipped-meal-3p0h</link>
      <guid>https://forem.com/johns23424234324234/the-real-problem-with-ai-food-logging-is-the-skipped-meal-3p0h</guid>
      <description>&lt;p&gt;I kept seeing the same failure mode while building MetricSync.&lt;/p&gt;

&lt;p&gt;Most food logging demos assume the user is calm, has a perfect plate photo, and wants to inspect macros like a spreadsheet.&lt;/p&gt;

&lt;p&gt;Real life is the opposite.&lt;br&gt;
You are in line for coffee.&lt;br&gt;
You ate half a sandwich in the car.&lt;br&gt;
Lunch is leftovers in a random container.&lt;br&gt;
You want the log done before the next notification hits.&lt;/p&gt;

&lt;p&gt;That changed how I think about the product.&lt;br&gt;
The biggest enemy is not a slightly wrong calorie estimate.&lt;br&gt;
It is the skipped meal.&lt;/p&gt;

&lt;p&gt;If logging feels slow, users do not correct the result.&lt;br&gt;
They just close the app and promise themselves they will log it later.&lt;br&gt;
Later usually means never.&lt;/p&gt;

&lt;p&gt;So I built MetricSync around low-friction recovery:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;log with a photo, barcode, or text&lt;/li&gt;
&lt;li&gt;fix the parts AI got wrong quickly&lt;/li&gt;
&lt;li&gt;keep the price low enough to feel like a utility at $5/month with a 3-day free trial&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I would rather help someone log an imperfect lunch in 10 seconds than make them admire a perfect demo they never use.&lt;/p&gt;

&lt;p&gt;That is the bar I am building against.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.metricsync.download/" rel="noopener noreferrer"&gt;https://www.metricsync.download/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why I priced MetricSync at $5/month while building an AI food logger</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 08:07:01 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/why-i-priced-metricsync-at-5month-while-building-an-ai-food-logger-12ko</link>
      <guid>https://forem.com/johns23424234324234/why-i-priced-metricsync-at-5month-while-building-an-ai-food-logger-12ko</guid>
      <description>&lt;p&gt;I kept seeing AI calorie apps priced like premium subscriptions.&lt;/p&gt;

&lt;p&gt;That felt backwards to me.&lt;/p&gt;

&lt;p&gt;Food logging is a daily habit product. If the price feels heavy, people do not just cancel eventually. They hesitate to start, skip days, and drop the habit before the product has a chance to help.&lt;/p&gt;

&lt;p&gt;That shaped how I built MetricSync.&lt;/p&gt;

&lt;p&gt;A few things felt important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keep the monthly price low enough to feel like a utility, not a commitment&lt;/li&gt;
&lt;li&gt;let people test it with a 3-day free trial before asking for money&lt;/li&gt;
&lt;li&gt;reduce logging friction with photo, barcode, or text input instead of forcing one perfect workflow&lt;/li&gt;
&lt;li&gt;make corrections fast when the first AI guess is a little off&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I priced MetricSync at $5/month.&lt;/p&gt;

&lt;p&gt;It is also cheaper than CalAI, which mattered to me because this category should earn trust through consistency, not through pricing pressure.&lt;/p&gt;

&lt;p&gt;I am still early, still learning, and still tweaking the product, but I feel pretty strongly about this part: habit apps need room to become habits.&lt;/p&gt;

&lt;p&gt;If you want to check it out, MetricSync is here: &lt;a href="https://www.metricsync.download/" rel="noopener noreferrer"&gt;https://www.metricsync.download/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is iPhone-only right now.&lt;/p&gt;

</description>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>I built TokenBar after realizing prompt bloat is easier to ignore than fix</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 01:29:49 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/i-built-tokenbar-after-realizing-prompt-bloat-is-easier-to-ignore-than-fix-4dcf</link>
      <guid>https://forem.com/johns23424234324234/i-built-tokenbar-after-realizing-prompt-bloat-is-easier-to-ignore-than-fix-4dcf</guid>
      <description>&lt;h1&gt;I built TokenBar after realizing prompt bloat is easier to ignore than fix&lt;/h1&gt;

&lt;p&gt;A lot of AI cost advice starts too late.&lt;/p&gt;

&lt;p&gt;People notice the bill.&lt;br&gt;
Then they start asking what went wrong.&lt;/p&gt;

&lt;p&gt;While building with AI tools every day, I kept running into a more annoying reality.&lt;br&gt;
The expensive part usually happened earlier, when nothing looked obviously broken.&lt;/p&gt;

&lt;p&gt;The prompt still worked.&lt;br&gt;
The response still came back.&lt;br&gt;
The tool still felt productive.&lt;/p&gt;

&lt;p&gt;But the context had quietly gotten fatter.&lt;br&gt;
The retries had started piling up.&lt;br&gt;
The lazy copy-paste habit had turned one reasonable workflow into a noisy, expensive one.&lt;/p&gt;

&lt;p&gt;That was the moment I started caring less about dashboards and more about live visibility.&lt;/p&gt;

&lt;h2&gt;Prompt bloat does not feel urgent in the moment&lt;/h2&gt;

&lt;p&gt;That is the trap.&lt;/p&gt;

&lt;p&gt;Bad AI spend rarely arrives like a dramatic production outage.&lt;br&gt;
It usually shows up as a hundred tiny decisions that all feel harmless:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keep the old context in case it helps&lt;/li&gt;
&lt;li&gt;paste one more block of docs&lt;/li&gt;
&lt;li&gt;retry without changing much&lt;/li&gt;
&lt;li&gt;switch to a bigger model because it is faster&lt;/li&gt;
&lt;li&gt;leave a long session running because cleaning it up feels annoying&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of those decisions feels serious on its own.&lt;br&gt;
Together, they create a workflow that gets slower, messier, and more expensive without sending a strong enough signal to stop.&lt;/p&gt;

&lt;p&gt;I think that is why so many developers end up talking about AI bills as if they were random weather.&lt;br&gt;
They were not random.&lt;br&gt;
They were just not visible enough while the decisions were happening.&lt;/p&gt;

&lt;h2&gt;The useful signal is pace, not just total cost&lt;/h2&gt;

&lt;p&gt;One thing building TokenBar changed for me is how I think about feedback.&lt;/p&gt;

&lt;p&gt;I do not only want to know how much I spent.&lt;br&gt;
I want to know how fast I am burning through the window I am in.&lt;/p&gt;

&lt;p&gt;That matters because total usage by itself is too abstract.&lt;br&gt;
Pace tells you whether the current workflow is healthy.&lt;/p&gt;

&lt;p&gt;If a session suddenly starts chewing through tokens faster than usual, that is a product signal.&lt;br&gt;
Maybe the context is bloated.&lt;br&gt;
Maybe the prompt structure is sloppy.&lt;br&gt;
Maybe I am brute-forcing a task that should be split up.&lt;br&gt;
Maybe I am just tired and using the expensive model as a substitute for thinking clearly.&lt;/p&gt;

&lt;p&gt;That is the kind of signal I wanted in front of me while I work.&lt;br&gt;
Not three tabs away.&lt;br&gt;
Not at the end of the week.&lt;br&gt;
Not after the invoice lands.&lt;/p&gt;

&lt;h2&gt;Why I built TokenBar&lt;/h2&gt;

&lt;p&gt;I built TokenBar because I wanted live token visibility in the macOS menu bar.&lt;/p&gt;

&lt;p&gt;Not another full analytics ritual.&lt;br&gt;
Just a constant honest read on usage, reset windows, credits, and pace across the tools I actually use.&lt;/p&gt;

&lt;p&gt;The goal was simple:&lt;br&gt;
make it harder to stay blind while a workflow gets more expensive than it should be.&lt;/p&gt;

&lt;p&gt;That is also why I kept the product small and local-first.&lt;br&gt;
I did not want cost visibility to become another heavy platform with its own setup burden.&lt;br&gt;
I wanted something that could sit quietly in the background and still change behavior at the right moment.&lt;/p&gt;

&lt;h2&gt;The product lesson underneath this&lt;/h2&gt;

&lt;p&gt;The more I build small utility software, the more I think the best products do not just measure a problem.&lt;br&gt;
They interrupt it early enough to matter.&lt;/p&gt;

&lt;p&gt;For AI spend, the interruption point is not the billing page.&lt;br&gt;
It is the moment you are about to keep pushing a workflow that is already drifting.&lt;/p&gt;

&lt;p&gt;That is what TokenBar is for.&lt;br&gt;
A simple macOS menu bar app that helps you catch token bloat before it hardens into habit.&lt;/p&gt;

&lt;p&gt;If you want to check it out, TokenBar is here:&lt;br&gt;
&lt;a href="https://tokenbar.site/" rel="noopener noreferrer"&gt;https://tokenbar.site/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is $5 lifetime.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I built TokenBar because LLM costs were too easy to ignore</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sat, 09 May 2026 00:11:22 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/i-built-tokenbar-because-llm-costs-were-too-easy-to-ignore-5gi7</link>
      <guid>https://forem.com/johns23424234324234/i-built-tokenbar-because-llm-costs-were-too-easy-to-ignore-5gi7</guid>
      <description>&lt;p&gt;If you build with LLMs every day, token usage turns into background noise until the bill shows up.&lt;/p&gt;

&lt;p&gt;I wanted one thing: live visibility without opening another dashboard.&lt;/p&gt;

&lt;p&gt;So I built TokenBar.&lt;/p&gt;

&lt;p&gt;It sits in the macOS menu bar and shows LLM token usage in real time.&lt;/p&gt;

&lt;p&gt;The goal is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;see usage while you work&lt;/li&gt;
&lt;li&gt;spot expensive prompts earlier&lt;/li&gt;
&lt;li&gt;keep cost visible for AI builder workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TokenBar is $5 lifetime: &lt;a href="https://tokenbar.site/" rel="noopener noreferrer"&gt;https://tokenbar.site/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Still early, so I am watching how people actually use it and where cost visibility breaks down.&lt;/p&gt;

&lt;p&gt;If you build with AI every day, what would you want visible all the time?&lt;/p&gt;

</description>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>Why most AI nutrition app demos miss the real accuracy problem</title>
      <dc:creator>John</dc:creator>
      <pubDate>Fri, 08 May 2026 06:56:56 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/why-most-ai-nutrition-app-demos-miss-the-real-accuracy-problem-2j5b</link>
      <guid>https://forem.com/johns23424234324234/why-most-ai-nutrition-app-demos-miss-the-real-accuracy-problem-2j5b</guid>
      <description>&lt;p&gt;Most AI nutrition app demos use clean, obvious meals.&lt;/p&gt;

&lt;p&gt;That is not where the accuracy problem shows up.&lt;/p&gt;

&lt;p&gt;The real test is the stuff people actually eat on a Tuesday night:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;leftovers&lt;/li&gt;
&lt;li&gt;mixed bowls&lt;/li&gt;
&lt;li&gt;takeout with swaps&lt;/li&gt;
&lt;li&gt;half-finished plates&lt;/li&gt;
&lt;li&gt;quick text logs when they are too busy to take a perfect photo&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have been building MetricSync around that messier reality.&lt;/p&gt;

&lt;p&gt;The product angle is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cheaper than CalAI&lt;/li&gt;
&lt;li&gt;more features&lt;/li&gt;
&lt;li&gt;better accuracy on normal meals, not just demo-friendly ones&lt;/li&gt;
&lt;li&gt;3-day free trial so people can test it on their own food instead of trusting a landing page&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are building in health or consumer AI, I think this category is going to be won by correction speed and consistency, not by the prettiest screenshot.&lt;/p&gt;

&lt;p&gt;MetricSync: &lt;a href="https://www.metricsync.download" rel="noopener noreferrer"&gt;https://www.metricsync.download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Curious what other founders have learned from shipping products where the hardest part is handling messy real-world input.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>I tested MetricSync against CalAI for fast nutrition logging</title>
      <dc:creator>John</dc:creator>
      <pubDate>Sun, 03 May 2026 20:56:51 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/i-tested-metricsync-against-calai-for-fast-nutrition-logging-2do4</link>
      <guid>https://forem.com/johns23424234324234/i-tested-metricsync-against-calai-for-fast-nutrition-logging-2do4</guid>
      <description>&lt;p&gt;I’ve been testing AI nutrition trackers and MetricSync surprised me.&lt;/p&gt;

&lt;p&gt;If you’re comparing it to CalAI, the short version is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it’s cheaper&lt;/li&gt;
&lt;li&gt;it has more features in the app right now&lt;/li&gt;
&lt;li&gt;its food recognition has been more accurate in my testing&lt;/li&gt;
&lt;li&gt;there’s a 3-day free trial, so it’s easy to sanity-check for yourself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What stood out to me is that it feels built for people who actually want to log consistently, not just try the camera gimmick once and bounce. The flow is quick, and the results were more reliable on mixed meals than I expected.&lt;/p&gt;

&lt;p&gt;If you’re already paying for CalAI or considering it, MetricSync is worth a real side by side test before you choose.&lt;/p&gt;

&lt;p&gt;Site: &lt;a href="https://www.metricsync.download" rel="noopener noreferrer"&gt;https://www.metricsync.download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Curious what other people here are using for AI calorie tracking right now.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I built an AI nutrition tracker because Cal AI still felt too expensive for what it does</title>
      <dc:creator>John</dc:creator>
      <pubDate>Fri, 01 May 2026 04:58:15 +0000</pubDate>
      <link>https://forem.com/johns23424234324234/i-built-an-ai-nutrition-tracker-because-cal-ai-still-felt-too-expensive-for-what-it-does-30f3</link>
      <guid>https://forem.com/johns23424234324234/i-built-an-ai-nutrition-tracker-because-cal-ai-still-felt-too-expensive-for-what-it-does-30f3</guid>
      <description>&lt;p&gt;Most AI calorie trackers are selling the same promise: take a photo, get your macros, move on.&lt;/p&gt;

&lt;p&gt;I kept running into the same problem though. If you want something like Cal AI, you usually end up paying a lot for a pretty narrow feature set.&lt;/p&gt;

&lt;p&gt;So I built MetricSync for the kind of person who actually logs food every day and notices when the details are off.&lt;/p&gt;

&lt;p&gt;A few things I focused on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cheaper than Cal AI&lt;/li&gt;
&lt;li&gt;more features beyond just quick photo logging&lt;/li&gt;
&lt;li&gt;better accuracy when meals are messy or mixed&lt;/li&gt;
&lt;li&gt;a 3-day free trial so people can test it before paying&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is iPhone-only right now.&lt;/p&gt;

&lt;p&gt;If you are actively trying different AI nutrition trackers, I would genuinely love to know what still feels broken in this category.&lt;/p&gt;

&lt;p&gt;MetricSync: &lt;a href="https://www.metricsync.download" rel="noopener noreferrer"&gt;https://www.metricsync.download&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
