<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Arlej</title>
    <description>The latest articles on Forem by Arlej (@arlejtech).</description>
    <link>https://forem.com/arlejtech</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3927870%2F0bc41490-be62-4a05-b289-907d2927540c.png</url>
      <title>Forem: Arlej</title>
      <link>https://forem.com/arlejtech</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/arlejtech"/>
    <language>en</language>
    <item>
      <title>What happens when AI tells you the code is fine but your gut says it isn't?</title>
      <dc:creator>Arlej</dc:creator>
      <pubDate>Wed, 13 May 2026 09:06:03 +0000</pubDate>
      <link>https://forem.com/arlejtech/what-happens-when-ai-tells-you-the-code-is-fine-but-your-gut-says-it-isnt-1g2m</link>
      <guid>https://forem.com/arlejtech/what-happens-when-ai-tells-you-the-code-is-fine-but-your-gut-says-it-isnt-1g2m</guid>
      <description>&lt;p&gt;Halfway through building my audio synthesis engine, I hit a specific wall. Something sounded wrong. Not broken — just slightly off in a way I couldn't immediately point at. I asked AI to review the logic. It told me everything looked correct. I almost moved on.&lt;br&gt;
I didn't. I kept pulling at it. Turned out the logic was wrong in a way that only becomes obvious when you understand what the code is supposed to be physically modelling. No test was going to catch it. AI had no way to hear what I was hearing. And if I'd trusted the output over my instinct, the whole synthesis engine would have been quietly wrong from that point forward.&lt;br&gt;
That experience shaped how I finished the build. But I'm getting ahead of myself.&lt;/p&gt;

&lt;p&gt;What I was actually building&lt;br&gt;
I've been tracking HRV and sleep for a couple of years. Every morning I wake up to five numbers: heart rate variability, resting heart rate, deep sleep %, REM %, stress score. I got pretty good at reading them. What I couldn't figure out was what to actually do with them before sitting down to work or trying to sleep.&lt;br&gt;
The obvious answer was soundscapes. Binaural beats. That whole world. But every app I tried was the same — you pick "focus" or "relax" and it plays the same thing every single time. That started bothering me more than it probably should have.&lt;br&gt;
HRV 22 ms with stress at 78 is not the same nervous system state as HRV 61 ms after a clean night of sleep. They're completely different. Why would the same audio session help both of them? Nobody seemed to be asking that question.&lt;br&gt;
So I spent a few months building something that does.&lt;br&gt;
It's called Neurova. C#/.NET 8, WPF, runs fully offline. No account, no subscription, your data stays on your machine. You feed in your five daily metrics and it generates a unique audio session from them — physically modelled, every sound synthesised in real time from scratch. No samples. No DAW. Pure code.&lt;br&gt;
Each metric drives a different part of the synthesis:&lt;/p&gt;

&lt;p&gt;HRV → primary binaural beat frequency. 1 ms change moves the beat by about 0.08 Hz.&lt;br&gt;
Resting HR → carrier frequency and breath guide pace&lt;br&gt;
Deep sleep % → sub-bass resonance and delta brainwave layer weight&lt;br&gt;
REM % → theta layer frequency and which Solfeggio tone gets selected&lt;br&gt;
Stress score → noise color ratio, ambient layer intensity, secondary beat frequency&lt;/p&gt;

&lt;p&gt;The engine also builds a baseline from your own historical data — so the fingerprint is relative to your norms, not some population average that has nothing to do with you.&lt;/p&gt;
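
&lt;p&gt;To make that mapping concrete, here's a rough sketch of the idea. The 0.08 Hz-per-ms factor is the real one from above; the field names, the baseline window, and the 6 Hz anchor beat are simplified stand-ins rather than the engine's actual values:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Simplified sketch of the HRV-to-beat mapping and the personal baseline.
// Only the 0.08 Hz per ms figure comes from the engine; the rest is illustrative.
public sealed class DailyMetrics
{
    public double HrvMs;        // heart rate variability, ms
    public double RestingHr;    // resting heart rate, bpm
    public double DeepSleepPct; // deep sleep, %
    public double RemPct;       // REM, %
    public double StressScore;  // 0-100
}

public static class SessionMapper
{
    const double HzPerMsHrv = 0.08;  // 1 ms of HRV moves the beat by about 0.08 Hz
    const double AnchorBeatHz = 6.0; // placeholder beat when HRV sits exactly on baseline

    // Baseline is the mean of the user's own history, not a population average.
    public static double PersonalBaseline(double[] history)
    {
        double sum = 0;
        foreach (double v in history) sum += v;
        return sum / history.Length;
    }

    public static double PrimaryBeatHz(DailyMetrics today, double[] hrvHistory)
    {
        double deviation = today.HrvMs - PersonalBaseline(hrvHistory);
        return AnchorBeatHz + deviation * HzPerMsHrv;
    }
}
&lt;/code&gt;&lt;/pre&gt;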

&lt;p&gt;The synthesis part — which is the part I'm most nervous about&lt;br&gt;
I'll be honest. Every sound being synthesised in real time from scratch is the thing I'm most proud of and also the thing I'm most uncertain about when I tell people.&lt;br&gt;
Guitar uses Karplus-Strong. The delay line length is sample rate divided by frequency. Each sample is the average of the previous two samples in the delay line — that averaging is what acts as the low-pass filter, which is what models string damping. You seed the line with an attack noise burst and it genuinely sounds like a plucked string (there's a stripped-down sketch of the core loop right after this section). Getting it stable across all frequencies without aliasing took way longer than I expected. There's a point where the math is obvious and the implementation still isn't working and you just have to sit with it.&lt;br&gt;
Piano has 7 inharmonic partials with stretch tuning. I added dual-string detuning at ±0.0012 semitone for the natural chorus effect real pianos have from their doubled strings. Hammer noise on attack is a white noise burst, under 10 ms, low-passed at 2 kHz.&lt;br&gt;
Rain is dual-band bandpass from pink noise. Low band (300–800 Hz) for the body of rain, high band (3–12 kHz) for individual droplets. No LFO swell — I spent a while on this actually. Real rain doesn't pulsate. A lot of rain ambience in apps pulses rhythmically and it drives me crazy now that I've noticed it.&lt;br&gt;
Ocean is three layers: shore surge as a low frequency swell, mid-frequency wash, and high spray transients on wave crests. Wave period comes from your resting heart rate. Slower heart rate, longer swells.&lt;br&gt;
The reverb is a convolution implementation using partitioned overlap-save FFT. That was probably the most complex thing I've written in a long time.&lt;/p&gt;
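
&lt;p&gt;Here's what that delay-line loop boils down to, stripped of everything engine-specific. The structure (delay length = sample rate / frequency, noise-burst seed, two-sample average as the damping filter) matches what I described above; the names and the extra 0.996 decay factor are illustrative, not the engine's exact code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;

// Minimal Karplus-Strong string: a delay line of sampleRate / frequency samples,
// seeded with a noise burst; each step averages two adjacent samples, which is
// the low-pass that models string damping.
public sealed class KarplusStrongString
{
    readonly double[] _delay;
    int _pos;

    public KarplusStrongString(double frequencyHz, int sampleRate, int seed)
    {
        int length = (int)Math.Round(sampleRate / frequencyHz);
        _delay = new double[length];
        var rng = new Random(seed);
        for (int i = 0; i != length; i++)          // attack noise burst
            _delay[i] = rng.NextDouble() * 2.0 - 1.0;
    }

    public double NextSample()
    {
        int next = (_pos + 1) % _delay.Length;
        double output = _delay[_pos];
        // Two-sample average = damping low-pass; 0.996 adds a touch of extra decay.
        _delay[_pos] = 0.996 * 0.5 * (_delay[_pos] + _delay[next]);
        _pos = next;
        return output;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Call NextSample() once per output sample and you get the plucked-string tone; tuning, retriggering and mixing sit on top of this in the engine proper.&lt;/p&gt;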

&lt;p&gt;Back to the AI problem&lt;br&gt;
I tried using AI for the synthesis code. I kept trying actually, because it would have been faster.&lt;br&gt;
The problem is specific to this kind of work. When code is modelling something physically precise — string damping, wave periods derived from heart rate, inharmonic partials — wrong code doesn't fail. It just sounds slightly off. And if you don't own every line, you won't hear what's wrong. You'll hear something that sounds close enough and move on.&lt;br&gt;
Beyond that, I kept running into the same frustration. AI would validate broken logic just as confidently as working logic. Something would feel wrong to me before any test caught it, and the response was always some version of "this looks correct." I had to learn to trust that instinct over the confidence of the output. Which, honestly, is a weird thing to have to learn.&lt;br&gt;
I did use AI for other parts of the build — data handling, UI, export logic. That worked fine. But the synthesis engine I wrote myself, line by line, and I think the output is better for it even if I can't fully prove that.&lt;/p&gt;

&lt;p&gt;The Story Engine — which made the whole thing weirder and more interesting&lt;br&gt;
For a long time all sessions had the same shape. Quiet open, build, peak, resolve, dissolve. Only the instruments changed. The arc was identical every time.&lt;br&gt;
I eventually built what I called the Story Engine, which turns your biometric fingerprint into a five-act dramatic structure. Act 1 is the arrival, character shaped by your recovery score. Act 2 is the build, density driven by HRV. Act 3 is the climax, and its shape comes from your stress score — it can be a sharp peak, a double wave, or a plateau depending on where you are. Act 4 is descent. Act 5 is resolution, and there are three kinds: earned resolution, graceful surrender, or open horizon.&lt;br&gt;
The tension curve and the volume master curve are generated separately, which means emotional complexity and acoustic presence can move independently. Every instrument also has its own register arc — piano doesn't just get louder or quieter, it moves through its range. It might start mid-register, drop low during the deep phase, climb to its ceiling at the climax, then come back down. Guitar and strings have their own arcs. They're not all doing the same thing at the same time.&lt;br&gt;
Same health data on two different days still produces a different structure because the date rotates the arc shape.&lt;/p&gt;
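
&lt;p&gt;Structurally it boils down to something like this. The enums and field names below are a simplified picture of the idea, not the actual types in the engine:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Rough shape of what the Story Engine produces for one session.
public enum ClimaxShape { SharpPeak, DoubleWave, Plateau }            // driven by stress score
public enum ResolutionKind { Earned, GracefulSurrender, OpenHorizon }

public sealed class InstrumentArc
{
    public string Name;            // piano, guitar, strings, bowls...
    public double[] RegisterCurve; // where in its range the instrument sits over time
    public double[] PresenceCurve; // how present it is over time
}

public sealed class StoryArc
{
    public double[] TensionCurve;  // emotional complexity across the five acts
    public double[] VolumeCurve;   // acoustic presence, generated separately
    public ClimaxShape Climax;
    public ResolutionKind Resolution;
    public InstrumentArc[] Instruments; // each instrument moves on its own arc
}
&lt;/code&gt;&lt;/pre&gt;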

&lt;p&gt;What I actually learned&lt;br&gt;
The physical modelling was the hardest part technically. The Story Engine was the most interesting problem — getting five completely different biometric dimensions to each shape a different aspect of a dramatic arc, without any of them stepping on each other, took a lot of iteration.&lt;br&gt;
But the thing I didn't expect to learn was about trusting my own judgment mid-build. Every time I felt like something was wrong and actually went looking, I found a real problem. Every time I accepted the AI's confidence over my instinct, I paid for it somewhere down the line.&lt;br&gt;
I don't think that's an argument against using AI. I think it's an argument for knowing when you're in territory where you can't outsource the feeling that something is off.&lt;/p&gt;

&lt;p&gt;If you've built something where the output had to be physically precise — audio synthesis, simulation, hardware interfacing, anything like that — I'm curious how you handled that gap. The place where AI says it looks fine and your gut says it doesn't. Is that specific to this kind of work or have you hit it somewhere else too?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>ai</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Built a Desktop App That Reads Your HRV &amp; Sleep Data and Generates a Unique Therapeutic Soundscape Every Day</title>
      <dc:creator>Arlej</dc:creator>
      <pubDate>Tue, 12 May 2026 20:34:39 +0000</pubDate>
      <link>https://forem.com/arlejtech/i-built-a-desktop-app-that-reads-your-hrv-sleep-data-and-generates-a-unique-therapeutic-p7p</link>
      <guid>https://forem.com/arlejtech/i-built-a-desktop-app-that-reads-your-hrv-sleep-data-and-generates-a-unique-therapeutic-p7p</guid>
      <description>&lt;p&gt;I’ve been wearing an Amazfit GTR4 every night for a few months, tracking the usual stuff — HRV, resting heart rate, deep sleep %, REM %, stress score, and total sleep time (hours/minutes). Every morning I’d open the app, look at the five or six numbers, think “huh, not great today,” and then do absolutely nothing with them.&lt;/p&gt;

&lt;p&gt;I tried every sleep music app out there. Binaural beats, nature sounds, “deep sleep frequencies.” They all do the same thing: you pick a mode and they play a loop. Same audio, every day. But my body is all over the place. Some mornings I’m at 44 ms HRV, 3h51m of sleep, a stress score of 23. Other days I’m at 67 ms and feel great. The same audio can’t possibly be optimal for both states.&lt;/p&gt;

&lt;p&gt;So I built NEUROVA. It’s a Windows desktop app that takes your actual daily health metrics and builds a completely new, physically‑modelled therapeutic soundscape from scratch. No samples. No DAW. Just code. You can pick how long you want it — from 30 minutes up to 12 hours.&lt;/p&gt;

&lt;p&gt;How it works&lt;br&gt;
The app takes six numbers and a date: HRV, resting HR, deep sleep %, REM %, stress score, and total sleep. It figures out how far each one is from your own historical normal (not some population average), then uses those “deficit” signals to drive every creative decision.&lt;/p&gt;

&lt;p&gt;It scores 20 possible sound categories — piano, guitar, strings, rain, ocean, fire, heartbeat, Tibetan bowls, Gregorian chant, breath guide, etc. — against your fingerprint. A date rotation means even identical biometrics two days in a row get a different palette.&lt;/p&gt;
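
&lt;p&gt;In spirit it works something like the sketch below. The weights, the per-day nudge, and the names are made up for illustration; only the idea is real: score the categories against your deficit signals, then let the date break near-ties differently each day:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;

// Illustrative category scoring: each category responds to the per-metric
// "deficit" signals (distance from your own baseline), plus a small
// deterministic per-day nudge so identical biometrics on consecutive days
// still rank the palette differently.
public static class CategoryScorer
{
    public static double Score(double[] deficits, double[] categoryWeights,
                               int categoryIndex, DateTime day)
    {
        double score = 0;
        for (int i = 0; i != deficits.Length; i++)
            score += deficits[i] * categoryWeights[i];

        double dailyNudge = Math.Sin(day.DayOfYear + categoryIndex) * 0.1;
        return score + dailyNudge;
    }
}
&lt;/code&gt;&lt;/pre&gt;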

&lt;p&gt;Then the Story Engine takes over. It converts those biometric signals into a five‑act dramatic arc with actual tension and volume curves, climax shapes, and a closing character. Each melodic instrument (piano, guitar, strings, bowls) gets its own independent register journey and presence timeline — they all move differently, like characters in a play.&lt;/p&gt;

&lt;p&gt;Phrase generation uses a Markov‑weighted motif library across complexity tiers, from single‑note meditative phrases to expressive jazz‑like runs. The engine can even modulate between D minor and F major pentatonic when recovery is high.&lt;/p&gt;
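
&lt;p&gt;The phrase generation is essentially a weighted random walk over a transition table. This is a toy version of the idea, with the actual motif tables and complexity tiers left out:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;

// Toy Markov-weighted phrase generator: the next scale degree is drawn from a
// per-note weight row, so phrases wander but stay in character. Seeded so a
// replayed session produces the same phrases.
public sealed class MotifGenerator
{
    readonly double[][] _transitions; // _transitions[current][next] = weight
    readonly Random _rng;

    public MotifGenerator(double[][] transitions, int seed)
    {
        _transitions = transitions;
        _rng = new Random(seed);
    }

    public int[] Phrase(int startDegree, int length)
    {
        var notes = new int[length];
        notes[0] = startDegree;
        for (int i = 1; i != length; i++)
            notes[i] = NextDegree(notes[i - 1]);
        return notes;
    }

    int NextDegree(int current)
    {
        double[] weights = _transitions[current];
        double total = 0;
        foreach (double w in weights) total += w;
        double roll = _rng.NextDouble() * total;
        double acc = 0;
        for (int i = 0; i != weights.Length; i++)
        {
            acc += weights[i];
            if (roll &amp;lt; acc) return i;
        }
        return weights.Length - 1;
    }
}
&lt;/code&gt;&lt;/pre&gt;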

&lt;p&gt;All sounds are physically modelled in real time:&lt;/p&gt;

&lt;p&gt;Guitar: Karplus‑Strong delay line&lt;/p&gt;

&lt;p&gt;Piano: inharmonic partials with stretch tuning and dual‑string detuning&lt;/p&gt;

&lt;p&gt;Rain: dual‑band bandpass from pink noise&lt;/p&gt;

&lt;p&gt;Ocean: three‑layer model (surge, wash, spray), wave period from resting HR&lt;/p&gt;

&lt;p&gt;Fire: brown noise bed + Poisson‑timed crackles + occasional log thuds (crackle timing sketched just after this list)&lt;/p&gt;

&lt;p&gt;Tibetan bowls: measured real‑bowl ratios with long resonant tails&lt;/p&gt;

&lt;p&gt;Heartbeat: sub‑bass lub‑dub timed to your HR&lt;/p&gt;
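
&lt;p&gt;The crackle timing for the fire layer is plain Poisson scheduling: exponentially distributed gaps between crackles, which is what keeps it from ever settling into a rhythm. A simplified version (the mean rate and the names are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;

// Poisson-timed crackle scheduling: exponential inter-arrival times between
// crackles, drawn by inverse-transform sampling, give the irregular feel.
public sealed class CrackleScheduler
{
    readonly Random _rng;
    readonly double _meanCracklesPerSecond;
    double _nextCrackleTime;

    public CrackleScheduler(double meanCracklesPerSecond, int seed)
    {
        _rng = new Random(seed);
        _meanCracklesPerSecond = meanCracklesPerSecond;
        _nextCrackleTime = NextGap();
    }

    // Exponentially distributed gap until the next crackle, in seconds.
    double NextGap() =&gt;
        -Math.Log(1.0 - _rng.NextDouble()) / _meanCracklesPerSecond;

    // Returns true when a crackle should fire at this point in the timeline.
    public bool ShouldCrackle(double currentTimeSeconds)
    {
        if (currentTimeSeconds &gt;= _nextCrackleTime)
        {
            _nextCrackleTime += NextGap();
            return true;
        }
        return false;
    }
}
&lt;/code&gt;&lt;/pre&gt;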

&lt;p&gt;I also wrote a custom convolution reverb from scratch — partitioned OLA‑FFT, procedural room impulse responses using image‑source early reflections and an RT60‑matched decay tail. The acoustic space itself is unique per session.&lt;/p&gt;
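
&lt;p&gt;The decay tail is the easy half to show: an exponential envelope chosen so the response drops 60 dB over the RT60, applied to noise. The image-source early reflections and the partitioned convolution itself are the painful parts and are left out of this sketch:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;

// RT60-matched decay tail: exp(-k * t) falls by 60 dB at t = RT60 when
// k = 3 * ln(10) / RT60. Applied to noise, it forms the late part of a
// procedural room impulse response.
public static class ImpulseResponse
{
    public static double[] DecayTail(double rt60Seconds, int sampleRate, int seed)
    {
        double k = 3.0 * Math.Log(10.0) / rt60Seconds;
        int length = (int)(rt60Seconds * sampleRate);
        var rng = new Random(seed);
        var tail = new double[length];
        for (int n = 0; n != length; n++)
        {
            double t = (double)n / sampleRate;
            tail[n] = (rng.NextDouble() * 2.0 - 1.0) * Math.Exp(-k * t);
        }
        return tail;
    }
}
&lt;/code&gt;&lt;/pre&gt;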

&lt;p&gt;Everything is saved with a seed (biometrics + date) so you can replay the exact session.&lt;/p&gt;
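
&lt;p&gt;Replay works because the whole pipeline runs off one deterministic seed. Roughly like this (the exact recipe in the app differs; this is just the shape of it):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;

// Fold the biometrics and the date into one integer seed. Same inputs,
// same seed, same session.
public static class SessionSeed
{
    public static int FromInputs(double hrv, double restingHr, double deepPct,
                                 double remPct, double stress, double sleepHours,
                                 DateTime date)
    {
        unchecked
        {
            int h = 17;
            h = h * 31 + hrv.GetHashCode();
            h = h * 31 + restingHr.GetHashCode();
            h = h * 31 + deepPct.GetHashCode();
            h = h * 31 + remPct.GetHashCode();
            h = h * 31 + stress.GetHashCode();
            h = h * 31 + sleepHours.GetHashCode();
            h = h * 31 + date.Date.GetHashCode();
            return h;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As long as every random decision in the pipeline draws from that one seed, the same inputs replay the exact same session.&lt;/p&gt;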

&lt;p&gt;You can export the generated soundscape.&lt;/p&gt;

&lt;p&gt;What was hard&lt;br&gt;
Karplus‑Strong sounds easy on paper. Getting it stable across all frequencies without aliasing was painful. The piano inharmonicity — 7 partials, each with different stretch‑tuning factors — almost made me give up. And writing a zero‑latency partitioned FFT convolver is not something I’d recommend unless you really hate yourself.&lt;/p&gt;

&lt;p&gt;The biggest surprise was how much the music style selection changes the experience. I added styles (Ambient, Classical, Jazz, Rock, Electronic, Metal…) because I got bored of everything sounding like a meditation app. Same biometric input, same therapeutic intent — but the Classical version feels like a piano‑led recovery session and the Metal version feels like a rhythmic, driven pulse. Both work, but they feel completely different.&lt;/p&gt;

&lt;p&gt;My own data&lt;br&gt;
I’ve been using NEUROVA every day. My HRV has slowly risen, stress dropped, and nights with zero sleep disappeared. Recently I did a weird experiment: I generated a session from my worst biometric day (HRV 44, 3h 51m sleep, stress 23), then deliberately waited two days before listening. The next‑morning data was… not what I expected. I’ll publish that follow‑up soon.&lt;/p&gt;

&lt;p&gt;See the app in action&lt;br&gt;
  &lt;iframe src="https://www.youtube.com/embed/zpiBQGB-o-w"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Want to try it? No download needed.&lt;br&gt;
The app isn’t publicly packaged yet, but I’m generating free personalised sessions for anyone who wants one. Just drop your numbers in the comments or on the YouTube video:&lt;/p&gt;

&lt;p&gt;- HRV (ms)&lt;br&gt;
- Resting HR (bpm)&lt;br&gt;
- Deep sleep %&lt;br&gt;
- REM %&lt;br&gt;
- Stress score (0‑100)&lt;br&gt;
- Total sleep (hours &amp;amp; minutes)&lt;br&gt;
- Date of measurement&lt;br&gt;
- How long you want it (e.g. 45 min, 2 hours, up to 12 hours)&lt;br&gt;
- Preferred music style (or Auto)&lt;/p&gt;

&lt;p&gt;I’ll take your biometric fingerprint, turn it into a soundscape, and publish it on the channel. No cost, no app download, no account.&lt;/p&gt;

</description>
      <category>proceduralgeneration</category>
      <category>audio</category>
      <category>showdev</category>
      <category>csharp</category>
    </item>
  </channel>
</rss>
