<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Capa</title>
    <description>The latest articles on Forem by Capa (@capar).</description>
    <link>https://forem.com/capar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3742371%2Fa9de2fa2-8ba4-4ed3-80e6-272c701e0d37.png</url>
      <title>Forem: Capa</title>
      <link>https://forem.com/capar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/capar"/>
    <language>en</language>
    <item>
      <title>Building an Audio Analyzer Through Conversation with GitHub Copilot CLI</title>
      <dc:creator>Capa</dc:creator>
      <pubDate>Mon, 02 Feb 2026 00:16:02 +0000</pubDate>
      <link>https://forem.com/capar/building-an-audio-analyzer-through-conversation-with-github-copilot-cli-1a62</link>
      <guid>https://forem.com/capar/building-an-audio-analyzer-through-conversation-with-github-copilot-cli-1a62</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;mixref&lt;/strong&gt; is a CLI audio analyzer for music producers. It reports loudness metrics (EBU R128 LUFS), BPM, musical key, and spectral balance for audio files.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;When producing music, I often need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check loudness levels (LUFS) for different streaming platforms&lt;/li&gt;
&lt;li&gt;Detect BPM and musical key quickly&lt;/li&gt;
&lt;li&gt;Compare my mix against reference tracks&lt;/li&gt;
&lt;li&gt;See frequency distribution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wanted a fast command-line tool instead of opening a DAW or multiple plugins.&lt;/p&gt;

&lt;h3&gt;
  
  
  What It Does
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Loudness Analysis&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Measures LUFS (EBU R128 standard)&lt;/li&gt;
&lt;li&gt;Shows true peak and loudness range&lt;/li&gt;
&lt;li&gt;Compares against platform targets (Spotify: -14 LUFS, YouTube: -14 LUFS, etc.)&lt;/li&gt;
&lt;/ul&gt;
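&lt;p&gt;The target comparison itself is simple arithmetic; here is a minimal sketch (the platform table and function name are illustrative, not mixref's actual code):&lt;/p&gt;

```python
# Illustrative platform targets; not mixref's actual table.
PLATFORM_TARGETS_LUFS = {
    "Spotify": -14.0,
    "YouTube": -14.0,
    "Apple Music": -16.0,
}

def gain_to_targets(integrated_lufs: float) -> dict:
    """Gain change (dB) needed to hit each platform's loudness target."""
    return {
        name: round(target - integrated_lufs, 1)
        for name, target in PLATFORM_TARGETS_LUFS.items()
    }

# A hot master at -6.2 LUFS needs -7.8 dB of gain for Spotify's -14 LUFS target.
print(gain_to_targets(-6.2)["Spotify"])  # -7.8
```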

&lt;p&gt;&lt;strong&gt;Musical Analysis&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detects BPM using librosa&lt;/li&gt;
&lt;li&gt;Includes basic correction for half-time detection (doubles if BPM &amp;lt; 100)&lt;/li&gt;
&lt;li&gt;Detects musical key with Camelot notation&lt;/li&gt;
&lt;/ul&gt;
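&lt;p&gt;In mixref the raw tempo comes from librosa's beat tracker; the half-time correction is just the doubling rule, sketched here as a standalone function (the name is mine, not the project's):&lt;/p&gt;

```python
def correct_half_time(bpm: float, threshold: float = 100.0) -> float:
    """Double tempos reported below `threshold` (half-time heuristic).

    Beat trackers often lock onto the half-time pulse, e.g. reporting
    87 BPM for a 174 BPM drum and bass track.
    """
    if 0 < bpm < threshold:
        return bpm * 2
    return bpm

print(correct_half_time(87.0))   # 174.0
print(correct_half_time(128.0))  # 128.0
```

A simple heuristic like this can over-correct genuinely slow material, which is one reason the tool marks tempo with a ❓ status rather than treating it as exact.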

&lt;p&gt;&lt;strong&gt;Spectral Analysis&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Breaks down audio into 5 frequency bands&lt;/li&gt;
&lt;li&gt;Shows energy distribution as percentages&lt;/li&gt;
&lt;/ul&gt;
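&lt;p&gt;A band-energy breakdown like this can be computed from the power spectrum. A sketch with numpy; the band edges below are my assumption, not necessarily mixref's:&lt;/p&gt;

```python
import numpy as np

# Assumed band edges in Hz; mixref's actual boundaries may differ.
BANDS = {
    "Sub": (20, 60),
    "Low": (60, 250),
    "Mid": (250, 2000),
    "High": (2000, 8000),
    "Air": (8000, 20000),
}

def band_energy_percent(audio: np.ndarray, sr: int) -> dict:
    """Share of total spectral energy in each band, as percentages."""
    power = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    total = max(power.sum(), 1e-12)
    return {
        name: round(100.0 * power[(freqs >= lo) & (freqs < hi)].sum() / total, 1)
        for name, (lo, hi) in BANDS.items()
    }

# A pure 440 Hz tone lands almost entirely in the Mid band.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
print(band_energy_percent(tone, sr))
```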

&lt;p&gt;&lt;strong&gt;Track Comparison&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compares two tracks side-by-side&lt;/li&gt;
&lt;li&gt;Shows loudness and spectral differences&lt;/li&gt;
&lt;li&gt;Highlights significant differences (&amp;gt;3% threshold)&lt;/li&gt;
&lt;/ul&gt;
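&lt;p&gt;The threshold check is a small diff over the per-band percentages; a sketch (function and variable names are hypothetical):&lt;/p&gt;

```python
def flag_band_differences(mix: dict, reference: dict, threshold: float = 3.0) -> dict:
    """Bands whose energy share differs by more than `threshold` percentage points."""
    return {
        band: round(mix[band] - reference[band], 1)
        for band in mix
        if abs(mix[band] - reference[band]) > threshold
    }

my_mix = {"Sub": 35.2, "Low": 28.4, "Mid": 18.1}
ref    = {"Sub": 28.0, "Low": 27.5, "Mid": 24.0}
print(flag_band_differences(my_mix, ref))  # {'Sub': 7.2, 'Mid': -5.9}
```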

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Command Line Interface
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Analyze Command:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FCaparrini%2Fmixref%2Fmain%2Fdocs%2Fdemos%2Fanalyze-demo.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FCaparrini%2Fmixref%2Fmain%2Fdocs%2Fdemos%2Fanalyze-demo.svg" alt="Analyze Demo" width="800" height="512.0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compare Command:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FCaparrini%2Fmixref%2Fmain%2Fdocs%2Fdemos%2Fcompare-demo.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FCaparrini%2Fmixref%2Fmain%2Fdocs%2Fdemos%2Fcompare-demo.svg" alt="Compare Demo" width="880" height="597.0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Example Output
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;mixref analyze track.wav

             Analysis: track.wav             
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Metric              ┃        Value ┃ Status ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━┩
│ Integrated Loudness │    &lt;span class="nt"&gt;-6&lt;/span&gt;.2 LUFS │   🔴   │
│ True Peak           │    &lt;span class="nt"&gt;-0&lt;/span&gt;.8 dBTP │   ⚠️   │
│ Loudness Range      │       5.2 LU │   ℹ    │
├─────────────────────┼──────────────┼────────┤
│ Tempo               │    174.0 BPM │   ❓   │
│ Key                 │ F minor &lt;span class="o"&gt;(&lt;/span&gt;4A&lt;span class="o"&gt;)&lt;/span&gt; │   ❓   │
├─────────────────────┼──────────────┼────────┤
│ Sub                 │   ■■■■■■■□□□ │ 35.2%  │
│ Low                 │   ■■■■■■■■■□ │ 28.4%  │
│ Mid                 │   ■■■■□□□□□□ │ 18.1%  │
│ High                │   ■■■■■■□□□□ │ 14.2%  │
│ Air                 │   ■□□□□□□□□□ │  4.1%  │
└─────────────────────┴──────────────┴────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Try It Yourself
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install from PyPI&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;mixref

&lt;span class="c"&gt;# Analyze a track&lt;/span&gt;
mixref analyze my_track.wav

&lt;span class="c"&gt;# Compare against a reference&lt;/span&gt;
mixref compare my_mix.wav pro_reference.wav &lt;span class="nt"&gt;--bpm&lt;/span&gt; &lt;span class="nt"&gt;--key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Repository&lt;/strong&gt;: &lt;a href="https://github.com/Caparrini/mixref" rel="noopener noreferrer"&gt;github.com/Caparrini/mixref&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: &lt;a href="https://caparrini.github.io/mixref/" rel="noopener noreferrer"&gt;caparrini.github.io/mixref&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;PyPI&lt;/strong&gt;: &lt;a href="https://pypi.org/project/mixref/" rel="noopener noreferrer"&gt;pypi.org/project/mixref&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;p&gt;This project was built &lt;strong&gt;entirely through conversation&lt;/strong&gt; with GitHub Copilot CLI. I didn't type code—I described what I wanted in plain language.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Development Process
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Describing Instead of Coding&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of writing Python code, I asked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Create a function that calculates EBU R128 loudness using pyloudnorm.
Return integrated LUFS, true peak, and loudness range.
Include type hints and error handling."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copilot generated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Working implementation with pyloudnorm&lt;/li&gt;
&lt;li&gt;Complete type hints&lt;/li&gt;
&lt;li&gt;Error handling for edge cases&lt;/li&gt;
&lt;li&gt;Docstrings with examples&lt;/li&gt;
&lt;/ul&gt;
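&lt;p&gt;The generated function lives in the repository; as a self-contained stand-in, here is the rough shape of such an API. Note this is only a sketch: real EBU R128 measurement (which pyloudnorm implements) applies K-weighting and gating, while this uses plain RMS and sample peak, and the names are illustrative:&lt;/p&gt;

```python
import numpy as np
from typing import NamedTuple

class LoudnessStats(NamedTuple):
    integrated_lufs: float
    peak_dbfs: float

def measure_loudness(audio: np.ndarray) -> LoudnessStats:
    """Rough loudness stats for a mono float signal in [-1, 1].

    Stand-in only: a real EBU R128 measurement (as in pyloudnorm)
    applies K-weighting and gating; this uses unweighted RMS and
    sample peak so the example runs without audio dependencies.
    """
    if audio.size == 0:
        raise ValueError("empty audio buffer")
    rms = float(np.sqrt(np.mean(audio.astype(np.float64) ** 2)))
    peak = float(np.max(np.abs(audio)))
    return LoudnessStats(
        integrated_lufs=round(20 * np.log10(max(rms, 1e-12)), 1),
        peak_dbfs=round(20 * np.log10(max(peak, 1e-12)), 1),
    )

# Full-scale sine: RMS is 1/sqrt(2), about -3.0 dB; peak is about 0.0 dBFS.
sine = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
print(measure_loudness(sine))
```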

&lt;p&gt;&lt;strong&gt;2. Debugging Through Conversation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When BPM detection returned 0.0, I described the problem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"BPM detection returns 0.0. Librosa warning says:
'n_fft=2048 too large for input signal of length=2'
The stereo-to-mono conversion might be wrong."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copilot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identified the bug (&lt;code&gt;np.mean(audio, axis=0)&lt;/code&gt; was wrong for the array shape)&lt;/li&gt;
&lt;li&gt;Fixed it with format detection&lt;/li&gt;
&lt;li&gt;Updated tests&lt;/li&gt;
&lt;li&gt;All tests still passed&lt;/li&gt;
&lt;/ul&gt;
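&lt;p&gt;My reconstruction of the kind of fix involved (mixref's actual code may differ): the axis passed to &lt;code&gt;np.mean&lt;/code&gt; has to match where the channel dimension actually sits.&lt;/p&gt;

```python
import numpy as np

def to_mono(audio: np.ndarray) -> np.ndarray:
    """Downmix to mono whether channels sit on axis 0 or axis 1.

    The bug described above: np.mean(audio, axis=0) on a
    (n_samples, 2) array averages across *samples*, leaving a
    2-element "signal" that librosa then rejects
    ("input signal of length=2").
    """
    if audio.ndim == 1:
        return audio
    # The smaller dimension is almost certainly the channel axis.
    channel_axis = 0 if audio.shape[0] < audio.shape[1] else 1
    return audio.mean(axis=channel_axis)

stereo_rows = np.zeros((2, 1000))   # (channels, samples)
stereo_cols = np.zeros((1000, 2))   # (samples, channels)
print(to_mono(stereo_rows).shape, to_mono(stereo_cols).shape)  # (1000,) (1000,)
```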

&lt;p&gt;&lt;strong&gt;3. Building Development Tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I asked for a pre-commit quality check script. Copilot created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./scripts/pre-commit-check.sh

🔍 Running pre-commit quality checks...
1️⃣ Formatting with ruff
2️⃣ Linting
3️⃣ Type checking
4️⃣ Tests
🎉 All checks passed!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This catches issues before pushing to GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Testing and Documentation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I asked for tests using synthetic audio (not real files). Copilot created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;154 tests with 90% coverage&lt;/li&gt;
&lt;li&gt;Test fixtures generating sine waves and pink noise&lt;/li&gt;
&lt;li&gt;No real audio files committed (all synthetic)&lt;/li&gt;
&lt;/ul&gt;
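&lt;p&gt;The fixtures are along these lines (a sketch of the idea; the actual fixtures live in the test suite, and the pink-noise approximation here is mine):&lt;/p&gt;

```python
import numpy as np

def make_sine(freq_hz: float = 440.0, seconds: float = 1.0, sr: int = 44100) -> np.ndarray:
    """Deterministic sine-wave test signal in [-1, 1]."""
    t = np.arange(int(seconds * sr)) / sr
    return np.sin(2 * np.pi * freq_hz * t).astype(np.float32)

def make_pink_noise(n_samples: int = 44100, seed: int = 0) -> np.ndarray:
    """Approximate pink noise: white noise shaped by 1/sqrt(f) in the spectrum."""
    rng = np.random.default_rng(seed)  # seeded, so tests are reproducible
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    spectrum[1:] = spectrum[1:] / np.sqrt(freqs[1:])  # skip the DC bin
    pink = np.fft.irfft(spectrum, n_samples)
    return (pink / np.max(np.abs(pink))).astype(np.float32)
```

Generating signals with known frequency content means a test can assert, say, that a 440 Hz sine is detected in the expected band, without shipping any audio files.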

&lt;p&gt;For documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sphinx setup with API docs&lt;/li&gt;
&lt;li&gt;Example gallery&lt;/li&gt;
&lt;li&gt;Terminal recordings with termtosvg&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What Worked Well
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Speed&lt;/strong&gt;: Building features was much faster; work that might have taken hours came together in focused conversations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Quality&lt;/strong&gt;: The generated code follows conventions I might have skipped:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Type hints throughout&lt;/li&gt;
&lt;li&gt;Consistent error handling&lt;/li&gt;
&lt;li&gt;Complete docstrings&lt;/li&gt;
&lt;li&gt;Good test coverage (90%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Learning&lt;/strong&gt;: I learned audio processing concepts (EBU R128, Camelot wheel) by implementing them, not just reading documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context&lt;/strong&gt;: Copilot remembered decisions across sessions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Use Rich for output" → all commands used Rich tables&lt;/li&gt;
&lt;li&gt;"Only synthetic audio in tests" → no real files in the repo&lt;/li&gt;
&lt;li&gt;Consistent coding style throughout&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Challenges
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Not Magic&lt;/strong&gt;: I still needed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand what I was asking for&lt;/li&gt;
&lt;li&gt;Review and test the generated code&lt;/li&gt;
&lt;li&gt;Fix edge cases&lt;/li&gt;
&lt;li&gt;Make architectural decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Iteration&lt;/strong&gt;: Sometimes the first solution wasn't quite right. I learned to describe problems clearly and iterate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Real Audio&lt;/strong&gt;: Synthetic test signals don't catch all edge cases. Testing with real audio files revealed bugs (like the BPM detection issue).&lt;/p&gt;

&lt;h3&gt;
  
  
  The Real Shift
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Before&lt;/strong&gt;: Think → Code → Debug → Test → Fix&lt;br&gt;&lt;br&gt;
&lt;strong&gt;With Copilot CLI&lt;/strong&gt;: Think → Describe → Review → Test&lt;/p&gt;

&lt;p&gt;I spent more time on &lt;strong&gt;what&lt;/strong&gt; to build and &lt;strong&gt;how it should work&lt;/strong&gt;, less time on syntax and boilerplate.&lt;/p&gt;
&lt;h3&gt;
  
  
  Metrics
&lt;/h3&gt;

&lt;p&gt;The project includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;750 lines&lt;/strong&gt; of source code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;154 tests&lt;/strong&gt; (90% coverage)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;19 modules&lt;/strong&gt; (all type-checked)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3 CI/CD workflows&lt;/strong&gt; (tests, docs, quality checks)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complete documentation&lt;/strong&gt; with Sphinx&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Built through conversations, not manual coding.&lt;/p&gt;
&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clear descriptions matter&lt;/strong&gt; - The better I described what I wanted, the better the results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review everything&lt;/strong&gt; - Generated code needs testing and validation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterate naturally&lt;/strong&gt; - "Actually, what if..." works well with Copilot&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests are essential&lt;/strong&gt; - Having good tests catches issues in generated code&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;GitHub Copilot CLI changed how I approach development. Instead of typing code, I focused on describing problems and reviewing solutions.&lt;/p&gt;

&lt;p&gt;mixref is a functional audio analysis tool. It does what I need it to do. The interesting part is how it was built—entirely through conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;mixref
mixref analyze your_track.wav
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Source code&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://github.com/Caparrini/mixref" rel="noopener noreferrer"&gt;github.com/Caparrini/mixref&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built with GitHub Copilot CLI and curiosity about AI-assisted development.&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>audio</category>
      <category>python</category>
      <category>cli</category>
    </item>
  </channel>
</rss>
