<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bowen</title>
    <description>The latest articles on Forem by Bowen (@tuwenbo).</description>
    <link>https://forem.com/tuwenbo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3770846%2Fbe2cd2c3-6413-4344-94ef-d590f4ffa168.jpg</url>
      <title>Forem: Bowen</title>
      <link>https://forem.com/tuwenbo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tuwenbo"/>
    <language>en</language>
    <item>
      <title>I Built a Zero-Code LLM Fine-Tuning App for Mac — Here's How It Works</title>
      <dc:creator>Bowen</dc:creator>
      <pubDate>Fri, 13 Feb 2026 11:52:01 +0000</pubDate>
      <link>https://forem.com/tuwenbo/i-built-a-zero-code-llm-fine-tuning-app-for-mac-heres-how-it-works-4027</link>
      <guid>https://forem.com/tuwenbo/i-built-a-zero-code-llm-fine-tuning-app-for-mac-heres-how-it-works-4027</guid>
      <description>&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Fine-tuning LLMs locally on a Mac usually means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wrestling with Python environments&lt;/li&gt;
&lt;li&gt;Writing training scripts&lt;/li&gt;
&lt;li&gt;Managing model weights manually&lt;/li&gt;
&lt;li&gt;Figuring out how to actually &lt;em&gt;use&lt;/em&gt; your fine-tuned model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wanted something simpler.&lt;/p&gt;

&lt;h2&gt;The Solution: M-Courtyard&lt;/h2&gt;

&lt;p&gt;M-Courtyard is an open-source macOS desktop app that handles the entire fine-tuning pipeline through a visual interface:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Import&lt;/strong&gt; your documents (.txt, .pdf, .docx)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Generate&lt;/strong&gt; training data automatically using your local Ollama model&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fine-tune&lt;/strong&gt; with LoRA, watching the loss curve in real time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test&lt;/strong&gt; your model with the built-in chat&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Export&lt;/strong&gt; to Ollama with one click (Q4/Q8/F16 quantization)&lt;/li&gt;
&lt;/ol&gt;
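&lt;p&gt;For a sense of what step 2 produces under the hood: LoRA trainers built on mlx-lm typically consume JSONL, one training example per line. Here's a minimal sketch of writing such a file; the question/answer pairs are hypothetical, and the plain &lt;code&gt;{"text": ...}&lt;/code&gt; record layout is an assumption based on mlx-lm's common format, not M-Courtyard's actual schema:&lt;/p&gt;

```python
import json

# Hypothetical question/answer pairs, as step 2 might extract them
# from your documents via a local Ollama model.
pairs = [
    ("What does M-Courtyard do?", "It fine-tunes LLMs locally on a Mac."),
    ("Which framework does it build on?", "Apple's MLX framework."),
]

# One JSON object per line; a plain {"text": ...} record is the
# simplest layout mlx-lm's LoRA tooling commonly accepts.
with open("train.jsonl", "w") as f:
    for question, answer in pairs:
        record = {"text": f"Q: {question}\nA: {answer}"}
        f.write(json.dumps(record) + "\n")
```

&lt;p&gt;The app generates this for you; the point is just that the training set is ordinary, inspectable JSONL on disk.&lt;/p&gt;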

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mir80h5ufzis93fqq1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mir80h5ufzis93fqq1l.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne3wl6rh9prrkpf8cujd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne3wl6rh9prrkpf8cujd.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9apvkz458v720dhgd87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9apvkz458v720dhgd87.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Everything Runs Locally&lt;/h3&gt;

&lt;p&gt;No cloud services. No API keys. No data leaves your Mac. &lt;/p&gt;

&lt;p&gt;It's built on Apple's MLX framework, which takes full advantage of Apple Silicon's unified memory architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmouejwh2lkc72z9xi9cj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmouejwh2lkc72z9xi9cj.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gjm877761refsl9krrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gjm877761refsl9krrh.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha6q2t87bwwyobtpziml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha6q2t87bwwyobtpziml.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9de2imdb8v1bxv3pqnlu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9de2imdb8v1bxv3pqnlu.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Supported Models&lt;/h3&gt;

&lt;p&gt;Works with models from the mlx-community organization on Hugging Face:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Qwen 3 (0.6B to 14B)&lt;/li&gt;
&lt;li&gt;DeepSeek R1&lt;/li&gt;
&lt;li&gt;GLM 5&lt;/li&gt;
&lt;li&gt;Llama 3&lt;/li&gt;
&lt;li&gt;And many more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72bh5q67x9s2n1g8s6eg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72bh5q67x9s2n1g8s6eg.gif" alt=" " width="720" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;One-Click Export to Ollama&lt;/h3&gt;

&lt;p&gt;After fine-tuning, export your model directly to Ollama. Then just run &lt;code&gt;ollama run your-model-name&lt;/code&gt;.&lt;/p&gt;
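&lt;p&gt;For the curious, the one-click export is roughly equivalent to registering the quantized weights with Ollama by hand via a Modelfile. A hand-written sketch (the file name, parameter, and system prompt are hypothetical, not what the app actually emits):&lt;/p&gt;

```text
# Modelfile: point Ollama at the exported, quantized weights.
FROM ./my-finetuned-model-q4.gguf

# Optional generation defaults and a system prompt.
PARAMETER temperature 0.7
SYSTEM "You are an assistant fine-tuned on my documents."
```

&lt;p&gt;With that file in place, &lt;code&gt;ollama create your-model-name -f Modelfile&lt;/code&gt; registers the model and &lt;code&gt;ollama run your-model-name&lt;/code&gt; starts chatting with it.&lt;/p&gt;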

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0wndq2r1pmbpfxk8vhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0wndq2r1pmbpfxk8vhb.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cqyc1oia4ocwaweanl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cqyc1oia4ocwaweanl8.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Frontend&lt;/strong&gt;: React + TypeScript&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Backend&lt;/strong&gt;: Rust (Tauri 2.x)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ML Engine&lt;/strong&gt;: mlx-lm (Apple MLX)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Model Runtime&lt;/strong&gt;: Ollama&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;License&lt;/strong&gt;: AGPL-3.0&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Try It Out&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/Mcourtyard/m-courtyard" rel="noopener noreferrer"&gt;github.com/Mcourtyard/m-courtyard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Download&lt;/strong&gt;: &lt;a href="https://github.com/Mcourtyard/m-courtyard/releases/latest" rel="noopener noreferrer"&gt;Latest Release (.dmg)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Discord&lt;/strong&gt;: &lt;a href="https://discord.gg/hjkrHWrQ" rel="noopener noreferrer"&gt;Join the community&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Requirements: macOS 14+, Apple Silicon (M1/M2/M3/M4), 16GB+ RAM recommended&lt;/p&gt;
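&lt;p&gt;The 16GB recommendation is easy to sanity-check: a model's weight footprint is roughly parameters times bits-per-weight divided by 8. A back-of-envelope sketch (the bits-per-weight figures are approximations; real quantized files add metadata and per-block scales):&lt;/p&gt;

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough weight footprint in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed bits-per-weight for the export options the app offers.
for name, bpw in [("F16", 16.0), ("Q8", 8.5), ("Q4", 4.5)]:
    size = approx_size_gb(8e9, bpw)  # an 8B-parameter model
    print(f"8B at {name}: about {size:.1f} GB")
```

&lt;p&gt;At F16 an 8B model is around 16 GB of weights alone, which is why Q4 export (roughly 4.5 GB for the same model) matters on a 16 GB machine.&lt;/p&gt;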

&lt;p&gt;Feedback and contributions welcome! ⭐&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
