<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jayendra Matarage (Black Burn)</title>
    <description>The latest articles on Forem by Jayendra Matarage (Black Burn) (@jayendra_matarageblack).</description>
    <link>https://forem.com/jayendra_matarageblack</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2034728%2F7606a618-1632-479a-866e-c0f2de339915.jpg</url>
      <title>Forem: Jayendra Matarage (Black Burn)</title>
      <link>https://forem.com/jayendra_matarageblack</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jayendra_matarageblack"/>
    <language>en</language>
    <item>
      <title>Day 1: Project "Local AI Workstation" | Reclaiming the Core: System Reset</title>
      <dc:creator>Jayendra Matarage (Black Burn)</dc:creator>
      <pubDate>Thu, 02 Apr 2026 15:57:39 +0000</pubDate>
      <link>https://forem.com/jayendra_matarageblack/day-1-project-local-ai-workstation-reclaiming-the-core-system-reset-41gg</link>
      <guid>https://forem.com/jayendra_matarageblack/day-1-project-local-ai-workstation-reclaiming-the-core-system-reset-41gg</guid>
      <description>&lt;p&gt;A few weeks back, I tried running Ollama on my main Windows 11 rig. It should have been effortless, but it quickly turned into a nightmare of system freezes and cryptic errors. The issues vanished only after I completely wiped Ollama, leaving the root cause a mystery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Workstation Rig (Initial Attempt):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Processor: Intel Core i7-14700K (20 Cores, 28 Threads, 3400 MHz)&lt;/li&gt;
&lt;li&gt;Memory: 32GB RAM&lt;/li&gt;
&lt;li&gt;Storage: 512GB NVMe + 1TB SSD&lt;/li&gt;
&lt;li&gt;Graphics: NVIDIA RTX 3060 Series&lt;/li&gt;
&lt;li&gt;OS: Windows 11&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of wrestling with my Windows workstation, I’ve decided to pivot. I’m repurposing my old laptop, an MSI GE65 Raider, to serve as a dedicated Linux-based AI node. It’s time to get closer to the metal and build a stable environment where I can experiment without crashing my main workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Hardware: MSI GE65 Raider&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU: Intel Core i7-9750H&lt;/li&gt;
&lt;li&gt;GPU: NVIDIA GeForce RTX 2070 (Essential for those CUDA cores)&lt;/li&gt;
&lt;li&gt;Memory: 16GB RAM&lt;/li&gt;
&lt;li&gt;Storage: 2x 512GB NVMe + 1TB SSD + 1TB HDD (Plenty of room for LLM weights)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The OS Choice: Why Pop!_OS?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For a local AI rig, there are a few solid engineering reasons why Pop!_OS beats standard Ubuntu or Windows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Native NVIDIA Integration: Unlike other distros where you "install" drivers, Pop!_OS treats NVIDIA as a first-class citizen. The dedicated ISO comes with a vertically integrated stack that avoids the "black screen" or stuttering issues common with laptop GPU switching.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rust-Powered COSMIC Desktop: It’s 2026, and the new COSMIC DE (written in Rust) is a game-changer. It’s memory-safe, incredibly lightweight, and highly efficient with system resources—exactly what you want when you're pushing a GPU to its limits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;System76 Scheduler &amp;amp; Power Management: It includes a custom scheduler that prioritizes the active process. When a model is running, the OS ensures the LLM gets the CPU/GPU cycles it needs without background bloat interference.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tensor Management (Tensorman): Pop!_OS includes specialized tools like &lt;code&gt;tensorman&lt;/code&gt; to manage TensorFlow toolchains in containers, making it one of the most "plug-and-play" environments for CUDA-based development.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
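&lt;p&gt;As a taste of that last point, here's roughly what the &lt;code&gt;tensorman&lt;/code&gt; workflow looks like (the script name is just a placeholder for your own code):&lt;/p&gt;

```shell
# Install System76's container manager for TensorFlow toolchains
sudo apt install tensorman -y

# Pull the latest GPU-enabled TensorFlow image
tensorman pull latest

# Run a script inside the container with GPU and Python support
# (train.py is a placeholder, not a file from this post)
tensorman run --gpu python -- ./train.py
```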

&lt;p&gt;&lt;strong&gt;The Installation Process&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To keep things efficient, I used Ventoy to create a multi-boot drive; honestly the easiest way to handle ISOs these days. I targeted one of the 512GB NVMe drives for the OS install to ensure lightning-fast swap and boot times.&lt;/p&gt;
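&lt;p&gt;If you haven't used Ventoy before, the setup is a quick sketch like this (the version number and &lt;code&gt;/dev/sdX&lt;/code&gt; device are examples; the install step wipes the target drive, so verify it with &lt;code&gt;lsblk&lt;/code&gt; first):&lt;/p&gt;

```shell
# Extract a Ventoy release tarball (version shown is an example)
tar -xzf ventoy-1.0.99-linux.tar.gz
cd ventoy-1.0.99

# Install Ventoy to the USB stick -- /dev/sdX is a placeholder;
# this ERASES the drive, so double-check the device name
sudo sh Ventoy2Disk.sh -i /dev/sdX

# Then just copy the Pop!_OS ISO onto the Ventoy partition
# (ISO filename is illustrative)
cp pop-os_22.04_amd64_nvidia.iso /media/$USER/Ventoy/
```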

&lt;p&gt;Once the desktop loaded, I went straight to the terminal to prep the environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Standard system refresh&lt;br&gt;
&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt full-upgrade -y&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grabbing essential media codecs and Microsoft fonts&lt;br&gt;
&lt;code&gt;sudo apt install ubuntu-restricted-extras -y&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;👀 Preview for Day 2: The Ollama Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next time, we move from "Fresh OS" to "AI Server." I’ll be walking through the essential OS conditions for a stable Ollama install:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NVIDIA Kernel Verification: Ensuring the OS actually "sees" the RTX 2070 via &lt;code&gt;nvidia-smi&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;CUDA Toolkit Prep: Why you need it even if the driver is pre-installed.&lt;/li&gt;
&lt;li&gt;The One-Liner: Deploying Ollama and verifying the systemd service.&lt;/li&gt;
&lt;/ul&gt;
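&lt;p&gt;For the impatient, those three steps boil down to something like this, assuming the official Ollama install script (full details and gotchas coming on Day 2):&lt;/p&gt;

```shell
# 1. Kernel verification: confirm the OS sees the RTX 2070
nvidia-smi

# 2. CUDA toolkit prep (the driver alone isn't enough for some tooling)
sudo apt install nvidia-cuda-toolkit -y

# 3. The one-liner: install Ollama, then check its systemd unit
curl -fsSL https://ollama.com/install.sh | sh
systemctl status ollama
```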

&lt;p&gt;The big question: can a laptop from a few years ago outperform a 2026 Windows workstation in raw AI stability?&lt;/p&gt;

&lt;p&gt;#LocalAI #Ollama #Linux #PopOS #Ventoy #NVIDIA #RTX2070&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/pulse/day-1-project-local-ai-workstation-reclaiming-core-system-matarage-ymorc/" rel="noopener noreferrer"&gt;LinkedIn post&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>linux</category>
      <category>pgaichallenge</category>
      <category>nvidia</category>
    </item>
  </channel>
</rss>
