<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Santhosh</title>
    <description>The latest articles on Forem by Santhosh (@wydoinn).</description>
    <link>https://forem.com/wydoinn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1180758%2F9f257a96-4ffa-4807-b5e4-ff87a28ed707.jpeg</url>
      <title>Forem: Santhosh</title>
      <link>https://forem.com/wydoinn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/wydoinn"/>
    <language>en</language>
    <item>
      <title>I Scraped 120 Years of Olympic History — and You Can Too</title>
      <dc:creator>Santhosh</dc:creator>
      <pubDate>Wed, 13 Aug 2025 13:17:45 +0000</pubDate>
      <link>https://forem.com/wydoinn/i-scraped-120-years-of-olympic-history-and-you-can-too-with-python-3l43</link>
      <guid>https://forem.com/wydoinn/i-scraped-120-years-of-olympic-history-and-you-can-too-with-python-3l43</guid>
      <description>&lt;p&gt;I’ve always been fascinated by the Olympics.&lt;/p&gt;

&lt;p&gt;The stories, the records, the triumphs… but when I went looking for a clean dataset of &lt;strong&gt;every athlete in history&lt;/strong&gt;, I hit a wall.&lt;/p&gt;

&lt;p&gt;Sure, there’s &lt;a href="https://www.olympedia.org" rel="noopener noreferrer"&gt;Olympedia.org&lt;/a&gt; — an incredible resource — but no “Download” button.&lt;/p&gt;

&lt;p&gt;So I decided:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the dataset doesn’t exist, I’ll build it myself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The result? A &lt;strong&gt;Python scraper&lt;/strong&gt; that can pull &lt;strong&gt;every athlete profile&lt;/strong&gt; from 1896 to today — perfect for &lt;strong&gt;data analysis&lt;/strong&gt; and &lt;strong&gt;visualization projects&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  📌 What This Script Does
&lt;/h2&gt;

&lt;p&gt;With one command, you get:&lt;br&gt;
✅ Name, gender, height, weight&lt;br&gt;
✅ Birth &amp;amp; death info (date, city, country)&lt;br&gt;
✅ National Olympic Committee (NOC)&lt;br&gt;
✅ Last Olympic Games and sport&lt;br&gt;
✅ Medal counts (gold, silver, bronze)&lt;/p&gt;

&lt;p&gt;Saved neatly in a &lt;strong&gt;CSV&lt;/strong&gt; ready for &lt;strong&gt;Pandas&lt;/strong&gt; or &lt;strong&gt;Excel&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  📊 What You Can Do With It
&lt;/h2&gt;

&lt;p&gt;This isn’t just about scraping.&lt;/p&gt;

&lt;p&gt;Once you have the data, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visualize &lt;strong&gt;medal trends&lt;/strong&gt; over decades&lt;/li&gt;
&lt;li&gt;Explore &lt;strong&gt;which sports certain countries dominate&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Study &lt;strong&gt;athlete physique trends&lt;/strong&gt; (height/weight) over time&lt;/li&gt;
&lt;li&gt;Map &lt;strong&gt;birthplaces of medalists&lt;/strong&gt; with GeoPandas&lt;/li&gt;
&lt;/ul&gt;
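
&lt;p&gt;As a quick sketch of the first idea, here is how a decade-level medal trend could be computed with Pandas. The column names follow the scraper's CSV layout; the rows below are illustrative, not real scraped data:&lt;/p&gt;

```python
import pandas as pd

# Illustrative rows in the scraper's CSV layout (not real scraped data)
df = pd.DataFrame({
    "year": [1912, 1912, 2020, 2024],
    "gold_medal": [0, 1, 1, 0],
    "silver_medal": [2, 0, 0, 1],
    "bronze_medal": [0, 0, 1, 0],
})

# Bucket each Games year into its decade, then sum medals per decade
df["decade"] = (df["year"] // 10) * 10
trend = df.groupby("decade")[["gold_medal", "silver_medal", "bronze_medal"]].sum()
print(trend)
```

&lt;p&gt;Swap the inline frame for &lt;code&gt;pd.read_csv("olympedia.csv")&lt;/code&gt; once you have the full dataset.&lt;/p&gt;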


&lt;h2&gt;
  
  
  ⚡ How Fast Is It?
&lt;/h2&gt;

&lt;p&gt;With &lt;strong&gt;10 threads&lt;/strong&gt; and a &lt;strong&gt;0.4-second delay&lt;/strong&gt; per request,&lt;br&gt;
you can scrape thousands of athletes in under an hour — &lt;strong&gt;without hammering&lt;/strong&gt; the site.&lt;/p&gt;
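
&lt;p&gt;The pattern behind those numbers is a thread pool plus a per-request delay. A minimal sketch of the idea — the fetch function here is a hypothetical stand-in for the real HTTP request and parse:&lt;/p&gt;

```python
import time
from concurrent.futures import ThreadPoolExecutor

DELAY = 0.4  # seconds each worker waits per request, to stay polite

def fetch_athlete(athlete_id):
    # Hypothetical stand-in for the real HTTP fetch + HTML parse
    time.sleep(DELAY)
    return {"athlete_id": athlete_id}

# 10 workers, each pacing itself: roughly 25 requests/second overall
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch_athlete, range(1, 21)))

print(len(results))
```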


&lt;h2&gt;
  
  
  🚀 Quick Start
&lt;/h2&gt;

&lt;p&gt;1️⃣ Clone the repo&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Wydoinn/Olympedia-Athlete-Scraper.git
&lt;span class="nb"&gt;cd &lt;/span&gt;Olympedia-Athlete-Scraper
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ Run the scraper&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start fresh&lt;/span&gt;
python scraper.py &lt;span class="nt"&gt;--start&lt;/span&gt; 1 &lt;span class="nt"&gt;--concurrency&lt;/span&gt; 10 &lt;span class="nt"&gt;--delay&lt;/span&gt; 0.4 &lt;span class="nt"&gt;--csv&lt;/span&gt; olympedia.csv

&lt;span class="c"&gt;# Or resume where you left off&lt;/span&gt;
python scraper.py &lt;span class="nt"&gt;--resume&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3️⃣ Open &lt;code&gt;olympedia.csv&lt;/code&gt; and start exploring.&lt;/p&gt;




&lt;h2&gt;
  
  
  📂 The Data Format
&lt;/h2&gt;

&lt;p&gt;Example row:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;athlete_id,name,sex,height_cm,weight_kg,born_date,died_date,
born_city,born_region,born_country,died_city,died_region,died_country,
noc,games,year,sport,gold_medal,silver_medal,bronze_medal
19,Maurice Germot,M,178,68,1882-11-15,1958-01-06,
Vichy,Allier,FRA,Vichy,Allier,FRA,FRA,
Stockholm 1912,1912,Tennis,0,2,0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
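
&lt;p&gt;Loading the example row above with Pandas, parsing the date columns along the way:&lt;/p&gt;

```python
import io
import pandas as pd

# The header and example row exactly as the scraper writes them
csv_text = (
    "athlete_id,name,sex,height_cm,weight_kg,born_date,died_date,"
    "born_city,born_region,born_country,died_city,died_region,died_country,"
    "noc,games,year,sport,gold_medal,silver_medal,bronze_medal\n"
    "19,Maurice Germot,M,178,68,1882-11-15,1958-01-06,"
    "Vichy,Allier,FRA,Vichy,Allier,FRA,FRA,"
    "Stockholm 1912,1912,Tennis,0,2,0\n"
)
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["born_date", "died_date"])

print(df.loc[0, "name"], df.loc[0, "born_date"].year, df.loc[0, "silver_medal"])
```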






&lt;h2&gt;
  
  
  🧠 How It Works (in 20 Seconds)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-threaded&lt;/strong&gt; with &lt;code&gt;ThreadPoolExecutor&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resumable&lt;/strong&gt; with a &lt;code&gt;progress.json&lt;/code&gt; checkpoint&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-stops&lt;/strong&gt; after 1000 consecutive missing IDs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parses HTML&lt;/strong&gt; using BeautifulSoup&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Writes CSV&lt;/strong&gt; as it runs (so you can peek mid-scrape)&lt;/li&gt;
&lt;/ul&gt;
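
&lt;p&gt;The resumable checkpoint can be as simple as a tiny JSON file. A sketch of the idea — the exact keys in the real &lt;code&gt;progress.json&lt;/code&gt; may differ:&lt;/p&gt;

```python
import json
import os

PROGRESS_FILE = "progress.json"

def load_checkpoint(default=1):
    # Resume from the last saved athlete ID, or start from scratch
    if os.path.exists(PROGRESS_FILE):
        with open(PROGRESS_FILE) as f:
            return json.load(f)["last_id"]
    return default

def save_checkpoint(last_id):
    with open(PROGRESS_FILE, "w") as f:
        json.dump({"last_id": last_id}, f)

save_checkpoint(500)
print(load_checkpoint())
```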




&lt;h2&gt;
  
  
  🧹 A Note on Responsible Scraping
&lt;/h2&gt;

&lt;p&gt;Please be respectful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep a delay between requests&lt;/li&gt;
&lt;li&gt;Don’t flood the server&lt;/li&gt;
&lt;li&gt;Always credit the source (Olympedia)&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;💬 What would you want to analyze first?&lt;br&gt;
Drop a comment and let’s brainstorm!&lt;/p&gt;




</description>
      <category>programming</category>
      <category>beginners</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>YOLOv10 on Custom Dataset</title>
      <dc:creator>Santhosh</dc:creator>
      <pubDate>Sun, 09 Jun 2024 12:33:41 +0000</pubDate>
      <link>https://forem.com/wydoinn/yolov10-on-custom-dataset-4dld</link>
      <guid>https://forem.com/wydoinn/yolov10-on-custom-dataset-4dld</guid>
      <description>&lt;h2&gt;
  
  
  What is YOLO?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;You Only Look Once (YOLO)&lt;/strong&gt; is a state-of-the-art, real-time object detection algorithm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22c4x05mrv5pu3ijjo26.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22c4x05mrv5pu3ijjo26.jpeg" alt="YOLOv10 Architecture" width="474" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes YOLO popular?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Speed &lt;/li&gt;
&lt;li&gt;Detection accuracy &lt;/li&gt;
&lt;li&gt;Good generalization &lt;/li&gt;
&lt;li&gt;Open-source&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Google Colab is an excellent platform for running deep learning models due to its free access to GPUs and ease of use. This guide will walk you through the process of running the latest version, YOLOv10, on Google Colab.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before You Start
&lt;/h3&gt;

&lt;p&gt;First, make sure you have access to a GPU; you can use the &lt;code&gt;nvidia-smi&lt;/code&gt; command to check. If you run into any problems, navigate to &lt;code&gt;Edit&lt;/code&gt; -&amp;gt; &lt;code&gt;Notebook settings&lt;/code&gt; -&amp;gt; &lt;code&gt;Hardware accelerator&lt;/code&gt;, set it to &lt;code&gt;GPU&lt;/code&gt;, and then click &lt;code&gt;Save&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!nvidia-smi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install Required Packages
&lt;/h3&gt;

&lt;p&gt;Clone the YOLOv10 repository, move into it, and install the package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!git clone https://github.com/THU-MIG/yolov10.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd yolov10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Upload Data To Colab
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Mount Google Drive
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from google.colab import drive
drive.mount('/content/drive')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Upload Files Directly
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from google.colab import files
uploaded = files.upload()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Organize Data for YOLOv10
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Images:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The images directory contains subdirectories for train and val (validation) sets.&lt;/li&gt;
&lt;li&gt;Each subdirectory contains the corresponding images for training and validation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Labels:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The labels directory mirrors the images directory structure.&lt;/li&gt;
&lt;li&gt;Each text file in the labels/train and labels/val subdirectories contains the annotations for the corresponding images.&lt;/li&gt;
&lt;/ul&gt;
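
&lt;p&gt;Each line in a label file follows the standard YOLO format: a class index followed by the normalized box center, width, and height. A quick parsing sketch, with illustrative values:&lt;/p&gt;

```python
# YOLO annotation format, one object per line:
#   class_id x_center y_center width height
# Box values are normalized to the 0-1 range relative to the image size.
line = "0 0.512 0.430 0.260 0.310"  # illustrative values

class_id, x_c, y_c, w, h = line.split()
box = {
    "class_id": int(class_id),
    "x_center": float(x_c),
    "y_center": float(y_c),
    "width": float(w),
    "height": float(h),
}
print(box)
```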

&lt;p&gt;&lt;strong&gt;Directory Structure:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/my_dataset
  /images
    /train
      image1.jpg
      image2.jpg
      ...
    /val
      image1.jpg
      image2.jpg
      ...
  /labels
    /train
      image1.txt
      image2.txt
      ...
    /val
      image1.txt
      image2.txt
      ...
  data.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Data Configuration File (data.yaml):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;train: /content/my_dataset/images/train
val: /content/my_dataset/images/val

nc: N     # N for number of classes
names: ['class1', 'class2', ..., 'classN']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
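
&lt;p&gt;For example, a two-class dataset would fill in the template like this (the class names here are illustrative):&lt;/p&gt;

```yaml
train: /content/my_dataset/images/train
val: /content/my_dataset/images/val

nc: 2
names: ['cat', 'dog']
```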



&lt;h2&gt;
  
  
  Download Pre-trained Weights
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import urllib.request

# Create a directory for the weights in the current working directory
weights_dir = os.path.join(os.getcwd(), "weights")
os.makedirs(weights_dir, exist_ok=True)

# URLs of the weight files
urls = [
    "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10n.pt",
    "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10s.pt",
    "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10m.pt",
    "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10b.pt",
    "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10x.pt",
    "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10l.pt"
]

# Download each file
for url in urls:
    file_name = os.path.join(weights_dir, os.path.basename(url))
    urllib.request.urlretrieve(url, file_name)
    print(f"Downloaded {file_name}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Train Custom Model
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!yolo task=detect mode=train epochs=100 batch=4 plots=True model=weights/yolov10n.pt data=data.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Inference on Image
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!yolo task=detect mode=predict conf=0.25 save=True model=runs/detect/train/weights/best.pt source=img.jpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Inference on Video
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!yolo task=detect mode=predict conf=0.25 save=True model=runs/detect/train/weights/best.pt source=video.mp4

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This guide covers running YOLOv10 on Google Colab by setting up the environment, installing necessary libraries, and running inference with pre-trained weights. It also explains how to upload and organize data in Colab for YOLOv10, including the required directory structure and configuration files. These steps enable efficient training and inference for object detection models using Colab's resources.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>python</category>
    </item>
    <item>
      <title>Run Local LLMs Using LM Studio</title>
      <dc:creator>Santhosh</dc:creator>
      <pubDate>Sun, 07 Apr 2024 12:40:43 +0000</pubDate>
      <link>https://forem.com/wydoinn/run-local-llms-using-lm-studio-4h2a</link>
      <guid>https://forem.com/wydoinn/run-local-llms-using-lm-studio-4h2a</guid>
      <description>&lt;p&gt;In this article, I'll guide you through the process of running open-source large language models on a computer using &lt;strong&gt;LM Studio&lt;/strong&gt;. LM Studio is compatible with macOS, Linux, and Windows.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What you will find in this article:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What is LM Studio?&lt;/li&gt;
&lt;li&gt;What are the minimum hardware / software requirements?&lt;/li&gt;
&lt;li&gt;Installing LM Studio on Windows&lt;/li&gt;
&lt;li&gt;Running LM Studio&lt;/li&gt;
&lt;li&gt;Chat with your model&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  1. What is LM Studio?
&lt;/h2&gt;

&lt;p&gt;LM Studio is a software application that allows you to download, install, and run powerful large language models (LLMs) locally, on your own computer. This gives you more control and privacy compared to using cloud-based LLMs like ChatGPT.&lt;/p&gt;

&lt;p&gt;Here are some key features of LM Studio for running LLMs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discover and download various LLMs&lt;/li&gt;
&lt;li&gt;Run models on your local machine with a compatible GPU&lt;/li&gt;
&lt;li&gt;Integrate with AnythingLLM for a chatbot interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're interested in learning more about LM Studio for LLMs, you can refer to their &lt;a href="https://lmstudio.ai/" rel="noopener noreferrer"&gt;official website&lt;/a&gt; or their &lt;a href="https://github.com/lmstudio-ai" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. What are the minimum hardware / software requirements?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Apple Silicon Mac (M1/M2/M3) with macOS 13.6 or newer&lt;/li&gt;
&lt;li&gt;Windows / Linux PC with a processor that supports AVX2 (typically newer PCs)&lt;/li&gt;
&lt;li&gt;16GB+ of RAM is recommended. For PCs, 6GB+ of VRAM is recommended&lt;/li&gt;
&lt;li&gt;NVIDIA/AMD GPUs supported&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  3. Installing LM Studio on Windows
&lt;/h2&gt;

&lt;p&gt;LM Studio works flawlessly on Windows, macOS, and Linux. This quick walkthrough covers the installation process, using a Windows PC as the example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c6d0fp237icj2kpyqwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c6d0fp237icj2kpyqwq.png" alt="Download" width="422" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➡️ Go to LM Studio page and download the file: &lt;a href="https://lmstudio.ai/" rel="noopener noreferrer"&gt;Download&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the file&lt;/li&gt;
&lt;li&gt;Open the file (.exe)&lt;/li&gt;
&lt;li&gt;It will automatically install on (C:) drive&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  4. Running LM Studio
&lt;/h2&gt;

&lt;p&gt;Once LM Studio is set up, you can open the application and download various models locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3tt8tn6yp93z47708wo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3tt8tn6yp93z47708wo.png" alt="Home" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can browse for any model available on &lt;a href="https://huggingface.co/models" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt;, and the page's upper-right corner displays your machine's estimated RAM and VRAM capacities. You can open the model card on the Hugging Face website in your browser and read the README.md file to learn about the model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1fvjxxtwlftzqeelnx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1fvjxxtwlftzqeelnx8.png" alt="Search" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each available file is listed along with a hardware-compatibility estimate indicating whether your machine can run it. The following compatibility guesses are shown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full GPU offload possible&lt;/li&gt;
&lt;li&gt;Partial GPU offload possible&lt;/li&gt;
&lt;li&gt;Some GPU offload possible&lt;/li&gt;
&lt;li&gt;⚠️ Likely too large for this machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9blsp0q38e9trgwva0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9blsp0q38e9trgwva0i.png" alt="Files" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After selecting a model appropriate for your computer, you can download and run it.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Chat with your model
&lt;/h2&gt;

&lt;p&gt;Go to the chat page, load the model you have downloaded, and enter a prompt; the model will respond. You can receive the response in plaintext, markdown, or monospace format, and export the chat as JSON, plain text, a formatted prompt, or a snapshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ckciunxgouuh5a9wu04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ckciunxgouuh5a9wu04.png" alt="Chat" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can configure the hardware, inference parameters, prompt format, and model initialization under the advanced configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65rpzsgkllojyy79uyqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65rpzsgkllojyy79uyqo.png" alt="Hardware" width="312" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcowz78aos6ohao3uv4gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcowz78aos6ohao3uv4gw.png" alt="Inference Parameters" width="311" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also load multiple models at once to receive diverse responses to the same prompt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jcihvutaew4klsamuv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jcihvutaew4klsamuv4.png" alt="Multiple Model" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LM Studio's local server lets you use LLMs on your machine through an OpenAI-compatible API, for privacy, customization, and easier integration with existing code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0efe6pwi1pe3rsvof2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0efe6pwi1pe3rsvof2x.png" alt="Local HTTP server" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;
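
&lt;p&gt;Because the server mimics OpenAI's API, any OpenAI-style client code can point at it. A minimal sketch — the endpoint assumes LM Studio's default port 1234, and the model name is a placeholder:&lt;/p&gt;

```python
import json
import urllib.request

# Assumed default address of LM Studio's local server
URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="local-model"):
    # OpenAI-style chat-completion payload
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_request("Explain what an LLM is in one sentence.")

# With the local server running, send it like any OpenAI request:
# req = urllib.request.Request(URL, data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# reply = json.loads(urllib.request.urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])

print(payload["model"])
```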




&lt;h2&gt;
  
  
  6. Summary
&lt;/h2&gt;

&lt;p&gt;LM Studio is a software application that allows you to download, install, and run powerful large language models (LLMs) on your own computer. This gives you more control and privacy compared to using cloud-based LLMs. The guide outlines the requirements to run LM Studio and provides step-by-step instructions on how to install it on Windows, download models, and chat with them. LM Studio also offers features like advanced configuration, loading multiple models, and a local server for easier integration with existing code.&lt;/p&gt;




</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>python</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
