<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Glows.ai</title>
    <description>The latest articles on Forem by Glows.ai (@glowsai).</description>
    <link>https://forem.com/glowsai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3839425%2Fd5c428a8-2261-4a74-81d8-a9c4435dc51c.png</url>
      <title>Forem: Glows.ai</title>
      <link>https://forem.com/glowsai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/glowsai"/>
    <language>en</language>
    <item>
      <title>How to Download HuggingFace Models on Glows.ai</title>
      <dc:creator>Glows.ai</dc:creator>
      <pubDate>Thu, 02 Apr 2026 15:25:57 +0000</pubDate>
      <link>https://forem.com/glowsai/how-to-download-huggingface-models-on-glowsai-1jm3</link>
      <guid>https://forem.com/glowsai/how-to-download-huggingface-models-on-glowsai-1jm3</guid>
      <description>&lt;p&gt;This tutorial explains how to download HuggingFace models on &lt;strong&gt;Glows.ai&lt;/strong&gt;, using two available methods: Glows.ai Datadrive storage (local download and upload), and instance-based storage (download directly inside an instance).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Glows.ai Datadrive storage&lt;/strong&gt;: Data can be read and written without limits for as long as your Storage Space plan remains valid. The initial download speed depends on your local network, but because the data persists, this option suits users who repeatedly need the same data (e.g., for model serving).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance-based storage&lt;/strong&gt;: Data exists only while the instance is running; once the instance is released, the data is deleted. Because downloads share the bandwidth of the instance’s data center, they are fast. This method suits users who need the data only once (e.g., for testing model performance).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Glows.ai Datadrive Storage
&lt;/h2&gt;

&lt;p&gt;This method saves data to &lt;strong&gt;Glows.ai Datadrive&lt;/strong&gt;. Make sure your &lt;code&gt;Storage Space&lt;/code&gt; plan provides enough capacity, and allocate that capacity to the Datadrive in the region you intend to use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Allocate Storage
&lt;/h3&gt;

&lt;p&gt;Suppose you need to download a model of &lt;strong&gt;65GB&lt;/strong&gt; and plan to use an &lt;strong&gt;NVIDIA GeForce RTX 4090&lt;/strong&gt; GPU in the &lt;strong&gt;TW-03 region&lt;/strong&gt;. You’ll first need to go to &lt;a href="https://platform.glows.ai/space" rel="noopener noreferrer"&gt;Storage Space&lt;/a&gt; and purchase a &lt;strong&gt;100GB&lt;/strong&gt; storage package.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3u19bnahoaerhsu2ztgv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3u19bnahoaerhsu2ztgv.png" alt=" " width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, click the &lt;code&gt;Modify&lt;/code&gt; button in the &lt;strong&gt;Storage Space&lt;/strong&gt; interface to allocate &lt;strong&gt;70GB&lt;/strong&gt; of space to the &lt;strong&gt;TW-03 region Datadrive&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mg51k8xlrx65tdss1yo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mg51k8xlrx65tdss1yo.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;
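&lt;p&gt;The sizing above follows a simple rule: allocate at least the model size plus some headroom, rounded up. A small illustrative sketch (the 5% headroom and 10GB rounding step are assumptions for illustration, not platform requirements):&lt;/p&gt;

```python
import math

def datadrive_allocation_gb(model_gb: float, headroom: float = 0.05, step_gb: int = 10) -> int:
    """Suggest a Datadrive allocation: model size plus headroom,
    rounded up to the nearest step_gb."""
    needed = model_gb * (1 + headroom)
    return math.ceil(needed / step_gb) * step_gb

print(datadrive_allocation_gb(65))  # 65GB model with 5% headroom -> 70
```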

&lt;h3&gt;
  
  
  Datadrive Client Download
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Data Drive client&lt;/strong&gt; currently supports downloading models directly from HuggingFace to the Datadrive in the corresponding region.&lt;br&gt;
The process works as follows: the client downloads HuggingFace model chunks over your local network, then synchronizes them to the Datadrive.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the Data Drive client: &lt;a href="https://glows.ai/datadrive" rel="noopener noreferrer"&gt;Download here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow the tutorial: &lt;a href="https://docs.glows.ai/docs/datadrive-app" rel="noopener noreferrer"&gt;Download models from HuggingFace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Instance-based Storage
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Create an Instance
&lt;/h3&gt;

&lt;p&gt;This method requires creating an instance on &lt;strong&gt;Glows.ai&lt;/strong&gt;. Suppose you use an &lt;strong&gt;NVIDIA GeForce RTX 4090 GPU&lt;/strong&gt; in the &lt;strong&gt;TW-03 region&lt;/strong&gt; with the &lt;strong&gt;CUDA 12.8 Torch 2.8.0 Base&lt;/strong&gt; environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e27arz3if6fbz59z68y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e27arz3if6fbz59z68y.png" alt=" " width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the instance is created, you can connect to it via &lt;strong&gt;SSH&lt;/strong&gt; or &lt;strong&gt;HTTP Port 8888 (JupyterLab)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs8icqazjdds7gtlo17k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs8icqazjdds7gtlo17k.png" alt=" " width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Download Model Using Commands
&lt;/h3&gt;

&lt;p&gt;JupyterLab is simple to use; the following example demonstrates the steps there.&lt;br&gt;
Open a new &lt;strong&gt;Terminal&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxhrsdscxgpdzmfy1tgl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxhrsdscxgpdzmfy1tgl.png" alt=" " width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the following command to install HuggingFace’s official model management tool &lt;strong&gt;huggingface_hub&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-U&lt;/span&gt; huggingface_hub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ln55jksy06dc95zvr9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ln55jksy06dc95zvr9m.png" alt=" " width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once installed, you can use the &lt;code&gt;hf&lt;/code&gt; command to download model files directly to the instance.&lt;br&gt;
For example, to download &lt;code&gt;openai/gpt-oss-20b&lt;/code&gt; into the &lt;code&gt;/gpt-oss-20b&lt;/code&gt; directory, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;hf download openai/gpt-oss-20b &lt;span class="nt"&gt;--local-dir&lt;/span&gt; /gpt-oss-20b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanfhkd4hbr49gf1sgepl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanfhkd4hbr49gf1sgepl.png" alt=" " width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;
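&lt;p&gt;The shell command above also has a Python equivalent in &lt;code&gt;huggingface_hub&lt;/code&gt;. A minimal sketch (the import is done lazily so the snippet can be read or imported without starting the roughly 65GB download; calling the function performs the actual transfer):&lt;/p&gt;

```python
def fetch_model(repo_id: str, local_dir: str) -> str:
    """Download a full model snapshot to local_dir; returns the local path."""
    from huggingface_hub import snapshot_download  # pip install -U huggingface_hub
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

# fetch_model("openai/gpt-oss-20b", "/gpt-oss-20b")  # starts a ~65GB download
```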




&lt;h2&gt;
  
  
  Running HuggingFace Models on Glows.ai
&lt;/h2&gt;

&lt;p&gt;Some frameworks support directly loading and running HuggingFace models, such as &lt;strong&gt;Transformers&lt;/strong&gt;, &lt;strong&gt;SGLang&lt;/strong&gt;, and &lt;strong&gt;GPUStack&lt;/strong&gt;.&lt;br&gt;
You can use the software you’re most familiar with for deployment or refer to the tutorials below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://glows.ai/article/deepseek_r1_using_sglang" rel="noopener noreferrer"&gt;How to run DeepSeek-R1 on multiple machines with multiple GPUs using SGLang on Glows.ai&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://glows.ai/article/run_gpustack_on_glowsai" rel="noopener noreferrer"&gt;How to Run GPUStack on Glows.ai&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
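&lt;p&gt;As a minimal sketch of the Transformers route (assuming &lt;code&gt;transformers&lt;/code&gt; and &lt;code&gt;torch&lt;/code&gt; are installed and the model was downloaded to &lt;code&gt;/gpt-oss-20b&lt;/code&gt; as above; this is an illustration, not the only way to serve the model):&lt;/p&gt;

```python
def generate(prompt: str, model_dir: str = "/gpt-oss-20b", max_new_tokens: int = 64) -> str:
    """Load a locally downloaded model with Transformers and generate text."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# generate("Hello, world")  # requires a GPU with enough memory for the model
```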

&lt;p&gt;&lt;a href="https://huggingface.co/" rel="noopener noreferrer"&gt;HuggingFace&lt;/a&gt; official website also provides usage examples.&lt;br&gt;
If you have any questions or suggestions during your implementation on Glows.ai, feel free to contact us.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpfyc7cj2iytoubdes15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpfyc7cj2iytoubdes15.png" alt=" " width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>machinelearning</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Run OpenClaw on Glows.ai</title>
      <dc:creator>Glows.ai</dc:creator>
      <pubDate>Mon, 23 Mar 2026 09:30:55 +0000</pubDate>
      <link>https://forem.com/glowsai/how-to-run-openclaw-on-glowsai-mb</link>
      <guid>https://forem.com/glowsai/how-to-run-openclaw-on-glowsai-mb</guid>
      <description>&lt;p&gt;This tutorial will walk you through how to rent an &lt;strong&gt;NVIDIA GeForce RTX 4090&lt;/strong&gt; GPU on Glows.ai to run OpenClaw with a local model, and share a safer and more convenient deployment approach.&lt;/p&gt;

&lt;p&gt;This article includes the following content:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to create an instance on Glows.ai&lt;/li&gt;
&lt;li&gt;How to configure a local model with OpenClaw&lt;/li&gt;
&lt;li&gt;How to connect to and use OpenClaw&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;OpenClaw is an open-source AI Agent framework that has rapidly gained popularity. It is designed to combine large language models with practical tool capabilities, enabling AI not only to answer questions but also to execute real tasks on computers or servers. It typically runs self-hosted and can be deployed on local devices or in cloud environments, interacting with users through APIs or chat applications.&lt;/p&gt;

&lt;p&gt;Unlike traditional conversational AI, OpenClaw functions more like an intelligent assistant with action capabilities: the AI model is responsible for understanding goals and making decisions, while OpenClaw handles tool orchestration, command execution, and overall workflow management.&lt;/p&gt;

&lt;p&gt;Now let’s dive into a hands-on example together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an Instance
&lt;/h2&gt;

&lt;p&gt;We create an on-demand instance on Glows.ai; you can refer to the &lt;a href="https://docs.glows.ai/docs/create-new" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt;. Make sure to use the officially preconfigured &lt;strong&gt;OpenClaw&lt;/strong&gt; (img-6np58jp2) image.&lt;/p&gt;

&lt;p&gt;On the &lt;code&gt;Create New&lt;/code&gt; page, select Inference GPU -- 4090 for Workload Type, then choose the &lt;strong&gt;OpenClaw&lt;/strong&gt; image, which comes preconfigured with the required environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq89bxfg76cnvocvhqkpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq89bxfg76cnvocvhqkpw.png" alt=" " width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Datadrive&lt;/strong&gt; is a cloud storage service provided by Glows.ai. Before creating an instance, you can upload the data, models, code, and other content you want to use to Datadrive. When creating the instance, simply click the &lt;code&gt;Mount&lt;/code&gt; button in the interface to mount Datadrive to the instance being created, which lets you read and write Datadrive content directly from within the instance.&lt;/p&gt;

&lt;p&gt;In this tutorial, we only run inference services, so mounting Datadrive is not required.&lt;/p&gt;

&lt;p&gt;Once everything is ready, click &lt;code&gt;Complete Checkout&lt;/code&gt; in the lower-right corner to complete instance creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbgovdlnj5kxgae7grwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbgovdlnj5kxgae7grwl.png" alt=" " width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The estimated startup time for an &lt;strong&gt;OpenClaw&lt;/strong&gt; image instance is 30–60 seconds. We can view the instance status and related information in the &lt;code&gt;My Instances&lt;/code&gt; interface. After the instance starts successfully, we will see the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSH Port 22&lt;/strong&gt; is the SSH connection for the instance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Port 8888&lt;/strong&gt; is the JupyterLab connection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Port 11434&lt;/strong&gt; is the Ollama API connection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5i1cq5w5tc544nojxrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5i1cq5w5tc544nojxrg.png" alt=" " width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;
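&lt;p&gt;Once the instance is up, the Ollama service on &lt;strong&gt;HTTP Port 11434&lt;/strong&gt; can also be checked programmatically: &lt;code&gt;/api/tags&lt;/code&gt; is Ollama's documented endpoint for listing locally available models. A small stdlib-only sketch (the host and port placeholders come from your own instance page):&lt;/p&gt;

```python
import json
from urllib.request import urlopen

def list_ollama_models(base_url: str) -> list:
    """Return the names of models available on an Ollama server."""
    with urlopen(base_url.rstrip("/") + "/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

# list_ollama_models("http://HOST:PORT")  # values from the My Instances page
```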

&lt;h2&gt;
  
  
  Connect to the Instance
&lt;/h2&gt;

&lt;p&gt;Visit the link corresponding to &lt;code&gt;HTTP Port 8888&lt;/code&gt; on the instance page to open the JupyterLab service. Then follow the illustration to create a new Terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n7d6reaq5x95ywrlu97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n7d6reaq5x95ywrlu97.png" alt=" " width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Upgrade OpenClaw
&lt;/h2&gt;

&lt;p&gt;Thanks to the open-source community, OpenClaw is iterating very quickly. You can upgrade to the latest version to obtain the full feature set.&lt;/p&gt;

&lt;p&gt;First, enter the following commands in the Terminal to check the version status. You will see the latest available version number.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw &lt;span class="nt"&gt;--version&lt;/span&gt;
openclaw update status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8o69pf0j6kig98utejx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8o69pf0j6kig98utejx3.png" alt=" " width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then enter the following command to update the version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppagg0zuas4923548w2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppagg0zuas4923548w2w.png" alt=" " width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reminder:&lt;/strong&gt; The final &lt;code&gt;systemctl&lt;/code&gt; command error can be ignored. This is because &lt;code&gt;systemctl&lt;/code&gt; is not supported inside Docker containers. The tutorial below will explain how to start the OpenClaw service manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic OpenClaw Configuration
&lt;/h2&gt;

&lt;p&gt;First, enter the following command to open the OpenClaw configuration interface, mainly for agreeing to the OpenClaw terms and configuring the Telegram API Token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw onboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iuskmttwrcjqikdyrxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iuskmttwrcjqikdyrxg.png" alt=" " width="692" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the default options. The model configuration can be left empty for now, as we will configure a local Ollama model later. You may also configure a third-party model API as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uje9d5k4sk29opgv4ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uje9d5k4sk29opgv4ii.png" alt=" " width="762" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can configure the messaging app of your choice; this tutorial uses Telegram as an example. First, send the following command to &lt;code&gt;@BotFather&lt;/code&gt; on Telegram to create a new Bot and obtain its Bot API Token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/newbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flimj6f3by9dgtzh18jna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flimj6f3by9dgtzh18jna.png" alt=" " width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then enter the Telegram Bot API Token in the OpenClaw configuration interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz6l74je5lgv5rx3zx5b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz6l74je5lgv5rx3zx5b.png" alt=" " width="655" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, when you see &lt;code&gt;Onboarding complete&lt;/code&gt;, the configuration is finished.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5l2lpnr11zxardat36h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5l2lpnr11zxardat36h.png" alt=" " width="610" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenClaw Local Model Configuration
&lt;/h2&gt;

&lt;p&gt;With Ollama, you only need a single command to download and configure a model for OpenClaw. The following command uses the &lt;code&gt;qwen3.5:4b&lt;/code&gt; model as an example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama launch openclaw &lt;span class="nt"&gt;--model&lt;/span&gt; qwen3.5:4b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg31dn74yt4st6tsn52ev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg31dn74yt4st6tsn52ev.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After downloading the model, the OpenClaw service will start automatically. We can then begin using OpenClaw.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reminder:&lt;/strong&gt; Remember the link under &lt;code&gt;Open the Web UI&lt;/code&gt; here, especially the token. It will be needed later when we access the Web interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Entry Points for Interacting with OpenClaw
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Chat in the CLI
&lt;/h3&gt;

&lt;p&gt;After the previous command finishes running, you can directly chat with OpenClaw in the interface. For example, here we ask it to create a Python file and write the Fibonacci algorithm into it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr7inw50yi4orm32xfpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr7inw50yi4orm32xfpx.png" alt=" " width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Chat in Telegram
&lt;/h3&gt;

&lt;p&gt;First, send a message to the Telegram Bot created earlier. After a short while, you will receive an authentication command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskndht6bt75fe9l3v2k9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskndht6bt75fe9l3v2k9.png" alt=" " width="632" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the authentication command above and execute it in the instance Terminal. Once it completes, you can start using OpenClaw in Telegram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ungpk5hoatqh3khvfof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ungpk5hoatqh3khvfof.png" alt=" " width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To test its memory capability, we can directly ask it for the path of the Fibonacci algorithm file we asked it to write earlier. As shown in the screenshot, the Bot quickly replies with both the path and the source code content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xnqd5tp5uy1478arezt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xnqd5tp5uy1478arezt.png" alt=" " width="703" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Chat in a Web Browser
&lt;/h3&gt;

&lt;p&gt;For security reasons, OpenClaw binds its service host to &lt;code&gt;127.0.0.1&lt;/code&gt; by default. We need to use SSH port forwarding to forward the OpenClaw service port from the instance to a local computer port before accessing it.&lt;/p&gt;

&lt;p&gt;On the &lt;code&gt;My Instances&lt;/code&gt; page in Glows.ai, you can view the SSH information. We need to convert the SSH Command into an SSH port forwarding command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscdg3p3i31k94bzbbngq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscdg3p3i31k94bzbbngq.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The modified command is as follows. You only need to replace these parts of the command.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace &lt;code&gt;2xxx7&lt;/code&gt; with the corresponding value in the SSH Command&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;18888&lt;/code&gt; with any available port number on your local computer&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;root@tw-05.access.glows.ai&lt;/code&gt; with the corresponding value in the SSH Command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-p&lt;/span&gt; 2xxx7 &lt;span class="nt"&gt;-NL&lt;/span&gt; 18888:localhost:18789 root@tw-05.access.glows.ai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the modified SSH port forwarding command in your local Terminal/CMD. After pressing Enter, paste the SSH Password. It is normal for nothing to be displayed after pasting the password. Press Enter again; if no errors appear, the port has been forwarded successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib9uskk1m5yeqenyzfls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib9uskk1m5yeqenyzfls.png" alt=" " width="800" height="45"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reminder:&lt;/strong&gt; If you are a Windows user, it is recommended to download and install &lt;a href="https://git-scm.com/install/windows" rel="noopener noreferrer"&gt;Git for Windows&lt;/a&gt;, which includes an SSH client (Git Bash).&lt;/p&gt;

&lt;p&gt;After the forwarding succeeds, visit &lt;code&gt;http://localhost:18888/#token=xxxxxxxx&lt;/code&gt; locally. Please replace the token value with the token displayed when starting the OpenClaw service in OpenClaw Local Model Configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kecfdnob33oeykh98h1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kecfdnob33oeykh98h1.png" alt=" " width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>webdev</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
