<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jess Lulka</title>
    <description>The latest articles on Forem by Jess Lulka (@jlulks).</description>
    <link>https://forem.com/jlulks</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3476605%2F8f9c9b3a-5b45-42b8-88ca-4f557174dba7.jpg</url>
      <title>Forem: Jess Lulka</title>
      <link>https://forem.com/jlulks</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jlulks"/>
    <language>en</language>
    <item>
      <title>February 2026 DigitalOcean Tutorials: Claude 4.6 and AI Agents</title>
      <dc:creator>Jess Lulka</dc:creator>
      <pubDate>Thu, 05 Mar 2026 17:00:00 +0000</pubDate>
      <link>https://forem.com/digitalocean/february-2026-digitalocean-tutorials-claude-46-and-ai-agents-14pn</link>
      <guid>https://forem.com/digitalocean/february-2026-digitalocean-tutorials-claude-46-and-ai-agents-14pn</guid>
      <description>&lt;p&gt;Whether you’ve found yourself exploring Anthropic’s latest Claude Opus 4.6 release or following along with the OpenClaw frenzy, &lt;a href="https://www.digitalocean.com/community/tutorials" rel="noopener noreferrer"&gt;DigitalOcean&lt;/a&gt; has tutorials and guides to help you get the most out of the latest AI advancements. &lt;/p&gt;

&lt;p&gt;These 10 tutorials from last month cover AI agent development, RAG troubleshooting, CUDA performance tuning, and OpenClaw on DigitalOcean. Bookmark them for later or keep them open among your 50 browser tabs to come back to.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/claude-opus" rel="noopener noreferrer"&gt;What’s New With Claude Opus 4.6&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Claude Opus 4.6’s agentic coding model feels less like a coding assistant and more like a collaborative engineer. Developers now have a massive 1M-token context window, which lets the model reason across entire codebases, docs, and long workflows without constantly re-prompting. This means faster refactors, more reliable debugging, and the ability to make iterative UI or architecture changes with just a few guided prompts. Long context plus agentic planning dramatically reduces the time between the idea and working implementation, especially when the model is directly integrated into your cloud stack. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskezjlwkt14l5zi8ddn7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskezjlwkt14l5zi8ddn7.png" alt="Claude feature benchmarks" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/conceptual-articles/self-learning-ai-agents" rel="noopener noreferrer"&gt;Self-Learning AI Agents: A High-Level Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Self-learning agents follow a fundamental loop: observe, act, get feedback, and improve. For developers, these systems aren’t just prompt-driven. They’re built around policies, reward signals, and evolving memory. We make the concept approachable by showing how you can prototype simple versions with standard Python ML tooling. This tutorial can help you determine whether your agent needs to adapt to changing environments or user behavior. You’ll also get a look at how reinforcement-style learning and persistent memory become essential design choices.&lt;/p&gt;
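
&lt;p&gt;The observe–act–feedback–improve loop can be sketched in a few lines of standard Python. This is a minimal illustrative example, not code from the tutorial: a toy two-action problem with invented success probabilities, where a running per-action value estimate stands in for a learned policy.&lt;/p&gt;

```python
import random

def run_learning_loop(steps=1000, seed=0):
    """Observe-act-feedback-improve on a toy two-action problem.

    Action 1 succeeds 80% of the time, action 0 only 20%; the agent
    learns this from reward feedback alone.
    """
    rng = random.Random(seed)
    values = [0.0, 0.0]   # learned value estimate per action
    counts = [0, 0]
    for _ in range(steps):
        # Observe/act: epsilon-greedy over current estimates (10% explore)
        if rng.random() >= 0.9:
            action = rng.randrange(2)
        else:
            action = 0 if values[0] > values[1] else 1
        # Feedback: stochastic reward from the environment
        p_success = 0.8 if action == 1 else 0.2
        reward = 1.0 if rng.random() >= 1.0 - p_success else 0.0
        # Improve: incremental mean update, reinforcement-style
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

vals = run_learning_loop()
```

After enough steps the agent's estimate for action 1 settles near its true 0.8 payoff, which is the "evolving memory" idea in miniature: behavior shifts because stored estimates shift, not because the prompt changed.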

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/cuda-performance-tuning-workflow" rel="noopener noreferrer"&gt;CUDA Guide: Workflow for Performance Tuning&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Frustrated by the guesswork involved in GPU optimization? We’ve got a step-by-step guide for you. Learn how to profile first, identify the real bottleneck—memory, compute, or occupancy—and then apply targeted optimizations rather than random tweaks. For developers working with AI or HPC workloads, the biggest win is understanding that most performance gains come from a structured workflow, not exotic kernel tricks. You’ll learn that knowing how to measure, optimize, and re-measure is the only reliable path to predictable CUDA speedups.&lt;/p&gt;
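
&lt;p&gt;The measure–optimize–re-measure workflow applies beyond CUDA. As a language-agnostic sketch (plain Python standing in for a GPU kernel, since profiling real CUDA code requires a GPU and tools like Nsight), the shape of the loop looks like this:&lt;/p&gt;

```python
import timeit

def measure(fn, repeat=5, number=100):
    """Profile first: best-of-N timing minimizes scheduler noise."""
    return min(timeit.repeat(fn, repeat=repeat, number=number))

def naive_sum():
    # Baseline: explicit Python-level loop
    total = 0
    for i in range(10_000):
        total += i * i
    return total

def optimized_sum():
    # Targeted optimization: push the loop into a builtin
    return sum(i * i for i in range(10_000))

baseline = measure(naive_sum)     # 1. measure the bottleneck
tuned = measure(optimized_sum)    # 2. apply one targeted change, 3. re-measure
speedup = baseline / tuned
```

The point is the structure, not the numbers: change one thing, confirm correctness (both functions return the same result), and compare timings before moving on.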

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/build-ai-agents-the-right-way" rel="noopener noreferrer"&gt;A Simple Guide to Building AI Agents Correctly&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This tutorial is a production blueprint for agentic systems. It covers why naive agent loops fail—runaway costs, hallucinated tool calls, and silent errors—and provides a modular architecture that includes an orchestrator, structured tools, memory, guardrails, and full observability. The most valuable takeaway for real deployments is the “start with the least autonomy” principle: Use deterministic workflows first, and add agent behavior only where it’s truly needed. To get agents running correctly, treat them like serious software systems with testing, logging, and permissions, not clever prompt chains.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3zrevpc014q94t6c3kn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3zrevpc014q94t6c3kn.png" alt="AI agent workflow " width="800" height="845"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/rag-not-working-solutions" rel="noopener noreferrer"&gt;Why Your RAG Is Not Working Effectively&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If your RAG app feels inaccurate or inconsistent, this tutorial helps you diagnose the real cause; it’s usually retrieval quality, chunking strategy, or missing evaluation rather than the model itself. You’ll walk through concrete fixes like better indexing, query rewriting, and relevance filtering so your system actually returns grounded answers. The key takeaway is that RAG performance is mostly a data-pipeline and retrieval-engineering problem, not an LLM problem.&lt;/p&gt;
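
&lt;p&gt;As a toy illustration of why retrieval quality dominates (not from the tutorial; real systems use embedding search, but the diagnostic idea is the same), a keyword-overlap retriever with a relevance filter might look like:&lt;/p&gt;

```python
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, chunks, min_overlap=1):
    """Rank chunks by token overlap with the query; drop weak matches."""
    q = tokenize(query)
    scored = []
    for chunk in chunks:
        overlap = len(q.intersection(tokenize(chunk)))
        if overlap >= min_overlap:
            scored.append((overlap, chunk))
    scored.sort(reverse=True)
    return [chunk for _, chunk in scored]

docs = [
    "Droplets are Linux-based virtual machines on DigitalOcean.",
    "GPU Droplets are optimized for AI and machine learning workloads.",
    "The control panel lets you manage billing and teams.",
]
hits = retrieve("which droplets support machine learning", docs)
```

Inspecting the ranked hits before they ever reach the LLM is the debugging habit the tutorial recommends: if the right chunk never appears here, no amount of prompt tuning will ground the answer.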

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/connect-google-to-openclaw" rel="noopener noreferrer"&gt;How to Connect Google to OpenClaw&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If you’re looking for how to connect AI assistants to real-time data, this guide shows how to wire external data sources into your agent workflow so it can act on real user content instead of static prompts. The practical win is learning how authentication, connectors, and permissions shape what your agent can safely do in production. You'll learn how to deploy OpenClaw on a DigitalOcean Droplet and connect it to Google services like Gmail, Calendar, and Drive using OAuth authentication.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/openclaw-next-steps" rel="noopener noreferrer"&gt;So You Installed OpenClaw on a DigitalOcean Droplet. Now What?&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We’ve penned plenty of resources on how to get started with OpenClaw on DigitalOcean (&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-run-openclaw" rel="noopener noreferrer"&gt;how to run it&lt;/a&gt; and how we built a &lt;a href="https://www.digitalocean.com/blog/technical-dive-openclaw-hardened-1-click-app" rel="noopener noreferrer"&gt;security-hardened Droplet&lt;/a&gt;). This follow-up focuses on moving from a working prototype to a more capable, extensible system. You learn how to layer in new tools, expand automation flows, and structure your project so it scales beyond a demo. The key takeaway is architectural: design your agent environment so new capabilities are plug-and-play rather than requiring rewrites.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/effective-context-engineering-ai-agents" rel="noopener noreferrer"&gt;Effective Context Engineering to Build Better AI Agents&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The prompts you feed your AI agent matter just as much as the model behind it. Instead of cramming everything into a single prompt, this article shows you how to structure memory, retrieval, tool outputs, and task state so the model always sees the right information at the right time. You’ll see how the context you supply is your real control surface for agent reliability, latency, and cost. Good context engineering often beats switching to a larger model.&lt;/p&gt;
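
&lt;p&gt;A minimal sketch of the idea (a hypothetical helper, with whitespace word counting as a rough stand-in for a real tokenizer): assemble the prompt from separate context sources under a budget rather than cramming everything in.&lt;/p&gt;

```python
def build_context(system, memory, retrieved, task, budget=50):
    """Assemble a prompt from separate context sources under a token budget."""
    def cost(text):
        return len(text.split())   # rough stand-in for a tokenizer

    parts = [system]
    remaining = budget - cost(system) - cost(task)
    # Freshest memory first, then highest-ranked retrieved chunks
    for extra in list(reversed(memory)) + retrieved:
        if remaining >= cost(extra):
            parts.append(extra)
            remaining -= cost(extra)
    parts.append(task)   # the actual task always goes in last
    return "\n".join(parts)

context = build_context(
    system="You are a support agent.",
    memory=["User prefers short answers."],
    retrieved=["Refund policy: refunds are issued within 30 days.",
               "filler " * 300],   # too big for the budget, gets dropped
    task="Answer: how do refunds work?",
)
```

The oversized chunk is silently excluded rather than blowing the budget, which is the trade-off the article asks you to make deliberately instead of by truncation.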

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5wiwv68w05r4jzn6l5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5wiwv68w05r4jzn6l5h.png" alt="Context engineering workflow" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/sliding-window-attention-efficient-long-context-models" rel="noopener noreferrer"&gt;Sliding Window Attention: Efficient Long-Context Modeling&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Sliding window attention makes long-context transformers far more practical by limiting how many tokens each position can “see.” Instead of every token attending to every other token (which gets expensive fast), the model focuses on a fixed local window—cutting compute costs from quadratic to linear growth. You’ll get a breakdown of how this works, how modern variants improve positional awareness, and why it’s especially useful for long documents, extended chat histories, or agent memory systems. Smarter attention design—not just bigger models—is what makes long-context AI scalable.&lt;/p&gt;
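
&lt;p&gt;The quadratic-to-linear claim is easy to check numerically. This illustrative sketch counts how many (query, key) token pairs a banded mask allows compared with full attention:&lt;/p&gt;

```python
def attended_pairs(n, window):
    """Count token pairs a sliding-window mask of half-width `window`
    allows; full attention would allow n * n pairs."""
    count = 0
    for i in range(n):
        for j in range(n):
            if window >= abs(i - j):   # j is within the local window of i
                count += 1
    return count

n = 1024
full = n * n                      # quadratic: every token attends to every token
local = attended_pairs(n, 16)     # about n * (2 * 16 + 1): linear in n
```

Doubling the sequence length quadruples `full` but only roughly doubles `local`, which is why windowed attention keeps long documents, chat histories, and agent memories affordable.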

</description>
      <category>ai</category>
      <category>claude</category>
      <category>agents</category>
      <category>learning</category>
    </item>
    <item>
      <title>How to Run Open-Weight Nemotron 3 Models on a GPU Droplet</title>
      <dc:creator>Jess Lulka</dc:creator>
      <pubDate>Tue, 03 Mar 2026 18:41:16 +0000</pubDate>
      <link>https://forem.com/digitalocean/how-to-run-open-weight-nemotron-3-models-on-a-gpu-droplet-a48</link>
      <guid>https://forem.com/digitalocean/how-to-run-open-weight-nemotron-3-models-on-a-gpu-droplet-a48</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was original written by Andrew Dugan (Senior AI Technical Content Creator II)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.nvidia.com/en-us/" rel="noopener noreferrer"&gt;NVIDIA&lt;/a&gt; has announced the newest additions to their Nemotron family of models, &lt;a href="https://research.nvidia.com/labs/nemotron/Nemotron-3/" rel="noopener noreferrer"&gt;Nemotron 3&lt;/a&gt;. There are three separate models in the Nemotron 3 family that are being released, including Nano, Super, and Ultra, which have 30, 49, and 253 billion parameters respectively with up to 1M tokens in context length. Nano was released in December of 2025, and Super and Ultra are scheduled to be released later in 2026. They are being released on NVIDIA’s &lt;a href="https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/" rel="noopener noreferrer"&gt;open model license&lt;/a&gt;, making them available for commercial use and modification and giving you ownership and complete control over generated outputs. Both the weights and &lt;a href="https://huggingface.co/nvidia/datasets?search=nemotron" rel="noopener noreferrer"&gt;training data&lt;/a&gt; are open and available on &lt;a href="https://huggingface.co/nvidia/collections?search=nemotron" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt;. This tutorial will discuss these models and how to deploy the currently available Nano on a &lt;a href="https://www.digitalocean.com/products/gradient/gpu-droplets" rel="noopener noreferrer"&gt;DigitalOcean GPU Droplet&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NVIDIA has announced Nemotron 3, a new addition to their Nemotron model lineup. Nemotron 3 consists of three new models, Nano (30B), Super (49B), and Ultra (253B).&lt;/li&gt;
&lt;li&gt;As of January 2026, the smallest model, Nano, is the only one currently available for use. Super and Ultra are scheduled for release later in 2026.&lt;/li&gt;
&lt;li&gt;All of the models are open-weight, allowing for open access for commercial use and modification. The models’ architectures employ novel efficiency improvements to increase model throughput.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Model Overviews
&lt;/h2&gt;

&lt;p&gt;The Nemotron 3 models use a &lt;a href="https://arxiv.org/html/2503.07137v1" rel="noopener noreferrer"&gt;Mixture of Experts&lt;/a&gt; hybrid &lt;a href="https://arxiv.org/abs/2312.00752" rel="noopener noreferrer"&gt;Mamba-Transformer&lt;/a&gt; architecture that is meant to increase the token generation speed, otherwise known as &lt;code&gt;throughput&lt;/code&gt;. This means that the models have fewer layers of self-attention and instead use Mamba-2 (state space model) layers and Mixture-of-Experts (MoE) layers that are computationally less expensive and faster, especially for longer input sequences. This allows the Nemotron 3 models to process longer texts faster while using less memory and resources. Some attention layers are included where needed to keep accuracy as high as possible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi53w68g3c62j7p2stjrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi53w68g3c62j7p2stjrx.png" alt="Nemotron Attention" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NVIDIA describes each of the three models as optimized for different platforms. &lt;a href="https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16" rel="noopener noreferrer"&gt;Nano&lt;/a&gt; provides cost efficiency for targeted agentic tasks without sacrificing accuracy. Super offers high accuracy for multi-agentic reasoning. Ultra maximizes reasoning accuracy.&lt;/p&gt;

&lt;p&gt;Nano is the smallest of the three and is comparable to &lt;a href="https://huggingface.co/Qwen/Qwen3-30B-A3B" rel="noopener noreferrer"&gt;Qwen3-30B&lt;/a&gt; and &lt;a href="https://huggingface.co/openai/gpt-oss-20b" rel="noopener noreferrer"&gt;GPT-OSS-20B&lt;/a&gt; in performance. It is the only one of the three available as of January 2026.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpn3rwv20vay9hed6ic2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpn3rwv20vay9hed6ic2i.png" alt="Nemotron 3 Nano benchmarks" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nano can be used for both reasoning and non-reasoning tasks with an option to turn off the reasoning capabilities through a flag in the chat template. Responses will be less accurate if reasoning is disabled in the configuration.&lt;/p&gt;

&lt;p&gt;Nano is a hybrid Mixture-of-Experts (MoE) architecture that consists of 23 Mamba-2 and MoE layers and six attention layers, with each MoE layer including 128 experts plus one shared expert. Five experts are activated per token, making 3.5 billion of the 30 billion total parameters active.&lt;/p&gt;
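
&lt;p&gt;The active-parameter arithmetic from those figures works out as follows (a back-of-envelope check using only the numbers in this article, not an exact accounting of the model internals):&lt;/p&gt;

```python
# Figures from the article: 3.5B of 30B total parameters are active,
# because only 5 experts (plus shared components) fire per token.
total_params = 30e9
active_params = 3.5e9
active_fraction = active_params / total_params   # about 0.117

# Per-token compute roughly tracks active, not total, parameters,
# which is why a 30B MoE can run like a much smaller dense model.
routed_experts = 128
experts_per_token = 5
routing_fraction = experts_per_token / routed_experts   # about 0.039
```

The active fraction (about 12%) exceeds the routing fraction (about 4%) because the shared expert and the non-MoE Mamba/attention layers are always active for every token.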

&lt;p&gt;Super and Ultra both use &lt;code&gt;LatentMoE&lt;/code&gt; and &lt;a href="https://arxiv.org/pdf/2404.19737" rel="noopener noreferrer"&gt;Multi-Token Prediction&lt;/a&gt; (MTP) layers that further increase text generation speed. MTP is the ability to predict multiple tokens at once in a single forward pass instead of only predicting a single token. LatentMoE is a novel approach to assigning experts that compresses the input data size each expert needs to process in order to reduce the amount of computation for each token. They use these efficiency savings to increase the number of experts that can be used for each token.&lt;/p&gt;

&lt;p&gt;In NVIDIA’s &lt;a href="https://research.nvidia.com/labs/nemotron/files/NVIDIA-Nemotron-3-White-Paper.pdf" rel="noopener noreferrer"&gt;white paper&lt;/a&gt; on the release, they describe Super as optimized for workloads like IT ticket automation where collaborative agents handle large-volume workloads. Ultra is the option to use when accuracy and reasoning performance are paramount.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 - Creating a GPU Droplet
&lt;/h2&gt;

&lt;p&gt;To deploy the Nemotron 3 Nano on a DigitalOcean GPU Droplet, first, sign in to your DigitalOcean account and create a GPU Droplet.&lt;/p&gt;

&lt;p&gt;Choose AI/ML-Ready as your image and select an NVIDIA H100. Add or select an SSH Key, and &lt;a href="https://cloud.digitalocean.com/registrations/new?activation_redirect=%2Fgpus%2Fnew&amp;amp;redirect_url=%2Fgpus%2Fnew" rel="noopener noreferrer"&gt;create the DigitalOcean Droplet&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2 - Connecting to Your GPU Droplet
&lt;/h2&gt;

&lt;p&gt;Once the DigitalOcean Droplet is created, you can &lt;a href="https://www.digitalocean.com/community/tutorials/ssh-essentials-working-with-ssh-servers-clients-and-keys" rel="noopener noreferrer"&gt;SSH&lt;/a&gt; (Secure Shell) into your server instance. Go to your command line and enter the following command, replacing the highlighted &lt;code&gt;your_server_ip&lt;/code&gt; placeholder value with the Public IPv4 of your instance. You can find the IP in the &lt;code&gt;Connection Details&lt;/code&gt; section of your GPU Instance Dashboard.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@your_server_ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may get a message that reads:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OutputThe authenticity of host 'your_server_ip (your_server_ip)' can't be established.....Are you sure you want to continue connecting (yes/no/[fingerprint])?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you do, you can type &lt;code&gt;yes&lt;/code&gt; and press &lt;code&gt;ENTER&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3 — Installing Python and vLLM
&lt;/h2&gt;

&lt;p&gt;Next, verify you are still connected to your Linux instance, then install Python.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install python3 python3-pip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It may notify you that additional space will be used and ask if you want to continue. If it does, type &lt;code&gt;Y&lt;/code&gt; and press &lt;code&gt;ENTER&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you receive a “Daemons using outdated libraries” message asking which services to restart, you can press &lt;code&gt;ENTER&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After Python has finished installing, &lt;a href="https://docs.vllm.ai/en/latest/" rel="noopener noreferrer"&gt;install vLLM&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install vllm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This package might take a little while to install. After it is finished installing, download the custom parser from Hugging Face.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/resolve/main/nano_v3_reasoning_parser.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The custom parser interprets Nemotron-3 Nano v3’s reasoning and tool-calling markup so vLLM can correctly serve responses and route tool calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4 - Serving the Nemotron Model
&lt;/h2&gt;

&lt;p&gt;Specify exactly which model you want to serve using the model’s ID from Hugging Face.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vllm serve --model nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 \
 --max-num-seqs 8 \
  --tensor-parallel-size 1 \
  --max-model-len 262144 \
  --port 8000 \
  --trust-remote-code \
  --reasoning-parser-plugin nano_v3_reasoning_parser.py \
  --reasoning-parser nano_v3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;max-num-seqs&lt;/code&gt; flag sets the maximum number of sequences that can be processed concurrently. In this example, up to eight single-output requests can be processed at a time.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;tensor-parallel-size&lt;/code&gt; is the number of GPUs you are spreading the model across via tensor parallelism. One is equal to a single GPU. The &lt;code&gt;max-model-len&lt;/code&gt; is the maximum total tokens per request. &lt;code&gt;trust-remote-code&lt;/code&gt; is necessary for Nemotron’s custom chat template and parsing logic.&lt;/p&gt;

&lt;p&gt;Finally, the &lt;code&gt;reasoning-parser-plugin&lt;/code&gt; and &lt;code&gt;reasoning-parser&lt;/code&gt; parameters load and select the custom reasoning parser.&lt;/p&gt;

&lt;p&gt;Once the model is loaded and served on your instance with vLLM, you can make inference calls to the endpoint using Python locally or from another server. Create a Python file called &lt;code&gt;example_vllm_request.py&lt;/code&gt; with the following code, then run it. Replace &lt;code&gt;your_server_ip&lt;/code&gt; with the IP address of your GPU Droplet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

url = "http://your_server_ip:8000/v1/chat/completions"
data = {
    "model": "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
    "max_tokens": 1000
}

response = requests.post(url, json=data)
message = response.json()['choices'][0]['message']['content']
print(message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
The capital of France is **Paris**.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you print out the entire &lt;code&gt;response.json()&lt;/code&gt; object, you can view the reasoning tokens. If you would like to run it with reasoning disabled, you can add a &lt;code&gt;chat_template_kwargs&lt;/code&gt; parameter to the data object above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data = {
    "model": "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
    "max_tokens": 1000,
    "chat_template_kwargs": {"enable_thinking": False},
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What are the hardware requirements and GPU memory needed to run Nemotron 3 Nano locally?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nemotron 3 Nano can run on GPUs with at least 60 GB of VRAM in BF16 precision, such as an A100 80 GB or &lt;a href="https://www.digitalocean.com/community/tutorials/what-is-an-nvidia-h100" rel="noopener noreferrer"&gt;H100&lt;/a&gt;. A quantized version may allow it to run on GPUs with less memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I fine-tune Nemotron 3 Nano on my own data, and what are the licensing implications?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, Nemotron 3 models are released under NVIDIA’s open model license, which permits commercial use, modification, and fine-tuning. You retain complete ownership of any outputs generated and can fine-tune the model on custom datasets for your specific use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the difference between the Mixture-of-Experts (MoE) architecture and traditional transformer models in terms of inference cost?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The MoE architecture only activates five out of 128 experts per token (3.5B of 30B parameters), making inference much more efficient than traditional dense models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Has NVIDIA released other LLMs in the past?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, NVIDIA has released a large number of open models and datasets, including other Nemotron model versions, Megatron, ASR models, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Nemotron 3 family offers comparatively effective and efficient open models with fast inference and accurate results. The smallest version, Nano, is available as of January 2026, and the two larger versions will become available in the coming months.&lt;/p&gt;

&lt;p&gt;In this tutorial, you deployed Nemotron 3 Nano on a DigitalOcean GPU Droplet. Next, you can build a workflow that uses it for data-sensitive applications that require a high degree of privacy and control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/mistral-3-models" rel="noopener noreferrer"&gt;Mistral 3 Models on DigitalOcean&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-build-parallel-agentic-workflows-with-python" rel="noopener noreferrer"&gt;How to Build Parallel Agentic Workflows with Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/run-gpt-oss-vllm-amd-gpu-droplet-rocm" rel="noopener noreferrer"&gt;Run gpt-oss 120B on vLLM with an AMD Instinct MI300X GPU Droplet&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nvidia</category>
      <category>ai</category>
      <category>learning</category>
      <category>aimodels</category>
    </item>
    <item>
      <title>Technical Deep Dive: How we Created a Security-hardened 1-Click Deploy OpenClaw</title>
      <dc:creator>Jess Lulka</dc:creator>
      <pubDate>Tue, 24 Feb 2026 22:11:29 +0000</pubDate>
      <link>https://forem.com/digitalocean/technical-deep-dive-how-we-created-a-security-hardened-1-click-deploy-openclaw-4b99</link>
      <guid>https://forem.com/digitalocean/technical-deep-dive-how-we-created-a-security-hardened-1-click-deploy-openclaw-4b99</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally written by Freddie Rice (Staff Product Security Engineer)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/resources/articles/what-is-moltbot" rel="noopener noreferrer"&gt;OpenClaw, an open source AI assistant&lt;/a&gt; (&lt;a href="https://x.com/openclaw/status/2017103710959075434" rel="noopener noreferrer"&gt;recently renamed from Moltbot&lt;/a&gt;, and earlier Clawdbot), has exploded in popularity over the last few days, and at DigitalOcean we immediately wondered “how can we enable more people to try this new technology safely and easily?” We noticed that there was a lot of interest by folks looking to use this software, but also that there was concern around the security of the open source software, especially when connecting it directly to users’ own machines. We dug in to find a way to deliver this software to our customers as fast as possible, as easily as possible and as safe as possible.&lt;/p&gt;

&lt;p&gt;At DigitalOcean, our &lt;a href="https://marketplace.digitalocean.com/apps/moltbot" rel="noopener noreferrer"&gt;1-Click Deploy OpenClaw&lt;/a&gt; (formerly 1-Click Deploy Moltbot) through our Marketplace enables us to package the latest and greatest software configured for our Droplet® server, and make it easily available to customers. Creating our 1-Click Deploy OpenClaw was the natural next step in getting this to our customers.&lt;/p&gt;

&lt;p&gt;Toying around with OpenClaw on a local machine is fun, but a local setup can limit longer-term deployment and use, and may not provide the safe environment that you need. Some of the benefits to deploying on DigitalOcean include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always available – the service is available to customers via the web&lt;/li&gt;
&lt;li&gt;Easy to connect – Droplets have a static IP address&lt;/li&gt;
&lt;li&gt;Vertical scalability – scale up CPUs, memory, and disk storage with higher workloads&lt;/li&gt;
&lt;li&gt;Reduced cognitive overload – start with basic configs and tweak only the ones that matter to you&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We made a lot of changes as we built the 1-Click Deploy OpenClaw, but the main elements we focused on were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do we communicate with the service safely?&lt;/li&gt;
&lt;li&gt;How do we keep the agentic code isolated from the rest of the system?&lt;/li&gt;
&lt;li&gt;How do we prevent attacks from the wider internet?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of that while providing a straightforward deployment UX to our customers! Let’s dig in…&lt;/p&gt;

&lt;h3&gt;
  
  
  Delivering an Image with Safe Defaults
&lt;/h3&gt;

&lt;p&gt;Our priority in creating a 1-Click Deploy OpenClaw on our Droplet was twofold: first, speed, as we wanted to get something out quickly to our users; second, a solution that provides additional security benefits. These are the actions we took to meet those goals:&lt;/p&gt;

&lt;h3&gt;
  
  
  Keeping deployments consistent (DevOps)
&lt;/h3&gt;

&lt;p&gt;We saw that there are multiple ways to deploy the software – we chose the most consistent path, which was picking a stable release from the Git repository on GitHub, pulling it and building from there.&lt;/p&gt;

&lt;p&gt;Why not pull the latest and greatest from main? Changes are happening at a rapid pace, which is awesome for feature development but can come at the expense of stability. Depending on the minute we build our 1-click image, we could get a working version or a broken version.&lt;/p&gt;

&lt;p&gt;So we make sure that we can deliver the latest stable version.&lt;/p&gt;

&lt;h3&gt;
  
  
  TLS (Keep communications safe and auditable)
&lt;/h3&gt;

&lt;p&gt;Once we had a Packer image that we could iterate on, we applied our security best practices for the 1-clicks to set up TLS. This is a crucial step to make sure that our customers can communicate with the bot in a safe way that doesn’t allow eavesdropping.&lt;/p&gt;

&lt;p&gt;Our best practices consist of using Caddy as a reverse proxy with a TLS certificate issued by LetsEncrypt. Caddy ensures that the externally exposed service is the one we want to publish and provides a safe channel over which to serve it. Furthermore, Caddy outputs logs to a location that can be audited after the fact, allowing the end user to see how their service is actually being used.&lt;/p&gt;

&lt;p&gt;A new UX improvement we added to this image is seamless TLS configuration with Let’s Encrypt for bare IP addresses, no domain name required! While OpenClaw spins up, Caddy requests a new certificate on your behalf, with no configuration needed.&lt;/p&gt;
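&lt;p&gt;As a rough sketch of that setup (the IP address and upstream port below are placeholders, not the values the image ships with), a Caddyfile for this pattern can be as small as:&lt;/p&gt;

```
# Hypothetical Caddyfile: TLS-terminating reverse proxy with an audit log
203.0.113.10 {
	reverse_proxy 127.0.0.1:18789
	log {
		output file /var/log/caddy/access.log
	}
}
```

&lt;p&gt;Caddy obtains and renews the certificate for the site address automatically and records every request to the log file for later auditing.&lt;/p&gt;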

&lt;h3&gt;
  
  
  Authz (Gateway Key + Pairing)
&lt;/h3&gt;

&lt;p&gt;How do we know that requests are coming from you? An OpenClaw gateway key is in place to make sure that only the intended user can use the platform.&lt;/p&gt;

&lt;p&gt;Next, we leaned into a feature that OpenClaw provides called “Pairing” – this exists to make sure that the devices communicating with the main server are trusted ones.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sandboxing (keep the host safe from agents)
&lt;/h3&gt;

&lt;p&gt;Part of the configuration is an Anthropic / OpenAI / Model key – these are sensitive pieces of material that are required for the software to function! So how do we prevent agents that can run arbitrary code on the machine from reading and abusing these tokens?&lt;/p&gt;

&lt;p&gt;Furthermore, how do we stop the agents from potentially destroying the machine itself?&lt;/p&gt;

&lt;p&gt;Luckily, there is a configuration available that puts the agent deployments into their own containers. If an agent blows up, it will destroy its own ephemeral Docker container, but the host filesystem stays safe from incorrect agentic modifications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Safe Defaults
&lt;/h3&gt;

&lt;p&gt;These images ship with the best configurations we implement for all of our 1-clicks, including but not limited to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail2ban&lt;/strong&gt; – Makes sure that the background noise of the internet doesn’t disrupt your Droplet. It does this by monitoring logs for failed requests to the system and dynamically updating firewall rules to block the offending sources.&lt;/p&gt;
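&lt;p&gt;For a flavor of what that looks like (a generic sketch, not the exact jails our image ships), a fail2ban jail watches a service’s log and temporarily bans IPs that fail repeatedly:&lt;/p&gt;

```
# Hypothetical /etc/fail2ban/jail.local sketch
[sshd]
enabled  = true
maxretry = 5      # failed attempts before a ban
findtime = 10m    # window in which those failures must occur
bantime  = 1h     # how long the offending IP stays blocked
```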

&lt;p&gt;&lt;strong&gt;Unattended upgrades&lt;/strong&gt; – We want to make sure that your Droplet is always up to date. We configure Ubuntu with unattended upgrades that periodically check for vulnerable packages and automatically patch them.&lt;/p&gt;
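&lt;p&gt;On Ubuntu, this behavior is controlled by a small APT configuration file; a minimal sketch (our image’s exact settings may differ) looks like:&lt;/p&gt;

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";   # refresh package lists periodically
APT::Periodic::Unattended-Upgrade "1";     # apply pending security updates
```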

&lt;h2&gt;
  
  
  Deployment Constraints and Upcoming Features
&lt;/h2&gt;

&lt;p&gt;To ensure a stable and repeatable installation, we use Packer for our image provisioning. During testing, however, we found that smaller Droplet configurations consistently hit out-of-memory errors during snapshot creation. This currently necessitates a minimum $24/month Droplet size to match the snapshot’s disk and memory requirements, but we chose to prioritize getting this tool into your hands today rather than delaying for further optimization. We are already iterating on the image to reduce its footprint and support lower-cost tiers, and in the spirit of transparency, we have &lt;a href="https://github.com/digitalocean/droplet-1-clicks" rel="noopener noreferrer"&gt;made our Packer scripts public&lt;/a&gt; so you can audit the provisioning process and gain confidence in the one-click experience.&lt;/p&gt;

&lt;p&gt;We are also working to quickly add support for all DigitalOcean Gradient AI models (including OpenAI), automatic provisioning and injection of a Gradient AI API key for the user, and more updates as OpenClaw evolves over time.&lt;/p&gt;
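&lt;p&gt;For readers unfamiliar with Packer: a build definition pairs a builder, which boots a temporary Droplet and snapshots it, with provisioners that install the software. The sketch below is illustrative only; the script name is invented, and the real templates live in the public repository:&lt;/p&gt;

```hcl
# Hypothetical sketch of a Droplet snapshot build (not our actual template).
source "digitalocean" "openclaw" {
  image  = "ubuntu-24-04-x64"
  region = "nyc3"
  size   = "s-2vcpu-4gb"   # smaller sizes ran out of memory during provisioning
}

build {
  sources = ["source.digitalocean.openclaw"]

  provisioner "shell" {
    script = "install_openclaw.sh"   # invented name for the install script
  }
}
```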

&lt;h3&gt;
  
  
  After deploy (make it yours!)
&lt;/h3&gt;

&lt;p&gt;1-Click Deploy OpenClaw is a great launch point, but OpenClaw is infinitely customizable once up and running in the Droplet. Choose which messaging platforms are the best fit for your workflows, and get chatting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started with the 1-Click Deploy OpenClaw
&lt;/h2&gt;

&lt;p&gt;Get started with the &lt;a href="https://marketplace.digitalocean.com/apps/moltbot" rel="noopener noreferrer"&gt;1-Click Deploy OpenClaw by visiting the Marketplace&lt;/a&gt;, and &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-run-moltbot" rel="noopener noreferrer"&gt;follow this tutorial&lt;/a&gt; for step-by-step instructions on how to get started.&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>digitalocean</category>
      <category>security</category>
      <category>ai</category>
    </item>
    <item>
      <title>4 AI Models (That aren’t Opus 4.6) on Our Minds This Week</title>
      <dc:creator>Jess Lulka</dc:creator>
      <pubDate>Mon, 23 Feb 2026 19:59:44 +0000</pubDate>
      <link>https://forem.com/digitalocean/4-ai-models-that-arent-opus-46-on-our-minds-this-week-h1l</link>
      <guid>https://forem.com/digitalocean/4-ai-models-that-arent-opus-46-on-our-minds-this-week-h1l</guid>
      <description>&lt;p&gt;So many models, so little time. Today, we’re bringing our attention to some super cool releases from Qwen, MiniCPM-o, ACE-Step, and GLM-OCR. So what can these models do?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://huggingface.co/Qwen/Qwen3-Coder-Next" rel="noopener noreferrer"&gt;Qwen3-Coder-Next&lt;/a&gt;: An open-weight model built for coding agents and local development that speeds up deployments just as well as more compute-hungry models. By activating just 3B parameters out of 80B total, the model can rival models that require far more compute, making large-scale deployment markedly more economical. The model is also trained for durable agent behavior, including long-horizon reasoning, sophisticated tool use, and recovery from failed executions, and, with a 256k context window plus flexible scaffold support, is designed to integrate smoothly into a wide range of existing CLI and IDE workflows.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://huggingface.co/openbmb/MiniCPM-o-4_5" rel="noopener noreferrer"&gt;MiniCPM-o 4.5&lt;/a&gt;: A game-changer for vision performance. The most advanced release in the MiniCPM-o line, packaging a 9B-parameter end-to-end architecture derived from SigLip2, Whisper-medium, CosyVoice2, and Qwen3-8B while adding full-duplex multimodal streaming. The model delivers leading vision performance that rivals or surpasses much larger proprietary systems, supports unified instruction and reasoning modes, and enables natural bilingual real-time speech with expressive voices, cloning, and role play. A major addition is simultaneous video/audio input with concurrent text and speech output, allowing the system to see, listen, talk, and even act proactively in live scenarios. It further strengthens OCR and document understanding, handles high-resolution images and high-FPS video efficiently, supports 30+ languages, and is easy to deploy across local and production environments through broad tooling, quantization options, and ready-to-run inference frameworks.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://huggingface.co/ACE-Step/Ace-Step1.5" rel="noopener noreferrer"&gt;ACE-Step 1.5&lt;/a&gt;: An open-source and legally compliant music foundation model built to deliver commercial-grade generation on everyday hardware, enabling creators to safely use outputs in professional projects. Trained on a large, legally compliant mix of licensed, royalty-free, and synthetic data, the system can produce complete songs in seconds while running locally on GPUs with under 4GB of VRAM. Its hybrid design uses a language model as an intelligent planner that turns prompts into detailed musical blueprints—covering structure, lyrics, and metadata—which are realized by a diffusion transformer, aligned through intrinsic reinforcement learning rather than external reward models. Beyond raw synthesis, ACE-Step v1.5 supports fine stylistic control, multilingual prompting, and flexible editing workflows such as covers, repainting, and vocal-to-instrumental conversion.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://huggingface.co/zai-org/GLM-OCR" rel="noopener noreferrer"&gt;GLM-OCR&lt;/a&gt;: A multimodal system for advanced document understanding built on the GLM-V encoder–decoder framework. To boost learning efficiency, accuracy, and transferability, it incorporates Multi-Token Prediction (MTP) objectives together with a stable, end-to-end reinforcement learning strategy across tasks. The architecture combines a CogViT visual backbone pre-trained on large image-text corpora, a streamlined cross-modal bridge that aggressively downsamples tokens for efficiency, and a GLM 0.5B language decoder for text generation. Paired with a two-stage workflow, layout parsing followed by parallel recognition using PP-DocLayout-V3, the model achieves reliable, high-fidelity OCR results across a wide spectrum of complex document structures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They may not have the marketing dazzle of Anthropic’s flagship model, but these four have an incredible amount of potential to help clear some vexing development issues. What models are you keeping an eye on? Add them in the comments. &lt;/p&gt;

</description>
      <category>aimodels</category>
      <category>qwen</category>
      <category>learning</category>
      <category>huggingface</category>
    </item>
    <item>
      <title>How to Lower Your AI Costs When Scaling Your Business</title>
      <dc:creator>Jess Lulka</dc:creator>
      <pubDate>Fri, 20 Feb 2026 20:31:09 +0000</pubDate>
      <link>https://forem.com/digitalocean/how-to-lower-your-ai-costs-when-scaling-your-business-4i9k</link>
      <guid>https://forem.com/digitalocean/how-to-lower-your-ai-costs-when-scaling-your-business-4i9k</guid>
      <description>&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/yTfkZ-Eusc8"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;As AI adoption grows, technological maintenance isn’t the only thing you need to keep up with; your budget also requires a watchful eye, especially because inference workloads (and their costs) can scale quickly. Your AI inference bill comes down to three things: the hardware you use, the scale you need, and how fast it generates output.&lt;/p&gt;

&lt;p&gt;If you’re curious how you can lower LLM inference spending, here are three tips to reduce your overall AI costs as you scale:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Diversify your hardware
&lt;/h2&gt;

&lt;p&gt;Hardware is a major reason AI has historically been expensive: GPUs are effectively the only processing units suited to these workloads, and demand has exceeded supply, driving up costs. This is true for consumer-grade GPUs, where it’s not uncommon to see prices two or three times above MSRP, and data center GPU scarcity is even worse.&lt;/p&gt;

&lt;p&gt;For a long time, NVIDIA held a large market share with its physical hardware and &lt;a href="https://www.digitalocean.com/community/tutorials/intro-to-cuda" rel="noopener noreferrer"&gt;compute unified device architecture (CUDA)&lt;/a&gt;-only frameworks. AMD has since introduced open-source ROCm and made it easier for teams to expand the hardware types they can use for their AI workloads, increasing GPU supply and reducing vendor lock-in. &lt;/p&gt;
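&lt;p&gt;One practical consequence of a multi-vendor fleet is that tooling should not assume CUDA is present. A tiny, illustrative shell check (it only looks for each vendor’s management CLI, nothing more):&lt;/p&gt;

```shell
# Detect which GPU management stack, if any, is installed on this machine.
if command -v nvidia-smi >/dev/null; then
  gpu_stack="NVIDIA/CUDA"
elif command -v rocm-smi >/dev/null; then
  gpu_stack="AMD/ROCm"
else
  gpu_stack="none detected"
fi
echo "GPU stack: $gpu_stack"
```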

&lt;h2&gt;
  
  
  2. Configuration (Model + KV cache) and quantization
&lt;/h2&gt;

&lt;p&gt;When running LLM inference, pay attention to GPU capacity and speed, as both affect overall performance. You need a minimum amount of memory just to load and run a model. Additional capacity beyond that allows you to have a &lt;a href="https://www.youtube.com/shorts/-hv0a_EXWuQ" rel="noopener noreferrer"&gt;bigger KV cache&lt;/a&gt;, which is critical for high-throughput performance; the KV cache stores the history of each conversation for each user that the GPU is currently serving. Without it, token generation slows down; with it, you can serve more users at once and keep token generation steady.&lt;/p&gt;

&lt;p&gt;Beyond using a KV cache and optimizing your model, consider quantization. This practice reduces numeric precision, so less GPU memory (VRAM) is required to store tokens. A 5000-token conversation, for example, can take gigabytes of VRAM to store. Those gigabytes hold a massive number of values that the GPU reuses during inference, and each value requires 2 bytes of memory at the default 16-bit precision. With 8-bit precision, you only need 1 byte per value, cutting the memory requirement in half. Note that your hardware must support 8-bit models for this to work effectively.&lt;/p&gt;
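&lt;p&gt;To make the arithmetic concrete, here is a back-of-the-envelope KV-cache estimate. The model shape below (32 layers, 32 KV heads, head dimension 128) is a hypothetical example rather than any specific model:&lt;/p&gt;

```shell
LAYERS=32; HEADS=32; HEAD_DIM=128; TOKENS=5000

# Each cached token stores one key and one value vector per layer and head.
VALUES=$((2 * LAYERS * HEADS * HEAD_DIM * TOKENS))

FP16_BYTES=$((VALUES * 2))   # 16-bit precision: 2 bytes per number
INT8_BYTES=$((VALUES * 1))   # 8-bit precision: 1 byte per number

echo "fp16 KV cache: $((FP16_BYTES / 1000000)) MB"   # roughly 2.6 GB
echo "int8 KV cache: $((INT8_BYTES / 1000000)) MB"   # half of that
```

&lt;p&gt;Halving the bytes per number halves the cache footprint, and that reclaimed VRAM is exactly the headroom that lets one GPU hold more concurrent conversations.&lt;/p&gt;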

&lt;h2&gt;
  
  
  3. Optimize your parallelism setup
&lt;/h2&gt;

&lt;p&gt;AI production workloads are massive and require gigabytes (or even terabytes) just to load models. Even if you could load a model onto a single GPU that supports 8-bit precision, there’s no guarantee you would have enough memory to run the model and its associated activations (the calculations the LLM does during inference) on just one GPU. This is where &lt;a href="https://www.digitalocean.com/community/tutorials/splitting-llms-across-multiple-gpus" rel="noopener noreferrer"&gt;tensor parallelism&lt;/a&gt; and &lt;a href="https://www.digitalocean.com/community/conceptual-articles/data-parallelism-distributed-training" rel="noopener noreferrer"&gt;data parallelism&lt;/a&gt; improve performance. &lt;/p&gt;

&lt;p&gt;When you spread your LLM models across multiple GPUs, you reduce the overall calculations (and memory) required per GPU, leaving plenty of room for activations and the KV cache. If you choose to apply this technique, consider the technical overhead of GPU data coordination and synchronization. &lt;/p&gt;
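&lt;p&gt;A weights-only sketch shows why splitting helps. The 70B parameter count and 4-GPU layout are hypothetical, and a real deployment also needs headroom for activations and the KV cache on each card:&lt;/p&gt;

```shell
PARAMS_B=70          # model size in billions of parameters (hypothetical)
BYTES_PER_PARAM=2    # fp16
GPUS=4               # tensor-parallel degree (hypothetical)

TOTAL_GB=$((PARAMS_B * BYTES_PER_PARAM))   # ~140 GB of weights in total
PER_GPU_GB=$((TOTAL_GB / GPUS))            # ~35 GB of weights per GPU

echo "weights per GPU: ${PER_GPU_GB} GB"
```

&lt;p&gt;At roughly 35 GB of weights per card, the model fits on a typical 80 GB data-center accelerator with room left for activations and cache; on a single GPU it would not fit at all.&lt;/p&gt;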

&lt;p&gt;&lt;em&gt;If you’re curious to see a practical application of these techniques, you can read our full &lt;a href="https://www.digitalocean.com/blog/technical-deep-dive-character-ai-amd" rel="noopener noreferrer"&gt;Character.ai case study&lt;/a&gt; for a technical deep dive. With these workflows in place, the company reduced its inference costs by 50% while continuing to support an app with tens of millions of users.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>inference</category>
    </item>
    <item>
      <title>How to Run OpenClaw with DigitalOcean</title>
      <dc:creator>Jess Lulka</dc:creator>
      <pubDate>Tue, 10 Feb 2026 18:45:07 +0000</pubDate>
      <link>https://forem.com/digitalocean/how-to-run-openclaw-with-digitalocean-3mpb</link>
      <guid>https://forem.com/digitalocean/how-to-run-openclaw-with-digitalocean-3mpb</guid>
      <description>&lt;p&gt;&lt;a href="https://openclaw.ai/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; (formerly known as Moltbot and Clawdbot) is an open-source, self-hosted personal AI assistant that can run directly on your computer. It can execute a variety of tasks, such as managing your calendar, browsing the web, organizing files, managing your email, and running terminal commands. It supports any Large Language Model (LLM), and you can communicate with it through standard chat apps that you already use like WhatsApp, iMessage, Telegram, Discord, or Slack.&lt;/p&gt;

&lt;p&gt;While it is technically possible to run OpenClaw on your local machine, security concerns arise when giving an AI agent open access to your computer with all of your personal data on it. A better approach is to deploy it on a separate machine specifically for OpenClaw or to deploy it on a cloud server.&lt;/p&gt;

&lt;p&gt;There are 3 ways to deploy OpenClaw with DigitalOcean. &lt;a href="https://www.digitalocean.com/community/tutorials/moltbot-quickstart-guide" rel="noopener noreferrer"&gt;You can either deploy yourself on a DigitalOcean Droplet&lt;/a&gt;, deploy with a pre-built 1-Click Application in the Droplet Marketplace, or use the DigitalOcean App Platform. Each of these options will have different security and maintenance considerations, so choose an option based on your app’s needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to choose each deployment option
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bare DigitalOcean Droplet&lt;/strong&gt;: Deploy directly on a DigitalOcean Droplet only if you require full control over server configuration and are comfortable managing security hardening manually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1-Click Application&lt;/strong&gt;: A 1-Click Application is best for solo developers who want improved security with a fast, self-contained deployment, maximal control, and minimal abstraction. It requires minimal decisions and setup. It is a great option for fast experimentation, but it is not as scalable as the App Platform option.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App Platform&lt;/strong&gt;: The App Platform is best for teams with a production-level deployment that requires long-term operational maturity. For example, if you need to scale quickly (horizontal auto-scaling), want operational consistency with automatic restarts and zero-downtime deploys, or need sleep mode for cost optimization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both the 1-Click Application and the App Platform deployment options will be covered in this tutorial below. If you prefer to manually deploy OpenClaw on a DigitalOcean Droplet without a 1-Click Application or the App Platform, you can follow the &lt;a href="https://www.digitalocean.com/community/tutorials/moltbot-quickstart-guide" rel="noopener noreferrer"&gt;Quickstart Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The DigitalOcean 1-Click and App Platform deployments handle many of the security best practices for you automatically. These security enhancements include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authenticated communication&lt;/strong&gt;: Droplets generate an OpenClaw gateway token, so communication with your OpenClaw is authenticated, essentially protecting your instance from unauthorized users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardened firewall rules&lt;/strong&gt;: Droplets harden your server with default firewall rules that rate-limit OpenClaw ports to prevent inappropriate traffic from interfering with your OpenClaw use and to help prevent denial-of-service attacks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-root user execution&lt;/strong&gt;: Droplets run OpenClaw as a non-root user on the server, limiting the attack surface if an inappropriate command is executed by OpenClaw.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker container isolation&lt;/strong&gt;: Droplets run OpenClaw inside Docker containers on your server, setting up an &lt;a href="https://docs.openclaw.ai/gateway/sandboxing" rel="noopener noreferrer"&gt;isolated sandbox&lt;/a&gt; and further preventing unintended commands from impacting your server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private DM pairing&lt;/strong&gt;: Droplets configure &lt;a href="https://docs.openclaw.ai/start/pairing" rel="noopener noreferrer"&gt;Direct Message (DM) pairing&lt;/a&gt; by default, which prevents unauthorized individuals from being able to talk to your OpenClaw.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While deploying this way on a cloud server offers security benefits, OpenClaw is still quite new. Like many new tools, it might have architectural characteristics that were not designed to work with additional security features yet. Therefore, with added security features, some of OpenClaw’s functionality may not work exactly as intended. For example, some skills might not work out of the box and can require some additional manual setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;OpenClaw is a powerful, self-hosted AI assistant that can execute tasks like managing calendars, browsing the web, and running terminal commands. It should not be run on your personal machine due to significant security risks associated with giving an AI agent high-level system access.&lt;/li&gt;
&lt;li&gt;Deploying OpenClaw on a DigitalOcean 1-Click Application or on the App Platform provides a safer environment through security features like authenticated communication, hardened firewall rules, non-root user execution, Docker container isolation, and private Direct Message (DM) pairing.&lt;/li&gt;
&lt;li&gt;OpenClaw is model-agnostic and supports various LLMs via Application Programming Interface (API) keys or local deployment, making it flexible for different use cases and preferences.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this tutorial, you will first deploy an OpenClaw instance using DigitalOcean’s 1-Click Deploy OpenClaw. Then you will deploy an instance on the App Platform. If you only need to deploy on the App Platform, skip ahead to that section below.&lt;/p&gt;

&lt;h2&gt;
  
  
  1-Click Application
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1 — Creating an OpenClaw Droplet
&lt;/h3&gt;

&lt;p&gt;First, sign in to your DigitalOcean account and create a &lt;a href="https://cloud.digitalocean.com/droplets/new" rel="noopener noreferrer"&gt;Droplet&lt;/a&gt;. On the Create Droplets page in the DigitalOcean Control Panel, under &lt;code&gt;Region&lt;/code&gt;, select the region closest to you. Under &lt;code&gt;Choose an Image&lt;/code&gt;, select the &lt;code&gt;Marketplace&lt;/code&gt; tab.&lt;/p&gt;

&lt;p&gt;In the search bar, type &lt;code&gt;OpenClaw&lt;/code&gt; and select the OpenClaw image from the search results.&lt;/p&gt;

&lt;p&gt;Next, choose a Droplet plan. The Basic plan with at least 4GB of RAM (such as the &lt;code&gt;s-2vcpu-4gb&lt;/code&gt; size) is recommended for running OpenClaw effectively.&lt;/p&gt;

&lt;p&gt;Under &lt;code&gt;Authentication&lt;/code&gt;, select &lt;code&gt;SSH Key&lt;/code&gt; and add your SSH key if you haven’t already. If you need to create an SSH key, follow the instructions in &lt;a href="https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/" rel="noopener noreferrer"&gt;How to Add SSH Keys to New or Existing Droplets&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, give your Droplet a hostname (such as &lt;code&gt;OpenClaw-server&lt;/code&gt;), and click &lt;code&gt;Create Droplet&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Alternatively, you can create an OpenClaw Droplet using the DigitalOcean API. To create a 4GB OpenClaw Droplet in the NYC3 region, use the following curl command. You’ll need to either save your &lt;a href="https://docs.digitalocean.com/reference/api/create-personal-access-token/" rel="noopener noreferrer"&gt;API access token&lt;/a&gt; to an environment variable or substitute it into the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST -H 'Content-Type: application/json' \
     -H 'Authorization: Bearer '$TOKEN'' -d \
    '{"name":"choose_a_name","region":"nyc3","size":"s-2vcpu-4gb","image":"openclaw"}' \
    "https://api.digitalocean.com/v2/droplets"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once your Droplet is created, it takes a few minutes to fully initialize. After initialization, you can SSH into your Droplet using the IPv4 address shown in your DigitalOcean dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@your_droplet_ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;your_droplet_ip&lt;/code&gt; with your Droplet’s actual IP address.&lt;/p&gt;

&lt;p&gt;Once logged in, the OpenClaw installation will be ready to configure. The DigitalOcean 1-Click Deploy OpenClaw includes OpenClaw version 2026.1.24-1 pre-installed with all necessary dependencies.&lt;/p&gt;

&lt;p&gt;You will see a welcome message from OpenClaw. Under the &lt;code&gt;Control UI &amp;amp; Gateway Access&lt;/code&gt; section, you will see a &lt;code&gt;Dashboard URL&lt;/code&gt;. Note the Dashboard URL value. You will use it later to access the GUI in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8ntl1pi8pdprid3766h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8ntl1pi8pdprid3766h.png" alt="Droplet Dashboard URL" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Within the terminal, choose &lt;code&gt;Anthropic&lt;/code&gt; as your AI Provider. If you have access to Gradient AI, you can select that option. OpenAI models will be available soon. Once you select your provider, provide the respective API/Secret key.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 — Using OpenClaw
&lt;/h3&gt;

&lt;p&gt;With the 1-Click Application, there are 2 ways to use OpenClaw: you can either use the Graphical User Interface (GUI) through your browser, or you can use the Text User Interface (TUI) through your terminal.&lt;/p&gt;

&lt;p&gt;After entering your API key, the installer may ask if you want to run pairing automation now. Pairing grants you access to the UI dashboard (GUI). If you would like to use the GUI, type &lt;code&gt;yes&lt;/code&gt; and press enter.&lt;/p&gt;

&lt;p&gt;It will then provide you with a URL. Open a browser and paste the provided URL into the address bar. This opens the OpenClaw GUI directly in your browser, using the gateway token to authenticate you for additional security. Then go back to the terminal, type &lt;code&gt;continue&lt;/code&gt;, and press enter to continue the automated pairing.&lt;/p&gt;

&lt;p&gt;In your browser, click refresh, and you will be directed to the default chat page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tdgpkrl1t6la8swgcc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tdgpkrl1t6la8swgcc1.png" alt="OpenClaw Chat Page" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you can type and send a message, and OpenClaw will respond. For example, if you ask what files it can see, it will tell you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input
What files can you currently see on my computer?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
Here’s a list of the files and directories currently visible in the sandbox workspace:
.
├── AGENTS.md
├── BOOTSTRAP.md
├── HEARTBEAT.md
├── USER.md
└── skills
    ├── 1password
    │   ├── SKILL.md
    │   └── references
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the GUI, you can review the bot’s usage, add communication channels, schedule cron jobs, add skills, and manage all aspects of OpenClaw.&lt;/p&gt;

&lt;p&gt;To use the TUI, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/opt/openclaw-tui.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Depending on the version, the script may also be located at &lt;code&gt;/opt/clawdbot-tui.sh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You’ve now successfully deployed OpenClaw on DigitalOcean and accessed it through a web browser. From here, you can explore additional OpenClaw capabilities, such as browsing the web, managing files, or executing terminal commands on your Droplet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 - Installing Skills with the 1-Click Application
&lt;/h3&gt;

&lt;p&gt;OpenClaw comes with over 50 skills automatically loaded in the skill registry. You can install skills in the GUI by navigating to the &lt;code&gt;Skills&lt;/code&gt; section in the browser dashboard. For example, to integrate with Google Calendar, search for calendar, and click on &lt;code&gt;Install&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpx52jlcjrjopoygilu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpx52jlcjrjopoygilu7.png" alt="OpenClaw Skills" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A large number of skills are available to perform a wide range of tasks including managing your files, automating web browsing, monitoring health and smart home technologies, and managing social media communication. Read through &lt;a href="https://www.digitalocean.com/resources/articles/what-is-moltbot" rel="noopener noreferrer"&gt;What is OpenClaw?&lt;/a&gt; for an overview of how OpenClaw works and what OpenClaw’s capabilities are.&lt;/p&gt;

&lt;h2&gt;
  
  
  App Platform
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1 - Creating the OpenClaw App
&lt;/h3&gt;

&lt;p&gt;Deploying through DigitalOcean’s App Platform follows a slightly different process. First, go to the &lt;a href="https://github.com/digitalocean-labs/openclaw-appplatform" rel="noopener noreferrer"&gt;OpenClaw App Platform repo&lt;/a&gt; and click on the &lt;code&gt;Deploy to Digital Ocean&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2b7qaftpj0wd1q88jlch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2b7qaftpj0wd1q88jlch.png" alt="Deploy OpenClaw to DigitalOcean" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sign in or create an account. Scroll down to the &lt;code&gt;Environment Variables&lt;/code&gt; section and click &lt;code&gt;Edit&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pu43iutgmfv7ipx583c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pu43iutgmfv7ipx583c.png" alt="Editing Environmental Variables" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add your &lt;a href="https://docs.digitalocean.com/products/gradient-ai-platform/how-to/use-serverless-inference/#keys" rel="noopener noreferrer"&gt;model access key&lt;/a&gt; to the &lt;code&gt;GRADIENT_API_KEY&lt;/code&gt; parameter. This will allow you to use your Gradient AI Serverless Inference account for the OpenClaw bot. Finally, click on &lt;code&gt;Create APP&lt;/code&gt;. It can take up to 5 minutes to finish building the app.&lt;/p&gt;

&lt;p&gt;After it has finished building, go to the &lt;code&gt;Console&lt;/code&gt; tab of the app, and confirm it is working by typing in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw gateway health --url ws://127.0.0.1:18789
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You now have a working OpenClaw app. If you would like to connect to it remotely, you will need to &lt;a href="https://docs.digitalocean.com/reference/doctl/how-to/install/" rel="noopener noreferrer"&gt;install and configure doctl&lt;/a&gt;, the official command line interface (CLI) for the DigitalOcean API. Follow the instructions to create an API token, use the token to grant account access to doctl, then use the &lt;a href="https://docs.digitalocean.com/reference/doctl/reference/apps/console/" rel="noopener noreferrer"&gt;doctl apps console&lt;/a&gt; command to initiate a console session for the app. This step is not strictly necessary, because you can use the &lt;code&gt;Console&lt;/code&gt; for your application through the DigitalOcean website.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 - Connecting OpenClaw to WhatsApp
&lt;/h3&gt;

&lt;p&gt;When you initiate the console session, you will be accessing it as the root user, so you first need to switch to the &lt;code&gt;openclaw&lt;/code&gt; user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su openclaw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then change directories into the &lt;code&gt;home&lt;/code&gt; directory of the &lt;code&gt;openclaw&lt;/code&gt; user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The &lt;code&gt;cd&lt;/code&gt; command without arguments changes to the current user’s home directory. Since you’re now the &lt;code&gt;openclaw&lt;/code&gt; user, this will navigate to &lt;code&gt;/home/openclaw&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To connect your OpenClaw application to WhatsApp, go to its &lt;code&gt;Console&lt;/code&gt; and enter the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw channels login --channel whatsapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow the instructions to scan the QR code and connect with your bot through WhatsApp.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 - Installing Skills with the App Platform Application
&lt;/h3&gt;

&lt;p&gt;To install skills through the App Platform application, browse the available skills as the &lt;code&gt;openclaw&lt;/code&gt; user in the console with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw skills
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Find a skill that you would like to use and install it with the following command, replacing &lt;code&gt;skill_name&lt;/code&gt; with the name of the skill you would like to install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx clawhub install &amp;lt;skill_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You now have an OpenClaw App Platform application with a WhatsApp connection and skills. You can execute the &lt;code&gt;openclaw&lt;/code&gt; command to access the rest of OpenClaw’s features.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Can I use a model other than Claude?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes. OpenClaw is designed to be model-agnostic, so it supports models other than Anthropic’s Claude: you can connect various Large Language Models (LLMs) via API keys or run them locally. However, with the DigitalOcean 1-Click Deploy OpenClaw outlined above, most users will only be able to use Anthropic models (support for OpenAI is coming soon!).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I deploy it on other operating systems that are not Linux?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, you can deploy OpenClaw on &lt;a href="https://docs.openclaw.ai/platforms/windows" rel="noopener noreferrer"&gt;Windows&lt;/a&gt;, &lt;a href="https://docs.openclaw.ai/platforms/macos" rel="noopener noreferrer"&gt;macOS&lt;/a&gt;, &lt;a href="https://docs.openclaw.ai/platforms/linux" rel="noopener noreferrer"&gt;Linux&lt;/a&gt;, and &lt;a href="https://docs.openclaw.ai/platforms" rel="noopener noreferrer"&gt;other platforms&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the main security concerns?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The main security concerns are OpenClaw’s high-level system access, the potential for misconfiguration, and its ability to execute arbitrary code that could harm your system. It’s important to be aware of the environment in which it’s deployed and the access it has.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I give API Key access to my Agents?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can selectively control which API keys each agent can use. By default, the OpenClaw application keeps these keys together in an environment that is available to all agents, but you can instead inject only the keys you want into the agents that should have those powers. In the “Agents” menu bar, select the agent you’d like to grant access to (or “Defaults” for all agents), then under Sandbox &amp;gt; Docker &amp;gt; Env, add the API keys that agent should use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does pricing work with OpenClaw?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OpenClaw itself is free and open source to download and use, but you pay for the LLM tokens it consumes, so the price depends on your usage. Be careful here: with scheduled jobs or other automated functionality, costs can increase quickly and unexpectedly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you deployed OpenClaw on DigitalOcean, creating a secure environment for your personal AI assistant. By running OpenClaw on a cloud server instead of your local machine, you’ve significantly reduced security risks while maintaining the full functionality of this powerful tool.&lt;/p&gt;

&lt;p&gt;The DigitalOcean OpenClaw deployment provides critical security features out of the box—including authenticated communication, hardened firewall rules, Docker container isolation, and non-root user execution—that make it safer to experiment with AI agent capabilities. You accessed it through a web browser and can now execute various tasks through your preferred messaging apps. Next, try adding new skills to your OpenClaw instance and customize the app to best suit your agentic needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/moltbot-quickstart-guide" rel="noopener noreferrer"&gt;OpenClaw Quickstart Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-build-parallel-agentic-workflows-with-python" rel="noopener noreferrer"&gt;How to Build Parallel Agentic Workflows with Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/mistral-3-models" rel="noopener noreferrer"&gt;Mistral 3 Models on DigitalOcean&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>openclaw</category>
      <category>learning</category>
      <category>digitalocean</category>
    </item>
    <item>
      <title>January 2026 DigitalOcean Tutorial Roundup: OpenClaw and LangSmith</title>
      <dc:creator>Jess Lulka</dc:creator>
      <pubDate>Thu, 05 Feb 2026 17:00:00 +0000</pubDate>
      <link>https://forem.com/digitalocean/january-2026-digitalocean-tutorial-roundup-openclaw-and-langsmith-34c3</link>
      <guid>https://forem.com/digitalocean/january-2026-digitalocean-tutorial-roundup-openclaw-and-langsmith-34c3</guid>
      <description>&lt;p&gt;Regardless of where you are on your AI knowledge journey, the &lt;a href="https://www.digitalocean.com/community" rel="noopener noreferrer"&gt;DigitalOcean Community&lt;/a&gt; offers hundreds of tutorials you can explore and test in your own development environment. With so much new technical content published each month, we’re sharing regular roundups of the latest (and most interesting) AI and machine learning guides to help you stay current.&lt;/p&gt;

&lt;p&gt;The new year kicked off with some hot topics—like OpenClaw—alongside foundational refreshers such as advanced PyTorch. Here’s a look at 10 tutorials published in January 2026 to add to your weekend reading (and coding).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/conceptual-articles/moltbot-behind-the-scenes" rel="noopener noreferrer"&gt;How Moltbot Works Behind the Scenes&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Discover what makes the OpenClaw (formerly Moltbot) AI agent so effective and what it actually does under the hood. This overview walks through its architecture and the tools you can connect for personalized workflows and recommendations. You’ll also learn how it integrates with applications, what security risks to watch for, and how to launch it on DigitalOcean Droplets in just a few clicks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwozkpy3h68cf63p2zx7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwozkpy3h68cf63p2zx7e.png" alt="Moltbot Gateway Functionality diagram" width="611" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/practical-guide-to-advanced-pytorch" rel="noopener noreferrer"&gt;The Practical Guide to Advanced PyTorch&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Ready to level up your PyTorch skills? This hands-on guide goes beyond the basics, getting into performance tuning, advanced training patterns, and real-world scaling techniques. You’ll learn how to write cleaner, faster, and more efficient code while avoiding common bottlenecks.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/introduction-to-topic-modelling" rel="noopener noreferrer"&gt;Introduction to Topic Modeling in NLP&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Topic modeling helps you make sense of large volumes of text without manually labeling everything. This tutorial walks through the fundamentals of Latent Dirichlet Allocation (LDA) and shows how to uncover hidden themes across documents, tickets, or transcripts. With clear examples and visualizations, you’ll turn unstructured text into actionable insights.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw3mksursscgyw1zc8a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw3mksursscgyw1zc8a3.png" alt="Topic modeling classification graph" width="763" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/data-secure-ai-workflows" rel="noopener noreferrer"&gt;Create and Implement Data-Secure AI Workflows&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Security shouldn’t be an afterthought when working with AI and sensitive data. This article explains how to design workflows that protect user information and reduce risk across your entire pipeline. From access controls to safe model usage patterns, you’ll learn how to properly evaluate models and manage LLM workflow data securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/multi-head-attention-simple-explained" rel="noopener noreferrer"&gt;Multi-Head Attention Explained: Queries, Keys, and Values Made Simple&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Transformers power modern AI—but they can feel like a black box. This tutorial breaks down multi-head attention in plain language by explaining queries, keys, and values step by step. You’ll see how multiple heads capture different relationships in data and why that improves performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/flashattention-4-llm-inference-optimization" rel="noopener noreferrer"&gt;FlashAttention 4: Faster, Memory-Efficient Attention for LLMs&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Speed and memory efficiency are critical when running large models at scale. This deep dive into FlashAttention 4 explains how modern attention kernels reduce memory usage and improve inference times on GPUs. Learn what’s changed, when it’s worth adopting, and how it can cut costs while boosting performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/langsmith" rel="noopener noreferrer"&gt;LangSmith Explained: Debugging and Evaluating LLM Agents&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Building agents is fun—debugging them, not so much. This tutorial introduces LangSmith, a toolkit for tracing, testing, and evaluating LLM-powered applications with real observability. You’ll track every call, inspect outputs, and systematically measure quality so you can troubleshoot with confidence instead of guesswork.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-write-and-implement-agent-skills" rel="noopener noreferrer"&gt;How to Write and Implement Agent Skills&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;As agents become more capable, their frameworks need to stay organized. This guide shows how to create modular “skills” that agents can dynamically load, keeping prompts lean and logic reusable. You’ll structure capabilities into clean, composable units that scale with your project—making agents easier to maintain and extend.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-create-data-for-fine-tuning-llms" rel="noopener noreferrer"&gt;How to Create Data for Fine-Tuning LLMs&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Fine-tuning success starts with high-quality data. This tutorial walks through collecting, cleaning, and formatting datasets so your model learns exactly what you want it to. From JSONL structures to balancing human and synthetic examples, it covers the small details that make a big difference in results.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/crewai-role-based-agent-orchestration" rel="noopener noreferrer"&gt;CrewAI: A Practical Guide to Role-Based Agent Orchestration&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What if your AI agents worked like a team instead of a single brain? This crash course introduces a role-based framework for organizing multiple agents into roles such as researcher, writer, or manager. You’ll build collaborative workflows where each agent has a defined responsibility—bringing structure, reliability, and scale to multi-agent systems.&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>learning</category>
      <category>ai</category>
      <category>resources</category>
    </item>
    <item>
      <title>DigitalOcean on Dev.to: Practical AI Insights for Builders</title>
      <dc:creator>Jess Lulka</dc:creator>
      <pubDate>Mon, 02 Feb 2026 17:25:23 +0000</pubDate>
      <link>https://forem.com/digitalocean/digitalocean-on-devto-practical-ai-insights-for-builders-3g0c</link>
      <guid>https://forem.com/digitalocean/digitalocean-on-devto-practical-ai-insights-for-builders-3g0c</guid>
      <description>&lt;p&gt;The gap between “I want to build something with AI” and actually shipping it is still huge—fragmented tooling, constant model churn, tutorials that assume you already know everything. So how do you figure out what n8n project to pursue or why your agent keeps hallucinating? Or even what tools are most useful for your AI projects, no matter the scale?     &lt;/p&gt;

&lt;p&gt;We're reinvigorating our Dev.to account to put AI tutorials and technical deep dives in your feed, straight from our technical writing staff. Look forward to guides on topics like running DeepSeek R1 on a GPU Droplet, debugging LangGraph agents, and setting up vLLM for inference—plus video walkthroughs when reading isn't enough. &lt;/p&gt;

&lt;p&gt;At DigitalOcean, we're not just a cloud company anymore. With Gradient—our unified AI cloud—we're giving developers GPU infrastructure and an agent platform for running inference at scale. And we've learned a lot along the way that we want to share.&lt;/p&gt;

&lt;p&gt;This will include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Breakdowns of new models, benchmarks, and technical concept foundations.&lt;/li&gt;
&lt;li&gt;Tutorials on deploying AI tools, LLM fine-tuning and memory management, and AI agent creation. &lt;/li&gt;
&lt;li&gt;Step-by-step articles on model debugging, data control, reinforcement learning, and using DigitalOcean infrastructure.&lt;/li&gt;
&lt;li&gt;Videos on vLLM deployment, LangGraph agents, and n8n automation.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;

  &lt;iframe src="https://www.youtube.com/embed/Om5VMlYOHdw"&gt;
  &lt;/iframe&gt;


 How exactly do vibe coding tools work to create applications? Get a rundown of how to use Lovable and n8n to generate a Yuka-style app—and download the prompts to test it out for yourself.&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you want to keep up with our hands-on tutorials, video walkthroughs, and DigitalOcean best practices, stick around; you might even catch a few discussions, contests, or memes in your feed. &lt;/p&gt;

</description>
      <category>ai</category>
      <category>digitalocean</category>
      <category>machinelearning</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
