<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: DigitalOcean</title>
    <description>The latest articles on Forem by DigitalOcean (@digitalocean_staff).</description>
    <link>https://forem.com/digitalocean_staff</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F64516%2Fa0c9989b-6d18-46c7-bc66-4c2c1580534e.jpg</url>
      <title>Forem: DigitalOcean</title>
      <link>https://forem.com/digitalocean_staff</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/digitalocean_staff"/>
    <language>en</language>
    <item>
      <title>Build an End-to-End RAG Pipeline for LLM Applications</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Wed, 01 Apr 2026 01:06:34 +0000</pubDate>
      <link>https://forem.com/digitalocean/build-an-end-to-end-rag-pipeline-for-llm-applications-1330</link>
      <guid>https://forem.com/digitalocean/build-an-end-to-end-rag-pipeline-for-llm-applications-1330</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally written by Shaoni Mukherjee (Technical Writer)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/resources/articles/large-language-models" rel="noopener noreferrer"&gt;Large language models&lt;/a&gt; have transformed the way we build intelligent applications. &lt;a href="https://www.digitalocean.com/products/gradient/platform" rel="noopener noreferrer"&gt;Generative AI Models&lt;/a&gt; can summarize documents, generate code, and answer complex questions. However, they still face a major limitation: they cannot access private or continuously changing knowledge unless that information is incorporated into their training data.&lt;/p&gt;

&lt;p&gt;Retrieval-Augmented Generation (RAG) addresses this limitation by combining information retrieval systems with generative AI models. Instead of relying entirely on the knowledge embedded in model weights, a RAG system retrieves relevant information from external sources and provides it to the language model during inference. The model then generates a response grounded in this retrieved context.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;end-to-end RAG pipeline&lt;/strong&gt; refers to the full system that manages this process from beginning to end. It includes ingesting documents, transforming them into embeddings, storing them in a vector database, retrieving relevant information for a user query, and generating an answer using a large language model.&lt;/p&gt;

&lt;p&gt;This architecture is increasingly used in modern AI systems such as enterprise knowledge assistants, internal documentation search engines, developer copilots, and AI customer support tools. Organizations adopt RAG because it allows models to remain lightweight while still accessing large knowledge bases that change frequently.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will walk through how to design and build a complete RAG pipeline. Along the way, we will explore architectural considerations, optimization strategies, and production challenges developers encounter when deploying retrieval-based AI systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmeku3hdzligtrv0nf06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmeku3hdzligtrv0nf06.png" alt="Knowledge and Vector Storage for RAG pipeline" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAG combines retrieval and generation for more accurate AI systems&lt;/strong&gt;: Retrieval-Augmented Generation (RAG) bridges the gap between static language models and dynamic, real-world data. Instead of relying only on pre-trained knowledge, it fetches relevant information at runtime and uses it to generate answers. This makes responses more accurate, up-to-date, and context-aware. It is especially useful for applications like chatbots, internal knowledge assistants, and search systems. Overall, RAG helps reduce hallucinations and improves trust in AI-generated outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector embeddings are the foundation of semantic search in RAG&lt;/strong&gt;: Embeddings convert text into numerical vectors that capture meaning rather than exact wording. This allows the system to understand similarity between queries and documents even if they use different phrasing. As a result, retrieval becomes more intelligent and context-driven instead of keyword-based. High-quality embedding models like &lt;code&gt;text-embedding-3-large&lt;/code&gt; or &lt;code&gt;bge-large-en&lt;/code&gt; can significantly improve retrieval performance. Choosing the right embedding model directly impacts the overall quality of your RAG system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Each component of the pipeline plays a critical role&lt;/strong&gt;: A RAG system is made up of multiple steps, including ingestion, chunking, embedding, storage, retrieval, and generation. If any one component is poorly optimized, it can affect the entire pipeline’s performance. For example, bad chunking can lead to irrelevant retrieval, even if your embedding model is strong. Similarly, weak retrieval will result in poor answers, no matter how powerful the language model is. This is why building an end-to-end RAG system requires careful design and tuning at every stage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluation is essential for building reliable RAG applications&lt;/strong&gt;: Building a RAG pipeline is not enough; you must also evaluate how well it performs. This includes checking whether the system retrieves the correct documents and whether the generated answers are accurate and grounded. Metrics like precision and recall help measure retrieval quality, while human evaluation helps assess answer correctness. Creating benchmark datasets with known questions and answers makes it easier to track improvements over time. Continuous evaluation ensures your system remains reliable in production.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding the RAG System Architecture
&lt;/h2&gt;

&lt;p&gt;Before implementing the pipeline, it is important to understand how the different components interact. A typical &lt;strong&gt;RAG system architecture&lt;/strong&gt; can be divided into two major workflows: the indexing pipeline and the retrieval pipeline.&lt;/p&gt;

&lt;p&gt;The indexing pipeline prepares the knowledge base so that it can be searched efficiently. During this stage, documents are ingested, cleaned, split into chunks, converted into embeddings, and stored in a &lt;a href="https://www.digitalocean.com/community/tutorials/beyond-vector-databases-rag-without-embeddings" rel="noopener noreferrer"&gt;vector database&lt;/a&gt;. This process is usually executed offline or periodically when new data becomes available.&lt;/p&gt;

&lt;p&gt;The retrieval pipeline operates during inference. When a user asks a question, the system converts that query into an &lt;a href="https://www.digitalocean.com/community/tutorials/beyond-vector-databases-rag-without-embeddings" rel="noopener noreferrer"&gt;embedding&lt;/a&gt;, searches the vector database for semantically similar chunks, and provides those retrieved passages to the language model. The model then generates a response using both the query and the contextual information.&lt;/p&gt;

&lt;p&gt;A simplified representation of the pipeline looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Document Sources
       (PDFs, Docs, APIs, Knowledge Base)
                        |
                        v
               Document Processing
                        |
                        v
                  Text Chunking
                        |
                        v
               Embedding Generation
                        |
                        v
               Vector Database Index
                        |
                        v
User Query → Query Embedding → Similarity Search
                        |
                        v
             Retrieved Context Chunks
                        |
                        v
                  LLM Generation
                        |
                        v
                  Final Response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This architecture enables the system to retrieve information dynamically rather than relying solely on model training.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy49fm6102laxs8huvmqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy49fm6102laxs8huvmqn.png" alt="RAG System Architecture" width="750" height="676"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Ingestion in a RAG Pipeline
&lt;/h2&gt;

&lt;p&gt;The first stage of the pipeline involves gathering the data that the AI system will use as its knowledge source. In many real-world applications, this information is distributed across multiple systems. Organizations may store documentation in internal knowledge bases, PDFs, wikis, product manuals, or database records.&lt;/p&gt;

&lt;p&gt;The ingestion stage extracts textual information from these sources and prepares it for processing. Depending on the data format, ingestion may involve parsing HTML pages, converting PDFs to text, or querying APIs to retrieve structured records.&lt;/p&gt;

&lt;p&gt;At this stage, developers often implement preprocessing steps such as removing redundant formatting, normalizing whitespace, and filtering irrelevant sections. These steps are important because retrieval performance strongly depends on the quality of the text data stored in the system.&lt;/p&gt;
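&lt;p&gt;A minimal preprocessing sketch might look like the following; the function name and regex are illustrative, not taken from any specific library:&lt;/p&gt;

```python
import re

def clean_text(raw: str) -> str:
    """Minimal preprocessing sketch: collapse whitespace runs and trim edges."""
    text = re.sub(r"\s+", " ", raw)  # newlines, tabs, and repeats become one space
    return text.strip()

print(clean_text("  Product   docs\n\nupdated\tdaily.  "))
# → Product docs updated daily.
```

Real ingestion pipelines typically add further steps on top of this, such as stripping boilerplate navigation text or dropping empty sections.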

&lt;p&gt;For enterprise knowledge retrieval systems, ingestion pipelines are usually automated and scheduled. For example, an internal documentation chatbot might update its &lt;a href="https://docs.digitalocean.com/products/gradient-ai-platform/how-to/create-manage-agent-knowledge-bases/" rel="noopener noreferrer"&gt;knowledge base&lt;/a&gt; daily by ingesting the latest documentation changes from a repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Text Chunking: Preparing Documents for Retrieval
&lt;/h2&gt;

&lt;p&gt;After ingestion, documents must be divided into smaller pieces before they can be embedded. This step, known as &lt;a href="https://docs.digitalocean.com/products/gradient-ai-platform/concepts/chunking-strategies/" rel="noopener noreferrer"&gt;text chunking&lt;/a&gt;, plays a critical role in the overall performance of the RAG pipeline.&lt;/p&gt;

&lt;p&gt;Large documents cannot be embedded effectively because embedding models have token limits and because large chunks reduce retrieval precision. Instead, documents are broken into manageable segments that capture a coherent piece of information.&lt;/p&gt;

&lt;p&gt;Chunk size is typically chosen between 200 and 500 tokens. Smaller chunks provide more precise retrieval results, while larger chunks preserve more contextual information. Many production pipelines use overlapping chunks to prevent important sentences from being split across boundaries.&lt;/p&gt;

&lt;p&gt;The following diagram illustrates how a long document is transformed into multiple overlapping chunks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Original Document
-------------------------------------------------------
| Paragraph 1 | Paragraph 2 | Paragraph 3 | Paragraph 4 |
-------------------------------------------------------

After Chunking
-------------------------------------------------------
| Chunk 1 | Chunk 2 | Chunk 3 | Chunk 4 | Chunk 5 |
-------------------------------------------------------

Chunk Example
Chunk 1: Paragraph 1 + part of Paragraph 2
Chunk 2: Paragraph 2 + part of Paragraph 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Choosing an effective chunking strategy significantly improves retrieval accuracy because each chunk represents a focused semantic concept.&lt;/p&gt;
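&lt;p&gt;The overlapping strategy above can be sketched in plain Python. This word-based sliding window is illustrative only; production pipelines usually split on tokens or recursive separators:&lt;/p&gt;

```python
def chunk_words(text, chunk_size=50, overlap=10):
    """Sliding-window chunker sketch: fixed-size word windows with overlap."""
    words = text.split()
    step = chunk_size - overlap  # advance less than a full window to overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + chunk_size]
        if piece:
            chunks.append(" ".join(piece))
        if start + chunk_size >= len(words):
            break  # final window already covers the end of the document
    return chunks

doc = " ".join(f"w{i}" for i in range(120))
chunks = chunk_words(doc, chunk_size=50, overlap=10)
print(len(chunks))  # → 3
```

Each chunk shares its first ten words with the tail of the previous chunk, so a sentence falling on a boundary still appears intact in at least one chunk.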

&lt;h2&gt;
  
  
  Embedding Generation
&lt;/h2&gt;

&lt;p&gt;Once documents are divided into chunks, each chunk must be converted into a numerical representation called an embedding. Embeddings transform text into high-dimensional vectors that capture semantic meaning.&lt;/p&gt;

&lt;p&gt;For example, two sentences that express similar ideas will produce vectors that are close to each other in vector space. This property allows vector databases to retrieve semantically related text even when the wording differs.&lt;/p&gt;

&lt;p&gt;Embedding models are trained using large datasets and &lt;a href="https://www.digitalocean.com/community/tutorials/transformers-attention-is-all-you-need" rel="noopener noreferrer"&gt;transformer architectures&lt;/a&gt;. When a chunk is processed, the model generates a vector with hundreds or thousands of dimensions. These vectors serve as the foundation for similarity search.&lt;/p&gt;

&lt;p&gt;Embedding generation occurs during both indexing and retrieval. During indexing, embeddings are generated for each document chunk. During retrieval, the user’s query is also converted into an embedding so that it can be compared against stored vectors.&lt;/p&gt;

&lt;p&gt;This mechanism allows the RAG system to perform &lt;strong&gt;semantic search&lt;/strong&gt;, which is far more powerful than traditional keyword matching.&lt;/p&gt;
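&lt;p&gt;As a toy illustration of how similarity between embeddings is measured, the sketch below computes cosine similarity on hand-written three-dimensional vectors; real embedding models emit hundreds or thousands of dimensions:&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) divided by the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": the first document points in nearly the same direction
# as the query, the second points elsewhere.
query_vec = [0.1, 0.9, 0.2]
doc_close = [0.15, 0.85, 0.25]
doc_far = [0.9, 0.1, 0.0]

print(cosine_similarity(query_vec, doc_close) > cosine_similarity(query_vec, doc_far))
# → True
```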

&lt;h2&gt;
  
  
  Vector Embedding
&lt;/h2&gt;

&lt;p&gt;Vector embeddings are dense numerical representations of data such as text, images, or audio, capturing semantic meaning in a high-dimensional vector space. In an end-to-end RAG pipeline, embeddings convert both documents and user queries into vectors so that the similarity between them can be measured using metrics like cosine similarity. This allows the system to retrieve context based on meaning rather than exact keyword matches, making responses more accurate and relevant.&lt;/p&gt;

&lt;p&gt;For example, even if a query doesn’t contain the same words as a document, embeddings can still identify it as relevant if the underlying intent is similar. Popular embedding models used in RAG systems include &lt;a href="https://developers.openai.com/api/docs/models/text-embedding-3-large" rel="noopener noreferrer"&gt;text-embedding-3-large&lt;/a&gt;, &lt;a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2" rel="noopener noreferrer"&gt;all-MiniLM-L6-v2&lt;/a&gt;, &lt;a href="https://huggingface.co/BAAI/bge-large-en" rel="noopener noreferrer"&gt;bge-large-en&lt;/a&gt;, and &lt;a href="https://huggingface.co/intfloat/e5-large-v2" rel="noopener noreferrer"&gt;e5-large-v2&lt;/a&gt;, each offering different trade-offs in performance, cost, and deployment flexibility.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixgailx5konq18wkv1ev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixgailx5konq18wkv1ev.png" alt="Vector Embedding Workflow" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Storing Vectors in a Database
&lt;/h2&gt;

&lt;p&gt;After embeddings are created, they must be stored in a specialized database capable of performing fast similarity searches. These systems are known as &lt;strong&gt;vector databases&lt;/strong&gt; and form the core of the RAG retrieval infrastructure.&lt;/p&gt;

&lt;p&gt;Unlike traditional databases that index numeric or textual fields, vector databases are optimized to search across high-dimensional vectors. They use approximate nearest neighbor algorithms to identify vectors that are closest to a query embedding.&lt;/p&gt;

&lt;p&gt;The structure of a stored vector typically includes the embedding itself, the original text chunk, and metadata describing the source of the information. Metadata can include document identifiers, timestamps, or categories that allow filtering during retrieval.&lt;/p&gt;

&lt;p&gt;A simplified representation of vector storage looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Vector Database

ID     Vector Embedding        Text Chunk
---------------------------------------------------------
1   [0.12, -0.44, 0.92...]   "RAG combines retrieval..."
2   [0.55, 0.33, -0.14...]   "Vector databases enable..."
3   [-0.77, 0.08, 0.62...]   "Embeddings represent..."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Popular vector database technologies include managed services and open-source platforms designed specifically for AI workloads. The choice often depends on scale, infrastructure preferences, and latency requirements.&lt;/p&gt;
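&lt;p&gt;Conceptually, a vector database answers the question "which stored vectors are closest to this query vector?". The sketch below does this with exact brute-force cosine search over an in-memory list of records shaped like the table above; production databases replace the linear scan with approximate nearest neighbor indexes:&lt;/p&gt;

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical in-memory "vector store": each record pairs an embedding
# with its source chunk, mirroring the layout shown above.
store = [
    {"id": 1, "vector": [0.12, -0.44, 0.92], "text": "RAG combines retrieval..."},
    {"id": 2, "vector": [0.55, 0.33, -0.14], "text": "Vector databases enable..."},
    {"id": 3, "vector": [-0.77, 0.08, 0.62], "text": "Embeddings represent..."},
]

def search(query_vector, k=2):
    """Exact nearest-neighbor search: rank every record by similarity, keep top k."""
    ranked = sorted(store, key=lambda r: cosine(query_vector, r["vector"]), reverse=True)
    return ranked[:k]

for hit in search([0.10, -0.40, 0.90], k=1):
    print(hit["id"], hit["text"])  # → 1 RAG combines retrieval...
```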

&lt;h2&gt;
  
  
  Retrieval in a RAG Pipeline
&lt;/h2&gt;

&lt;p&gt;When a user submits a question, the system begins the retrieval stage. The query is first converted into an embedding using the same embedding model used during indexing. Maintaining the same embedding model is important because similarity comparisons rely on consistent vector representations.&lt;/p&gt;

&lt;p&gt;The query embedding is then sent to the vector database. The database performs a similarity search to find document chunks whose embeddings are closest to the query vector. These chunks represent the pieces of information most relevant to the user’s question.&lt;/p&gt;

&lt;p&gt;The retrieved chunks are then combined and passed to the language model as contextual input. The model uses this context to generate a response grounded in actual documents rather than relying solely on its training data.&lt;/p&gt;

&lt;p&gt;This process ensures that answers are based on real knowledge sources and can be updated whenever the underlying documents change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generation with a Large Language Model
&lt;/h2&gt;

&lt;p&gt;The final stage of the pipeline involves generating a response using a language model. At this point, the system already has two pieces of information: the user’s question and the retrieved context.&lt;/p&gt;

&lt;p&gt;These elements are combined into a prompt that instructs the model to answer the question using the provided information. Because the context is derived from authoritative documents, the model’s output becomes significantly more reliable and factual.&lt;/p&gt;

&lt;p&gt;This stage also allows developers to control how responses are generated. Prompts may instruct the model to summarize information, provide citations, or answer in a specific format. Some systems also include guardrails that prevent hallucinations or restrict responses to retrieved information.&lt;/p&gt;

&lt;p&gt;For example, if a user asks a question, the system first pulls the most relevant text from your knowledge base, then the LLM rewrites that content into a helpful answer, making it more conversational, structured, and easy to understand. This step is what makes RAG powerful, because it combines &lt;strong&gt;accurate, up-to-date information&lt;/strong&gt; with &lt;strong&gt;fluent natural language generation&lt;/strong&gt;, reducing hallucinations and improving answer quality.&lt;/p&gt;
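&lt;p&gt;One common way to combine the query and retrieved context is a templated prompt. The template below is a generic sketch, not a prescribed format; teams typically tailor the instructions, citation style, and refusal behavior to their application:&lt;/p&gt;

```python
def build_prompt(question, retrieved_chunks):
    """Assemble a grounded prompt: instructions, retrieved context, then the question."""
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

chunks = [
    "RAG retrieves documents at query time.",
    "Retrieved context is passed to the LLM.",
]
prompt = build_prompt("What is RAG?", chunks)
print(prompt)
```

The numbered context blocks make it easy to ask the model for citations such as "[1]" in its answer.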

&lt;h2&gt;
  
  
  Code Demo: Building a Simple End-to-End RAG Pipeline
&lt;/h2&gt;

&lt;p&gt;The following example demonstrates how a basic &lt;strong&gt;RAG pipeline for LLM applications&lt;/strong&gt; can be implemented in Python. The example uses document loading, chunking, embeddings, and a vector database to create a minimal working pipeline.&lt;/p&gt;

&lt;h4&gt;
  
  
  Install dependencies
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install langchain chromadb sentence-transformers openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Load documents
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import TextLoader

loader = TextLoader("knowledge_base.txt")
documents = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Split documents into chunks
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=100
)

chunks = splitter.split_documents(documents)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Generate embeddings
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Store vectors
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.vectorstores import Chroma

vector_db = Chroma.from_documents(
    documents=chunks,
    embedding=embeddings
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Retrieval and generation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vector_db.as_retriever()
)

response = qa_chain.run(
    "What is retrieval augmented generation?"
)

print(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple implementation demonstrates how document retrieval and language models can be combined into a working RAG system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Evaluating RAG System Performance
&lt;/h2&gt;

&lt;p&gt;Evaluating a RAG system is important because you need to be sure that it is not only retrieving the right information but also generating correct and useful answers from it. In simple terms, a good RAG pipeline should &lt;strong&gt;find the right content&lt;/strong&gt; and then &lt;strong&gt;explain it correctly&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;First, let’s look at &lt;strong&gt;retrieval evaluation&lt;/strong&gt;. This checks whether the system is pulling the right documents from your database. Imagine you have a knowledge base about cloud services, and a user asks, &lt;em&gt;“How can I run AI models on GPUs?”&lt;/em&gt;. If your system retrieves documents about &lt;a href="https://www.digitalocean.com/products/gradient/gpu-droplets" rel="noopener noreferrer"&gt;GPU Droplets&lt;/a&gt; or AI infrastructure, that’s a good sign. But if it returns unrelated content like pricing pages or networking docs, retrieval quality is poor. Metrics like &lt;em&gt;recall&lt;/em&gt; (did we find all relevant documents?) and &lt;em&gt;precision&lt;/em&gt; (were the retrieved documents actually relevant?) help measure this. For example, if 5 documents are relevant but your system only retrieves 2, recall is low.&lt;/p&gt;
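&lt;p&gt;The precision and recall described above take only a few lines of Python to compute; the document identifiers here are made up for illustration:&lt;/p&gt;

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant hits / retrieved count; recall = hits / all relevant docs."""
    hits = len(set(retrieved).intersection(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# The example above: 5 documents are relevant, but the system retrieves only 2 of them.
relevant = {"gpu-droplets", "ai-infra", "cuda-guide", "ml-quickstart", "gpu-pricing"}
retrieved = ["gpu-droplets", "ai-infra"]

precision, recall = precision_recall(retrieved, relevant)
print(precision, recall)  # → 1.0 0.4
```

Everything retrieved was relevant (precision 1.0), but most relevant documents were missed (recall 0.4), which matches the low-recall scenario described above.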

&lt;p&gt;Next is &lt;strong&gt;generation evaluation&lt;/strong&gt;, which focuses on the answer produced by the language model. Even if retrieval is correct, the model (like GPT-4 or Llama 3) might still generate incomplete or incorrect responses. For instance, if the retrieved document clearly says &lt;em&gt;“GPU droplets support CUDA workloads”&lt;/em&gt;, but the model responds with &lt;em&gt;“GPU support is limited”&lt;/em&gt;, that’s a problem. This is why human evaluation is often needed to check if the answer is &lt;strong&gt;factually correct, complete, and grounded in the provided context&lt;/strong&gt;. Automated metrics struggle to detect things like hallucinations or subtle inaccuracies.&lt;/p&gt;

&lt;p&gt;To make evaluation consistent, teams usually create an &lt;strong&gt;evaluation dataset&lt;/strong&gt;. This is a collection of sample questions along with their correct answers and sometimes the expected source documents. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Question: &lt;em&gt;“What are GPU droplets used for?”&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Expected answer: &lt;em&gt;“They are used for AI/ML workloads, training models, and high-performance computing.”&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can then run your RAG system on this dataset and compare its answers against the expected ones. Over time, this helps you track improvements, catch errors, and tune your system (for example, by improving chunking, choosing a better embedding model, or adjusting prompts).&lt;/p&gt;
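&lt;p&gt;A minimal evaluation harness might look like the sketch below. Both the answer_question stub and the keyword-overlap metric are placeholders: a real harness would call your actual retrieval chain and apply stronger grounding checks or human review:&lt;/p&gt;

```python
# Hypothetical evaluation dataset: questions paired with keywords a good answer
# should contain (a crude stand-in for full expected answers).
eval_dataset = [
    {
        "question": "What are GPU droplets used for?",
        "expected_keywords": ["AI", "workloads", "training"],
    },
]

def answer_question(question):
    # Stub for illustration; in practice this calls your RAG pipeline.
    return "GPU Droplets are used for AI/ML workloads, training models, and HPC."

def keyword_score(answer, keywords):
    """Crude proxy metric: fraction of expected keywords present in the answer."""
    answer_lower = answer.lower()
    found = [kw for kw in keywords if kw.lower() in answer_lower]
    return len(found) / len(keywords)

for case in eval_dataset:
    score = keyword_score(answer_question(case["question"]), case["expected_keywords"])
    print(case["question"], round(score, 2))
```

Running this harness after each pipeline change gives a consistent, repeatable signal of whether answers are improving or regressing.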

&lt;p&gt;In practice, strong RAG evaluation combines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval checks&lt;/strong&gt;: Did we fetch the right information?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Answer checks&lt;/strong&gt;: Did we explain it correctly?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous testing&lt;/strong&gt;: Are we improving over time?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures your RAG pipeline is reliable, accurate, and ready for real-world use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling and Production Considerations
&lt;/h2&gt;

&lt;p&gt;Prototype RAG pipelines often work well with small datasets, but production deployments introduce additional challenges. Large organizations may store millions of document chunks, requiring scalable infrastructure for indexing and retrieval.&lt;/p&gt;

&lt;p&gt;Latency also becomes an important concern. Vector searches, embedding generation, and LLM inference all contribute to response time. Developers must carefully optimize these components to ensure interactive performance.&lt;/p&gt;

&lt;p&gt;Production systems frequently incorporate caching layers, query batching, and efficient indexing strategies. Monitoring tools are also used to track retrieval accuracy, system latency, and cost per query.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost and Latency Optimization
&lt;/h2&gt;

&lt;p&gt;Operating a &lt;a href="https://www.digitalocean.com/community/conceptual-articles/rag-ai-agents-agentic-rag-comparative-analysis" rel="noopener noreferrer"&gt;RAG pipeline&lt;/a&gt; at scale can become expensive if not carefully optimized. Each query may require embedding generation, vector search, and language model inference.&lt;/p&gt;

&lt;p&gt;Several strategies help reduce these costs. Caching responses for frequently asked questions prevents repeated model inference. Limiting the number of retrieved chunks also reduces token usage and speeds up generation.&lt;/p&gt;
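&lt;p&gt;Response caching can be as simple as memoizing the full pipeline call. The sketch below wraps a stubbed pipeline function with Python's functools.lru_cache; a production cache would also need invalidation whenever the index is rebuilt:&lt;/p&gt;

```python
import functools

calls = {"count": 0}

def expensive_rag_call(question):
    # Stub standing in for embedding generation, vector search, and LLM inference.
    calls["count"] += 1
    return f"answer to: {question}"

@functools.lru_cache(maxsize=1024)
def cached_answer(question):
    """Memoize full responses so repeated questions skip the whole pipeline."""
    return expensive_rag_call(question)

cached_answer("What is RAG?")
cached_answer("What is RAG?")  # served from the cache; no second pipeline call
print(calls["count"])  # → 1
```

Exact-string memoization only helps with identical repeats; some systems go further and cache by semantic similarity of the query embedding.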

&lt;p&gt;Another important technique is &lt;strong&gt;re-ranking&lt;/strong&gt;. Instead of sending many retrieved documents to the language model, a re-ranking model selects the most relevant passages before generation. This improves response quality while reducing computational overhead.&lt;/p&gt;
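&lt;p&gt;A re-ranking step can be sketched as: retrieve a broad candidate set, score each passage against the query, and keep only the top few. The word-overlap scorer below is a toy stand-in for a real cross-encoder re-ranking model:&lt;/p&gt;

```python
def tokenize(text):
    """Lowercase and strip punctuation so 'generation.' matches 'generation'."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in text.lower())
    return set(cleaned.split())

def overlap_score(query, passage):
    """Toy relevance score: count of shared words with the query."""
    return len(tokenize(query).intersection(tokenize(passage)))

def rerank(query, passages, keep=2):
    """Keep only the highest-scoring passages before sending context to the LLM."""
    ranked = sorted(passages, key=lambda p: overlap_score(query, p), reverse=True)
    return ranked[:keep]

candidates = [
    "Pricing pages for droplets and storage.",
    "RAG systems retrieve documents before generation.",
    "Retrieval augmented generation grounds LLM answers in documents.",
]
top = rerank("how does retrieval augmented generation work", candidates, keep=2)
print(top)
```

Sending only the two strongest passages to the model trims token usage while dropping the least relevant candidate.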

&lt;h2&gt;
  
  
  RAG vs Fine-Tuning
&lt;/h2&gt;

&lt;p&gt;A common question among developers is whether to use retrieval-augmented generation or fine-tuning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/fine-tuning-llms-on-budget-digitalocean-gpu" rel="noopener noreferrer"&gt;Fine-tuning&lt;/a&gt; changes a model’s internal weights by training it on additional datasets. This approach works well for teaching models specific styles or behaviors. However, it is less effective for continuously changing knowledge because retraining the model is expensive and time-consuming.&lt;/p&gt;

&lt;p&gt;RAG systems take a different approach by keeping the model unchanged while retrieving knowledge dynamically. This makes them ideal for applications where information changes frequently, such as product documentation or customer support knowledge bases.&lt;/p&gt;

&lt;p&gt;For most knowledge-intensive applications, RAG provides a more flexible and maintainable solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building an end-to-end RAG pipeline is about combining the strengths of retrieval systems and large language models to create applications that are both accurate and context-aware. Instead of relying only on pre-trained knowledge, a RAG system can fetch relevant information in real time and use models like GPT-4 or Llama 3 to generate clear, human-like responses grounded in that data. In this article, we walked through each step of building a RAG pipeline, from data ingestion and chunking to vector embeddings, retrieval, and response generation. Each component plays a critical role, and even small improvements (like better chunking strategies or choosing the right embedding model) can significantly impact overall performance. As organizations continue to build AI-powered applications, RAG stands out as a practical and scalable approach for use cases like chatbots, knowledge assistants, and document search. By continuously evaluating and refining your pipeline, you can create systems that are not only intelligent but also reliable and production-ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/resources/articles/rag" rel="noopener noreferrer"&gt;What is Retrieval Augmented Generation (RAG)? The Key to Smarter, More Accurate AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/conceptual-articles/rag-ai-agents-agentic-rag-comparative-analysis" rel="noopener noreferrer"&gt;RAG, AI Agents, and Agentic RAG: An In-Depth Review and Comparative Analysis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/beyond-vectors-knowledge-graphs-and-rag" rel="noopener noreferrer"&gt;Beyond Vectors - Knowledge Graphs &amp;amp; RAG Using Gradient&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.langchain.com/" rel="noopener noreferrer"&gt;Langchain docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rag</category>
      <category>tutorial</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Tutorial: Deploy NVIDIA's NemoClaw in One Click</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Mon, 23 Mar 2026 18:28:14 +0000</pubDate>
      <link>https://forem.com/digitalocean/how-to-set-up-nemoclaw-on-a-digitalocean-droplet-with-1-click-1lo4</link>
      <guid>https://forem.com/digitalocean/how-to-set-up-nemoclaw-on-a-digitalocean-droplet-with-1-click-1lo4</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally written by Amit Jotwani (Staff Developer Advocate at DigitalOcean)&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Key Takeaways
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;NemoClaw is an open-source stack from NVIDIA designed to help developers run OpenClaw securely. &lt;/li&gt;
&lt;li&gt;DigitalOcean offers a NemoClaw 1-Click Droplet that lets you set up this stack on a CPU-Optimized virtual machine. &lt;/li&gt;
&lt;li&gt;This tutorial shows how to SSH into your Droplet, configure inference settings and policies, connect to NemoClaw, and reconnect after the initial setup.
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;At GTC 2026, NVIDIA announced &lt;a href="https://nvidianews.nvidia.com/news/nvidia-announces-nemoclaw" rel="noopener noreferrer"&gt;NemoClaw&lt;/a&gt;, an open-source stack that makes it easy to run &lt;a href="https://openclaw.com/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; autonomous agents securely. OpenClaw is an open-source agent platform that Jensen Huang called “the operating system for personal AI.” We covered &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-run-openclaw" rel="noopener noreferrer"&gt;how to run OpenClaw on a Droplet&lt;/a&gt; in an earlier tutorial. NemoClaw takes a different approach — it wraps OpenClaw with sandboxing, security policies, and inference routing through NVIDIA’s cloud.&lt;/p&gt;

&lt;p&gt;NemoClaw is still in alpha, so expect rough edges. Interfaces may change, features might be incomplete, and things could break. But if you’re curious to try it out or just want to see what NVIDIA’s vision for agents looks like, this tutorial will get you up and running on a DigitalOcean Droplet in under 10 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin, you’ll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A DigitalOcean account (&lt;a href="https://cloud.digitalocean.com/registrations/new" rel="noopener noreferrer"&gt;sign up here&lt;/a&gt; if you don’t have one)&lt;/li&gt;
&lt;li&gt;An NVIDIA account to generate an API key at &lt;a href="https://build.nvidia.com/settings/api-keys" rel="noopener noreferrer"&gt;build.nvidia.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 1 - Create a Droplet from the Marketplace
&lt;/h2&gt;

&lt;p&gt;Head to the NemoClaw 1-Click Droplet on the DigitalOcean Marketplace. Click &lt;strong&gt;Create NemoClaw Droplet&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When configuring the Droplet, select the &lt;strong&gt;CPU-Optimized&lt;/strong&gt; plan with &lt;strong&gt;Premium Intel&lt;/strong&gt;. You’ll want the option with &lt;strong&gt;32 GB of RAM and 16 CPUs&lt;/strong&gt;. NemoClaw runs Docker containers, a Kubernetes cluster (k3s), and the OpenShell gateway, so it needs the headroom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf3xcfukamdj8d0kidh1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf3xcfukamdj8d0kidh1.png" alt="Droplet Configuration Settings" width="800" height="691"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pick a data center region near you, add your SSH key, and hit &lt;strong&gt;Create Droplet&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Heads up: This Droplet costs $336/mo, so make sure to destroy it when you’re done experimenting. It adds up fast if you forget about it.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2 - SSH into the Droplet
&lt;/h2&gt;

&lt;p&gt;Once your Droplet is ready, SSH in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ssh"&gt;&lt;code&gt;&lt;span class="k"&gt;ssh&lt;/span&gt; root@your_server_ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see the usual Ubuntu login banner, and then the NemoClaw onboarding wizard will kick off automatically. It runs through a series of preflight checks, making sure Docker is running, installing the OpenShell CLI, and spinning up the gateway. You’ll see checkmarks fly by as each step completes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9zq2u6f7fiedqcrj91w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9zq2u6f7fiedqcrj91w.png" alt="Onboarding checks" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3 - Walk Through the Onboard Wizard
&lt;/h2&gt;

&lt;p&gt;The onboarding wizard will ask you a few things. Here’s what to do at each prompt:&lt;/p&gt;

&lt;h3&gt;
  
  
  Sandbox Name
&lt;/h3&gt;

&lt;p&gt;The first prompt asks for a sandbox name. Just press &lt;strong&gt;Enter&lt;/strong&gt; to accept the default (&lt;code&gt;my-assistant&lt;/code&gt;). The wizard will then create the sandbox, build the container image, and push it to the gateway. This takes a couple of minutes, and you’ll see it run through about 20 steps as it builds and uploads everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  NVIDIA API Key
&lt;/h3&gt;

&lt;p&gt;Once the sandbox is ready, the wizard asks for your NVIDIA API key. In this setup, inference is routed through NVIDIA’s cloud using the &lt;code&gt;nvidia/nemotron-3-super-120b-a12b&lt;/code&gt; model, so it needs a key to authenticate.&lt;/p&gt;

&lt;p&gt;To get your key, head to &lt;a href="https://build.nvidia.com/settings/api-keys" rel="noopener noreferrer"&gt;build.nvidia.com/settings/api-keys&lt;/a&gt;, sign in, and click &lt;strong&gt;Generate API Key&lt;/strong&gt;. Give it a name, pick an expiration, and hit &lt;strong&gt;Generate Key&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffkfetz0bbqstz3ea9a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffkfetz0bbqstz3ea9a3.png" alt="NVIDIA API Key generation" width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the key (it starts with &lt;code&gt;nvapi-&lt;/code&gt;), paste it into the terminal prompt, and press &lt;strong&gt;Enter&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcisdgrdv3g5qk78pn0ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcisdgrdv3g5qk78pn0ti.png" alt="NVIDIA API key integration" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The wizard saves the key to &lt;code&gt;~/.nemoclaw/credentials.json&lt;/code&gt; and sets up the inference provider. You’ll see it confirm the model and create an inference route.&lt;/p&gt;

&lt;h3&gt;
  
  
  Policy Presets
&lt;/h3&gt;

&lt;p&gt;After the inference setup, NemoClaw sets up OpenClaw inside the sandbox and then asks about policy presets. You’ll see a list of available presets including Discord, Docker Hub, Hugging Face, Jira, npm, PyPI, Slack, and more. These control what external services the agent is allowed to reach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzr3abqzhmec2dawimv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzr3abqzhmec2dawimv2.png" alt="Onboarding policy presets" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the bottom, the wizard asks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply suggested presets (pypi, npm)? [Y/n/list]:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Type &lt;code&gt;n&lt;/code&gt; and press &lt;strong&gt;Enter&lt;/strong&gt;. These presets grant the sandbox network access to package registries, which you don’t need for a basic setup. You can always add them later if your agent needs to install packages.&lt;/p&gt;

&lt;p&gt;Once onboarding finishes, you’ll see a clean summary with your sandbox details and the commands you’ll need going forward:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxv3xi2k87w2wyolgqfku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxv3xi2k87w2wyolgqfku.png" alt="Onboarding complete" width="800" height="530"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sandbox    my-assistant (Landlock + seccomp + netns)
Model      nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
NIM        not running

Run:       nemoclaw my-assistant connect
Status:    nemoclaw my-assistant status
Logs:      nemoclaw my-assistant logs --follow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4 - Connect to NemoClaw
&lt;/h2&gt;

&lt;p&gt;Now for the fun part. Connect to your sandbox.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nemoclaw my-assistant connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This drops you into a shell inside the sandboxed environment. From here, launch the OpenClaw TUI (terminal user interface):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw tui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. You should see the OpenClaw chat interface come up. The agent will greet you and introduce itself, ready to chat.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc2n1gyftn9k6eibpy34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc2n1gyftn9k6eibpy34.png" alt="OpenClaw TUI" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Type a message and hit &lt;strong&gt;Enter&lt;/strong&gt;. You’re now talking to an AI agent running inside a secure, sandboxed environment on your own Droplet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reconnecting After a New SSH Session
&lt;/h2&gt;

&lt;p&gt;If you close your terminal and SSH back into the Droplet later, you’ll find that &lt;code&gt;nemoclaw&lt;/code&gt; and related commands aren’t available. That’s because the onboarding script installed everything through nvm in a separate shell, and that doesn’t carry over to new sessions.&lt;/p&gt;

&lt;p&gt;Run this once to fix it permanently. It adds nvm to your &lt;code&gt;.bashrc&lt;/code&gt; so it loads automatically on every login:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'export NVM_DIR="$HOME/.nvm"'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'[ -s "$NVM_DIR/nvm.sh" ] &amp;amp;&amp;amp; \. "$NVM_DIR/nvm.sh"'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'[ -s "$NVM_DIR/bash_completion" ] &amp;amp;&amp;amp; \. "$NVM_DIR/bash_completion"'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then reconnect to your sandbox and launch the TUI the same way as before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nemoclaw my-assistant connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw tui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v53w5esybr80ypsbwtt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v53w5esybr80ypsbwtt.png" alt="Sandbox reload" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything picks up right where you left off. Your sandbox and agent are still running.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;By default, the sandbox has limited network access, so the agent can’t reach external services out of the box. To unlock more capabilities - like connecting to Slack, GitHub, or pulling packages from PyPI - you’ll want to configure policy presets. Check the NemoClaw documentation for the full list of available integrations and how to set them up.&lt;/p&gt;

&lt;p&gt;NemoClaw is still very early, so expect things to be rough around the edges. But if you want to get a feel for where always-on agents are headed, this is a good way to start poking around.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://marketplace.digitalocean.com/apps/nemoclaw-alpha" rel="noopener noreferrer"&gt;NemoClaw 1-Click Droplet on DigitalOcean Marketplace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/NemoClaw/" rel="noopener noreferrer"&gt;NemoClaw GitHub Repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nvidia.com/nemoclaw/latest/" rel="noopener noreferrer"&gt;NemoClaw Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nvidianews.nvidia.com/news/nvidia-announces-nemoclaw" rel="noopener noreferrer"&gt;NVIDIA NemoClaw Announcement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openclaw.com/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-run-openclaw" rel="noopener noreferrer"&gt;How to Run OpenClaw on a DigitalOcean Droplet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://build.nvidia.com/settings/api-keys" rel="noopener noreferrer"&gt;NVIDIA API Keys&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>tutorial</category>
      <category>nemoclaw</category>
      <category>ai</category>
      <category>nvidia</category>
    </item>
    <item>
      <title>GPT 5.3 Codex is the Next Level for Agentic Coding</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Thu, 19 Mar 2026 20:00:00 +0000</pubDate>
      <link>https://forem.com/digitalocean/gpt-53-codex-is-the-next-level-for-agentic-coding-52kl</link>
      <guid>https://forem.com/digitalocean/gpt-53-codex-is-the-next-level-for-agentic-coding-52kl</guid>
      <description>&lt;p&gt;Agentic Coding models are one of the obvious and most impressive applications of LLM technologies, and their development has gone hand in hand with massive impacts to markets and job growth. There are numerous players vying to create the best new LLM for all sorts of applications, and many would argue no company and their products in this space have more of a significant impact than OpenAI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/" rel="noopener noreferrer"&gt;GPT‑5.3‑Codex&lt;/a&gt; is a truly impressive installment in this quest to create the best model. &lt;a href="https://openai.com" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; promises that GPT-5.3-Codex is their most &lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/" rel="noopener noreferrer"&gt;capable Codex model&lt;/a&gt; yet, advancing both coding performance and professional reasoning beyond GPT-5.2-Codex. Benchmark results show state-of-the-art performance on coding and agentic benchmarks like SWE-Bench Pro and Terminal-Bench, reflecting stronger multi-language and real-world task ability. Furthermore, the model is ~25% faster than &lt;a href="https://openai.com/index/introducing-gpt-5-2-codex/" rel="noopener noreferrer"&gt;GPT-5.2-Codex&lt;/a&gt; for &lt;a href="https://openai.com/codex/" rel="noopener noreferrer"&gt;Codex&lt;/a&gt; users thanks to infrastructure and inference improvements. Overall, GPT‑5.3‑Codex might be the most powerful agentic coding model ever released (&lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;So let’s see what it can do. The model is now available on the &lt;a href="https://www.digitalocean.com/products/gradient/platform" rel="noopener noreferrer"&gt;DigitalOcean GradientTM AI Platform&lt;/a&gt; and across OpenAI’s ChatGPT and Codex products, so we can test how it performs. In this tutorial, we will show how to use Codex to write a completely new project from scratch: a &lt;a href="https://huggingface.co/Tongyi-MAI/Z-Image-Turbo" rel="noopener noreferrer"&gt;Z-Image-Turbo&lt;/a&gt; real-time image-to-image application built with GPT‑5.3‑Codex, without any hand-written code. Follow along to learn what GPT‑5.3‑Codex has to offer, how to use it yourself, and how to vibe code new web applications from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;State-of-the-Art Agentic Performance: GPT-5.3-Codex delivers impressive results across software engineering and agentic tasks, outperforming GPT-5.2-Codex in reasoning, multi-language capability, and real-world coding evaluations like SWE-Bench Pro and Terminal-Bench 2.0.&lt;/li&gt;
&lt;li&gt;Getting started with GPT-5.3-Codex on the GradientTM AI Platform is easy: all you need is access to the DigitalOcean platform to begin integrating LLM calls seamlessly into your workflows at scale.&lt;/li&gt;
&lt;li&gt;From Prototype to Production in Record Time: With roughly 25% improved speed and real-time interactive steering, GPT-5.3-Codex feels less like a static generator and more like a responsive engineering partner capable of iterating, debugging, and refining projects alongside you. By handling scaffolding, architecture decisions, edge cases, and deployment-ready details, GPT-5.3-Codex can dramatically compress development timelines, making it possible to ship fully functional applications from scratch more quickly than ever (&lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  GPT‑5.3‑Codex Overview
&lt;/h2&gt;

&lt;p&gt;GPT-5.3-Codex is a major agentic coding model upgrade that combines stronger reasoning and professional knowledge with enhanced coding performance, runs about 25% faster than GPT-5.2-Codex, and excels on real-world and multi-language benchmarks like &lt;a href="https://scale.com/leaderboard/swe_bench_pro_public" rel="noopener noreferrer"&gt;SWE-Bench Pro&lt;/a&gt; and &lt;a href="https://www.tbench.ai/" rel="noopener noreferrer"&gt;Terminal-Bench&lt;/a&gt;. It’s designed to go beyond simple code generation to support full software lifecycle tasks (e.g., debugging, deployment, documentation) and lets you interact and steer it in real time while it’s working, making it feel more like a collaborative partner than a generator. It also has expanded capabilities for long-running work and improved responsiveness, with broader availability across IDEs, CLI, and apps for paid plans. (&lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6s3njnozmwe93mtdvfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6s3njnozmwe93mtdvfg.png" alt="image" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the table above shows, GPT‑5.3‑Codex is a major step forward over GPT‑5.2‑Codex across software engineering, agentic, and computer-use benchmarks. Paired with the marked improvement in efficiency, this makes a strong case for the model. We think it is a significant upgrade for existing GPT Codex users, as well as for new users looking for a powerful agentic coding tool to aid their process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with GPT-5.3-Codex
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh22frckrami4z84ep59l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh22frckrami4z84ep59l.png" alt="image" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are two ways we recommend developers get started with GPT-5.3-Codex. The first is accessing the model with Serverless Inference through the &lt;a href="https://www.digitalocean.com/products/gradient/platform" rel="noopener noreferrer"&gt;GradientTM AI Platform&lt;/a&gt;. With Serverless Inference, you can integrate LLM generations into any Python pipeline: all you need to do is create a model access key and begin generating. For more information on getting started, check out the official &lt;a href="https://docs.digitalocean.com/products/gradient-ai-platform/how-to/use-serverless-inference/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
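&lt;p&gt;As a concrete illustration, the sketch below builds an OpenAI-style chat-completions request with nothing but the Python standard library. Treat it as a hedged sketch rather than the official client: the endpoint URL and model id are placeholder assumptions, so check the Serverless Inference documentation linked above for the actual values, and supply your model access key via an environment variable.&lt;/p&gt;

```python
import json
import os
import urllib.request

# Assumptions: both values below are illustrative placeholders -- confirm the
# real endpoint and model id in the Gradient AI Platform documentation.
GRADIENT_URL = "https://inference.do-ai.run/v1/chat/completions"
MODEL_ID = "openai-gpt-5.3-codex"  # hypothetical model id

def build_request(prompt, model=MODEL_ID, temperature=0.2):
    """Build an OpenAI-compatible chat-completions request for serverless inference."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    req = urllib.request.Request(
        GRADIENT_URL,
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            # Model access key created in the Gradient control panel:
            "Authorization": f"Bearer {os.environ.get('GRADIENT_MODEL_ACCESS_KEY', '')}",
        },
    )
    return req, body

req, body = build_request("Write a Python function that reverses a string.")
# resp = urllib.request.urlopen(req)  # uncomment to actually send the request
# print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the API is OpenAI-compatible, the official `openai` Python package should also work by pointing its `base_url` at the Gradient endpoint and passing the model access key as the API key.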

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffurv5tcadtlwz8jloy21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffurv5tcadtlwz8jloy21.png" alt="image" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other way to get started quickly is with the official OpenAI Codex application. Simply download the application onto your computer and launch it. You will then be prompted to log in to your account. From there, choose which project you wish to work in, and you’re ready to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Coding a Z-Image-Turbo Web Application with GPT‑5.3‑Codex
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevd2jw8py8w20fzi25x1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevd2jw8py8w20fzi25x1.gif" alt="image" width="560" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So now that we have heard about how GPT‑5.3‑Codex performs, let’s see it in action. For this experiment, we sought to see how the model performed on a relatively novel assignment that has a basis in past applications. In this case, we asked it to create a real-time image-to-image pipeline for Z-Image-Turbo that uses webcam footage as image input.&lt;/p&gt;

&lt;p&gt;To do this, we created a blank new directory/project space to work in. We then asked the model to create a skeleton of the project to begin, and then iteratively added in the missing features on subsequent queries. Overall, we were able to create a full working version of the application with just 5 prompts and 30 minutes of testing. This extreme speed made it possible to ship the project in less than a day, from inspiration to completion. Now let’s take a closer look at the application project itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau60yz6xtsq15q936e6e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau60yz6xtsq15q936e6e.png" alt="image" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This project, which can be found &lt;a href="https://github.com/Jameshskelton/z-image-turbo-realtime" rel="noopener noreferrer"&gt;here&lt;/a&gt;, is a real-time, webcam-driven image-to-image generation application built in Python around a &lt;a href="https://www.gradio.app/" rel="noopener noreferrer"&gt;Gradio&lt;/a&gt; interface and a dedicated Z-Image-Turbo inference engine. The UI in app.py presents side-by-side live input and generated output panes, parameter controls, and explicit Start/Stop gating so inference only runs when requested. The backend in inference.py loads Tongyi-MAI/Z-Image-Turbo via ZImageImg2ImgPipeline, introspects the pipeline signature to bind the correct image-conditioning argument, enforces true img2img semantics instead of prompt-only generation, and executes inference in torch.inference_mode() with dynamic argument wiring so behavior adapts to the installed diffusers API. Critically, it computes a per-frame target resolution from the webcam aspect ratio, snapping dimensions to a model-friendly multiple (default 16) and capping both sides below 1024. It then applies post-generation safeguards that made the app stable in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a dtype strategy (auto, preferring bf16 then fp32, avoiding fp16 black-frame failure modes)&lt;/li&gt;
&lt;li&gt;degenerate-output detection with automatic float32 recovery&lt;/li&gt;
&lt;li&gt;robust PIL/NumPy/Tensor output decoding and normalization&lt;/li&gt;
&lt;li&gt;effective-strength clamping to preserve source structure&lt;/li&gt;
&lt;li&gt;frame-hash seed mixing so scene changes influence results&lt;/li&gt;
&lt;li&gt;configurable structure-preserving input blending&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this is parameterized in config.py and documented in the &lt;a href="https://github.com/Jameshskelton/z-image-turbo-realtime?tab=readme-ov-file#readme" rel="noopener noreferrer"&gt;README.md&lt;/a&gt;, with runtime status reporting latency plus internal diagnostics (pipe, dtype, size, effective strength, blend, seed, warnings) so you can observe exactly how each frame is being processed.&lt;/p&gt;
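&lt;p&gt;The per-frame resolution logic described above (snap both sides to a multiple of 16, keep them below 1024, and roughly preserve the webcam aspect ratio) can be sketched in a few lines. This is a hypothetical re-implementation for illustration, not the repository’s actual code; the function name and defaults are our own.&lt;/p&gt;

```python
def snap_resolution(width, height, multiple=16, cap=1024):
    """Map a webcam frame size to a model-friendly target resolution:
    scale down if needed so both sides stay strictly below `cap`,
    then snap each side down to the nearest multiple of `multiple`.
    (Hypothetical sketch of the behavior described above.)"""
    limit = cap - multiple  # largest allowed multiple strictly below the cap
    scale = 1.0
    if max(width, height) > limit:
        # Scale both sides by the same factor to roughly preserve aspect ratio.
        scale = limit / max(width, height)
    w = max(multiple, int(width * scale) // multiple * multiple)
    h = max(multiple, int(height * scale) // multiple * multiple)
    return w, h

# A 1280x720 webcam frame becomes a sub-1024, multiple-of-16 resolution.
print(snap_resolution(1280, 720))
```

Snapping to a multiple of 16 matters because diffusion pipelines typically require dimensions divisible by the VAE/patch downsampling factor, and capping the size keeps per-frame latency low enough for a real-time loop.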

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;GPT-5.3-Codex feels less like an incremental update and more like a meaningful shift in how developers interact with code. The combination of stronger reasoning, benchmark gains seen in testing, and a noticeable speed improvement makes it clear that agentic coding is maturing into something even more production-ready. What once required hours of boilerplate, debugging, and manual wiring can now be orchestrated through iterative prompts and high-level direction. As we demonstrated with the Z-Image-Turbo real-time application, a fully functional project can move from blank directory to working prototype in far less time than traditionally required. Your actual results will vary with project requirements, complexity, and individual developer workflows, but we are confident that GPT-5.3-Codex represents a substantial upgrade and a meaningful step forward in agentic coding capability.&lt;/p&gt;

&lt;p&gt;We recommend trying out GPT-5.3-Codex in all contexts, especially with &lt;a href="https://www.digitalocean.com/products/gradient/platform" rel="noopener noreferrer"&gt;DigitalOcean’s GradientTM AI Platform&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>coding</category>
      <category>tutorial</category>
      <category>codex</category>
    </item>
    <item>
      <title>Getting Started with Qwen3.5 Vision-Language Models</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Tue, 17 Mar 2026 16:00:00 +0000</pubDate>
      <link>https://forem.com/digitalocean/getting-started-with-qwen35-vision-language-models-3ej3</link>
      <guid>https://forem.com/digitalocean/getting-started-with-qwen35-vision-language-models-3ej3</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally written by James Skelton (Senior AI/ML Technical Content Strategist II)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/visualizing-vision-language-models-multimodal-reasoning" rel="noopener noreferrer"&gt;Vision Language models&lt;/a&gt; are one of the most powerful and highest potential applications of deep learning technologies. The reasoning behind such a strong assertion lies in the versatility of VL modeling: from document understanding to object tracking to image captioning, vision language models are likely going to be the building blocks of the incipient, physical AI future. This is because everything that we can interact with that will be powered by AI - from robots to driverless vehicles to medical assistants - will likely have a VL model in its pipeline.&lt;/p&gt;

&lt;p&gt;This is why the power of open-source development is so important to all of these disciplines and applications of AI, and why we are so excited about the release of &lt;a href="https://qwen.ai/blog?id=qwen3.5" rel="noopener noreferrer"&gt;Qwen3.5&lt;/a&gt; from the Qwen Team. This &lt;a href="https://huggingface.co/collections/Qwen/qwen35" rel="noopener noreferrer"&gt;suite of completely open-source VL models&lt;/a&gt;, ranging in size from 0.8B to 397B parameters (with 17B activated), is the clear next step forward for VL modeling. The models excel at benchmarks for everything from agentic coding to computer use to document understanding, and nearly match their closed-source rivals in capability.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will examine Qwen3.5 and show how to make the best use of it using a &lt;a href="https://www.digitalocean.com/products/gradient/gpu-droplets" rel="noopener noreferrer"&gt;GradientTM GPU Droplet&lt;/a&gt;. Follow along for explicit instructions on how to set up and run your GPU Droplet so it can serve Qwen3.5 and power applications like Claude Code and Codex using your own resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Qwen3.5 VL demonstrates the growing power of open &lt;a href="https://www.digitalocean.com/solutions/multimodal-ai" rel="noopener noreferrer"&gt;multimodal AI&lt;/a&gt;. The fully open-source model suite spans from 0.8B to 397B parameters and achieves strong benchmark performance across tasks like coding, document understanding, and computer interaction, approaching the capabilities of leading proprietary models.&lt;/li&gt;
&lt;li&gt;Its architecture enables efficient large-scale multimodal training. By decoupling vision and language parallelism strategies, using sparse activations, and employing an FP8 training pipeline, Qwen3.5 improves hardware utilization, reduces memory usage, and maintains high throughput even when training on mixed text, image, and video data.&lt;/li&gt;
&lt;li&gt;Developers can deploy Qwen3.5 on their own infrastructure. With tools like Ollama and GPU Droplets, it is possible to run large Qwen3.5 models locally or in the cloud to power applications such as coding assistants, computer-use agents, and custom AI tools without relying on proprietary APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Qwen3.5: Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3v5lob56ux6d9h1yzny.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3v5lob56ux6d9h1yzny.jpg" alt="image" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Qwen3.5 is a fascinating model suite with a unique architecture. It “enables efficient native multimodal training via a heterogeneous infrastructure that decouples parallelism strategies across vision and language components” (&lt;a href="https://qwen.ai/blog?id=qwen3.5" rel="noopener noreferrer"&gt;Source&lt;/a&gt;). This design avoids the inefficiencies of uniform approaches, such as over-allocating compute to lighter modalities, synchronization bottlenecks between the vision and language towers, memory imbalance across devices, and reduced scaling efficiency when both modalities are forced into the same parallelism strategy.&lt;/p&gt;

&lt;p&gt;By leveraging sparse activations to enable overlapping computation across model components, the system reaches nearly the same training throughput as pure text-only baselines even when trained on mixed text, image, and video datasets. Alongside this, a native FP8 training pipeline applies low-precision computation to activations, Mixture-of-Experts (MoE) routing, and GEMM operations. Runtime monitoring dynamically preserves BF16 precision in numerically sensitive layers, reducing activation memory usage by roughly 50% and delivering more than a 10% training speed improvement while maintaining stable scaling to tens of trillions of tokens.&lt;/p&gt;
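&lt;p&gt;A back-of-envelope sketch (with illustrative numbers, not figures from the Qwen report) shows why storing activations in FP8 roughly halves memory relative to BF16:&lt;/p&gt;

```python
# Illustrative only: FP8 stores 1 byte per activation value, BF16 stores 2.
def activation_memory_gb(num_values: int, bytes_per_value: int) -> float:
    """Return activation memory in GB for a given element count and precision."""
    return num_values * bytes_per_value / 1e9

n = 4_000_000_000  # hypothetical number of activation values held per step
bf16_gb = activation_memory_gb(n, 2)  # BF16: 2 bytes per value
fp8_gb = activation_memory_gb(n, 1)   # FP8: 1 byte per value
assert fp8_gb / bf16_gb == 0.5        # matches the ~50% activation-memory reduction
```

In practice the savings are approximate, since numerically sensitive layers are kept in BF16 by the runtime monitoring described above.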

&lt;p&gt;To further leverage reinforcement learning at scale, the team developed an asynchronous RL framework capable of training Qwen3.5 models across all sizes, supporting text-only, multimodal, and multi-turn interaction settings. The system uses a fully disaggregated &lt;a href="https://www.digitalocean.com/community/tutorials/llm-inference-optimization" rel="noopener noreferrer"&gt;training–inference architecture&lt;/a&gt;, allowing training and rollout generation to run independently while improving hardware utilization, enabling dynamic load balancing, and supporting fine-grained fault recovery. Through techniques such as end-to-end FP8 training, rollout router replay, speculative decoding, and multi-turn rollout locking, the framework increases throughput while maintaining strong consistency between training and inference behavior.&lt;/p&gt;

&lt;p&gt;This system–algorithm co-design also constrains gradient staleness and reduces data skew during asynchronous updates, preserving both training stability and model performance. In addition, the framework is built to support agentic workflows natively, enabling uninterrupted multi-turn interactions within complex environments. Its decoupled architecture can scale to millions of concurrent agent scaffolds and environments, which helps improve generalization during training. Together, these optimizations produce a 3×–5× improvement in end-to-end training speed while maintaining strong stability, efficiency, and scalability (&lt;a href="https://qwen.ai/blog?id=qwen3.5" rel="noopener noreferrer"&gt;Source&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Qwen3.5 Demo
&lt;/h2&gt;

&lt;p&gt;Getting started with Qwen3.5 is very simple. Thanks to the foresight of the Qwen Team and their collaborators, there are numerous ways to access and run the models in the Qwen3.5 suite from your own machine. Of course, running the larger models will require significantly more computational resources: we recommend at least an 8x &lt;a href="https://www.digitalocean.com/community/tutorials/nvidia-h200-gpu-droplet" rel="noopener noreferrer"&gt;NVIDIA H200&lt;/a&gt; setup for the largest models, though a single H200 is sufficient for this tutorial. We are going to use Ollama to power &lt;a href="https://huggingface.co/Qwen/Qwen3.5-122B-A10B" rel="noopener noreferrer"&gt;Qwen3.5-122B-A10B&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To get started, simply start up a GPU Droplet with an NVIDIA H200 with your &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server" rel="noopener noreferrer"&gt;SSH key&lt;/a&gt; attached, and SSH in using the terminal on your local machine. From there, navigate to the base directory of your choice. Create a new directory with &lt;code&gt;mkdir&lt;/code&gt; to represent your new workspace, and change into the directory.&lt;/p&gt;
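&lt;p&gt;Those steps look like this in practice (the SSH address is a placeholder, and the directory name is arbitrary):&lt;/p&gt;

```shell
# from your local machine: SSH into the Droplet (replace with your Droplet's IP)
# ssh root@your_droplet_ip

# on the Droplet: create a workspace directory and change into it
mkdir -p qwen-workspace
cd qwen-workspace
```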

&lt;h3&gt;
  
  
  Creating a custom game with Qwen3.5 running on Ollama and Claude Code
&lt;/h3&gt;

&lt;p&gt;For this demo, we are going to do something simple: create a Python-based video game of curling, one of the most popular Winter Olympics sports. To get started, paste the following code into the remote terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
ollama launch claude &lt;span class="nt"&gt;--model&lt;/span&gt; qwen3.5:122b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop1la5cjyv0riseeoleb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop1la5cjyv0riseeoleb.png" alt="image" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will launch Claude Code. If everything worked, it should look like the screenshot above. From here, we can begin giving the model instructions to generate code!&lt;/p&gt;

&lt;p&gt;For this demo, provide it with a base set of instructions. Try customizing the following input:&lt;/p&gt;

&lt;p&gt;“I want to create a simple game of curling in python code. i want it to be playable on my computer. Please create a sample Python program.&lt;/p&gt;

&lt;p&gt;Packages: pygame”&lt;/p&gt;

&lt;p&gt;If your model ran predictably, this will give you a Python file named something like “curling_game.py” with a full game’s code inside. Download this file onto your local computer, open a terminal, and run it with &lt;code&gt;python3.11 curling_game.py&lt;/code&gt;. Our game looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5yrbeeqys9timusj8qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5yrbeeqys9timusj8qd.png" alt="image" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But looks can be deceiving: the game is far from playable in this one-shot state. It requires serious work on the code to make the game playable, especially for two players. We can use Claude Code with Qwen3.5 to make those adjustments, switch to an Anthropic model like &lt;a href="https://www.digitalocean.com/community/tutorials/claude-sonnet" rel="noopener noreferrer"&gt;Sonnet 4.6&lt;/a&gt; or &lt;a href="https://www.digitalocean.com/community/tutorials/claude-opus" rel="noopener noreferrer"&gt;Opus 4.6&lt;/a&gt;, or make the changes manually. From this base state, it took Qwen3.5 over an hour and at least 10 requests to make the game playable. Time was notably constrained by the single H200 GPU deployment we used for this demo, but the code output leaves significant room for improvement nonetheless. We expect that Opus 4.6 could accomplish the same task much more quickly, given its optimization for &lt;a href="https://www.digitalocean.com/community/tutorials/claude-code-gpu-droplets-vscode" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, relatively superior benchmark scores, and more optimized inference infrastructure.&lt;/p&gt;

&lt;p&gt;If you want to try it out, the file is available as a GitHub &lt;a href="https://gist.github.com/Jameshskelton/02be269e8d50f724cc910b35f6296e9c" rel="noopener noreferrer"&gt;Gist&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;Qwen3.5 VL represents an important step forward for open-source multimodal AI, demonstrating that publicly available models can increasingly rival proprietary systems in capability while offering far greater flexibility for developers. With its scalable architecture, efficient training infrastructure, and strong performance across tasks like coding, document understanding, and computer use, the Qwen3.5 suite highlights the growing maturity of the open AI ecosystem. As tools like GPU Droplets and frameworks such as Ollama make deploying large models easier than ever, vision-language systems like Qwen3.5 are poised to become foundational components in the next generation of AI-powered applications and physical AI systems.&lt;/p&gt;

</description>
      <category>qwen</category>
      <category>learning</category>
      <category>aimodels</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>7 OpenClaw Security Challenges to Watch for in 2026</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Thu, 12 Mar 2026 16:00:00 +0000</pubDate>
      <link>https://forem.com/digitalocean/7-openclaw-security-challenges-to-watch-for-in-2026-46b1</link>
      <guid>https://forem.com/digitalocean/7-openclaw-security-challenges-to-watch-for-in-2026-46b1</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally written by Fadeke Adegbuyi (Manager, Content Marketing)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;OpenClaw isn’t just another chatbot wrapper. It executes shell commands, controls your browser, manages your calendar, reads and writes files, and remembers everything across sessions. The &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;project&lt;/a&gt; runs locally on your machine and connects to WhatsApp, Telegram, iMessage, Discord, Slack, and over a dozen other platforms via &lt;a href="https://openclaw.ai/integrations" rel="noopener noreferrer"&gt;pre-built integrations&lt;/a&gt;. It functions as a truly connected personal assistant. As a result, the use cases people have dreamed up for OpenClaw are wild.&lt;/p&gt;

&lt;p&gt;One user showed an OpenClaw agent &lt;a href="https://x.com/xmayeth/status/2020883912734425389" rel="noopener noreferrer"&gt;making money on Polymarket&lt;/a&gt; by monitoring news feeds and executing trades automatically. Another gave their bot access to &lt;a href="https://x.com/MatznerJon/status/2019044317621567811" rel="noopener noreferrer"&gt;home surveillance cameras&lt;/a&gt;. Someone else unleashed subagents to apply for &lt;a href="https://x.com/nickvasiles/status/2021391007800328683" rel="noopener noreferrer"&gt;UpWork freelancing jobs&lt;/a&gt; on their behalf.&lt;/p&gt;

&lt;p&gt;

&lt;iframe class="tweet-embed" id="tweet-2019044317621567811-81" src="https://platform.twitter.com/embed/Tweet.html?id=2019044317621567811"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;But this kind of access to your digital life comes with real consequences when things go wrong. And things have gone wrong. Security researchers found that the agent shipped with &lt;a href="https://www.404media.co/silicon-valleys-favorite-new-ai-agent-has-serious-security-flaws/" rel="noopener noreferrer"&gt;serious flaws&lt;/a&gt; that made it possible for attackers to hijack machines with a single malicious link. Meanwhile, &lt;a href="https://www.digitalocean.com/resources/articles/what-is-moltbook" rel="noopener noreferrer"&gt;Moltbook&lt;/a&gt;, a Reddit-style platform with over 2.8 million AI agents, had its database completely &lt;a href="https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/" rel="noopener noreferrer"&gt;exposed&lt;/a&gt;, so anyone could take control of any AI agent on the platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;None of this means you should avoid OpenClaw entirely&lt;/strong&gt;. It means you should understand OpenClaw security challenges and take precautions before spinning up an agent with root access to your laptop. Running OpenClaw in an isolated cloud environment can help neutralize some of these risks—DigitalOcean's &lt;a href="https://www.digitalocean.com/blog/moltbot-on-digitalocean" rel="noopener noreferrer"&gt;1-Click Deploy for OpenClaw&lt;/a&gt;, for example, handles authentication, firewall rules, and container isolation out of the box so your personal machine stays out of the equation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are OpenClaw security challenges?
&lt;/h2&gt;

&lt;p&gt;OpenClaw security challenges boil down to a design tension: the tool needs broad system permissions to be useful, but those permissions create a massive attack surface when something goes wrong. The agent runs with whatever privileges your user account has—full disk, terminal, and network access—by design.&lt;/p&gt;

&lt;p&gt;It's also &lt;a href="https://www.digitalocean.com/resources/articles/agentic-ai" rel="noopener noreferrer"&gt;agentic&lt;/a&gt; and self-improving, meaning it can modify its own behavior, update its memory, and install new skills autonomously. This is impressive from a capability standpoint, but it is also another vector that can cause things to spiral when guardrails are missing. Pair that with defaults that skip authentication, an unvetted skill marketplace, and persistent memory storing weeks of context, and trouble follows. The takeaway: approach with caution, isolate from production systems, and carefully scrutinize the defaults.&lt;/p&gt;

&lt;p&gt;To his credit, OpenClaw creator &lt;a href="https://x.com/steipete" rel="noopener noreferrer"&gt;Peter Steinberger&lt;/a&gt; has been openly vocal about these risks and actively encourages running OpenClaw in a &lt;a href="https://docs.openclaw.ai/gateway/sandboxing" rel="noopener noreferrer"&gt;sandboxed environment&lt;/a&gt;, which isolates tool execution inside Docker containers to limit filesystem and process access when the model misbehaves. DigitalOcean's one-click deployment does exactly this out of the box, giving you that isolation without the manual setup.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/n2MrUtIT1m4"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  7 OpenClaw security challenges to watch out for
&lt;/h2&gt;

&lt;p&gt;We've already seen a security audit &lt;a href="https://www.kaspersky.com/blog/openclaw-vulnerabilities-exposed/55263/" rel="noopener noreferrer"&gt;uncover 512 vulnerabilities&lt;/a&gt; (eight critical) and &lt;a href="https://thehackernews.com/2026/02/researchers-find-341-malicious-clawhub.html" rel="noopener noreferrer"&gt;malicious ClawHub skills&lt;/a&gt; stealing cryptocurrency wallets. None of these challenges are theoretical. They're all based on incidents that have already played out within weeks of OpenClaw’s launch.&lt;/p&gt;

&lt;p&gt;These are the challenges you need to have on your radar if you're experimenting with OpenClaw:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. One-click remote code execution through WebSocket hijacking
&lt;/h3&gt;

&lt;p&gt;One of the most alarming OpenClaw vulnerabilities discovered so far is &lt;a href="https://thehackernews.com/2026/02/openclaw-bug-enables-one-click-remote.html" rel="noopener noreferrer"&gt;CVE-2026-25253&lt;/a&gt;, a one-click remote code execution flaw that Mav Levin, a founding researcher at DepthFirst, disclosed in late January 2026. The attack worked because OpenClaw's local server didn’t validate the WebSocket origin header—so any website you visited could silently connect to your running agent. An attacker just needed you to click one link. From there, they chained a cross-site WebSocket hijack into full code execution on your machine. The compromise happened in milliseconds. This is the core danger of running an agent locally on the same machine you're browsing the web with—one careless click and an attacker is already inside.&lt;/p&gt;

&lt;p&gt;Levin's proof-of-concept showed that visiting a single malicious webpage was enough to steal authentication tokens and gain operator-level access to the gateway API—giving an attacker access to change your config, read your files, and run commands.&lt;/p&gt;
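&lt;p&gt;The missing check at the heart of this class of bug can be sketched in a few lines. This is a simplified illustration, not OpenClaw's actual code, and the allow-listed origin is a placeholder:&lt;/p&gt;

```python
# Sketch: validate the Origin header of a WebSocket handshake against an
# allow-list before accepting the connection. Without this check, any page
# open in your browser can silently connect to a locally running agent.
ALLOWED_ORIGINS = {"http://localhost:3000"}  # placeholder allow-list

def origin_is_allowed(handshake_headers: dict) -> bool:
    origin = handshake_headers.get("Origin")
    return origin in ALLOWED_ORIGINS

assert origin_is_allowed({"Origin": "http://localhost:3000"})
assert not origin_is_allowed({"Origin": "https://evil.example"})
assert not origin_is_allowed({})  # no Origin header: reject rather than trust
```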

&lt;p&gt;&lt;strong&gt;Security checks&lt;/strong&gt;: In this instance, the fix landed in &lt;a href="https://github.com/openclaw/openclaw/releases" rel="noopener noreferrer"&gt;version 2026.1.29&lt;/a&gt;, so update immediately if you’re a version behind. Beyond that, best practices include avoiding running OpenClaw while browsing untrusted sites and considering putting the agent behind a reverse proxy with proper origin validation for an additional layer of protection.&lt;/p&gt;
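&lt;p&gt;As a rough illustration of the reverse-proxy approach, an nginx fragment like the following rejects WebSocket handshakes from unexpected origins before they reach the local gateway. The port, allowed origin, and TLS setup (omitted here) are placeholders, not OpenClaw defaults:&lt;/p&gt;

```nginx
# Hypothetical sketch: allow-list the Origin header before proxying
# WebSocket traffic to a locally bound agent gateway.
map $http_origin $origin_allowed {
    default                      0;
    "https://admin.example.com"  1;   # placeholder allowed origin
}

server {
    listen 8443;                       # TLS directives omitted for brevity
    location / {
        if ($origin_allowed = 0) { return 403; }
        proxy_pass http://127.0.0.1:18789;   # gateway port is an assumption
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```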

&lt;h3&gt;
  
  
  2. Tens of thousands of unprotected OpenClaw instances sitting open on the internet
&lt;/h3&gt;

&lt;p&gt;Here's the thing about OpenClaw's early defaults: the agent trusted any connection from localhost without asking for a password. That sounded fine until the gateway sat behind a misconfigured reverse proxy—at which point every external request was forwarded to 127.0.0.1, and the agent treated the whole internet as a trusted local user. SecurityScorecard's STRIKE team found over &lt;a href="https://www.bitsight.com/blog/openclaw-ai-security-risks-exposed-instances" rel="noopener noreferrer"&gt;30,000 internet-exposed OpenClaw instances&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Security researcher &lt;a href="https://x.com/theonejvo/status/2015401219746128322" rel="noopener noreferrer"&gt;Jamieson O'Reilly showed&lt;/a&gt; just how bad this gets. He accessed Anthropic API keys, Telegram bot tokens, Slack accounts, and complete chat histories from exposed instances, even sending messages on behalf of users and running commands with full admin privileges. No authentication required.&lt;/p&gt;

&lt;p&gt;This has since been addressed—&lt;a href="https://docs.openclaw.ai/gateway#runtime-model" rel="noopener noreferrer"&gt;gateway auth&lt;/a&gt; is now required by default, and the onboarding wizard auto-generates a token even for localhost.&lt;/p&gt;

&lt;p&gt;

&lt;iframe class="tweet-embed" id="tweet-2015401219746128322-801" src="https://platform.twitter.com/embed/Tweet.html?id=2015401219746128322"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security checks&lt;/strong&gt;: At a minimum, check whether your instance is reachable from the public internet. Use a &lt;a href="https://www.digitalocean.com/resources/articles/cloud-firewall" rel="noopener noreferrer"&gt;firewall&lt;/a&gt; to restrict access, enable gateway token authentication, and never expose the control plane without a &lt;a href="https://www.digitalocean.com/solutions/vpn" rel="noopener noreferrer"&gt;VPN&lt;/a&gt; or &lt;a href="https://www.digitalocean.com/community/tutorials/ssh-essentials-working-with-ssh-servers-clients-and-keys" rel="noopener noreferrer"&gt;SSH tunnel&lt;/a&gt; in front of it. This is a case where a managed cloud deployment can solve the problem outright—because your personal API keys, chat histories, and credentials aren’t sitting on an exposed local machine in the first place.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Malicious skills on ClawHub are poisoning the supply chain
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/openclaw/clawhub" rel="noopener noreferrer"&gt;ClawHub&lt;/a&gt;, OpenClaw's public skill marketplace, lets anyone publish an extension—the only requirement is a GitHub account older than one week. That low bar has unfortunately turned the marketplace into a target. Koi Security &lt;a href="https://www.koi.ai/blog/clawhavoc-341-malicious-clawedbot-skills-found-by-the-bot-they-were-targeting" rel="noopener noreferrer"&gt;audited all 2,857 skills on ClawHub&lt;/a&gt; and found 341 that were outright malicious. Bitdefender's independent scan put the number closer to &lt;a href="https://www.bitdefender.com/en-us/blog/businessinsights/technical-advisory-openclaw-exploitation-enterprise-networks" rel="noopener noreferrer"&gt;900 malicious skills&lt;/a&gt;, roughly 20% of all packages. A single account—"hightower6eu"—uploaded 354 malicious packages by itself.&lt;/p&gt;

&lt;p&gt;The attack is clever. You install what looks like a useful skill and the documentation looks professional. But buried in a "Prerequisites" section, it asks you to install something first—and that something is Atomic Stealer (&lt;a href="https://www.darktrace.com/blog/atomic-stealer-darktraces-investigation-of-a-growing-macos-threat" rel="noopener noreferrer"&gt;AMOS&lt;/a&gt;), a macOS credential-stealing malware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security checks&lt;/strong&gt;: OpenClaw has since &lt;a href="https://openclaw.ai/blog/virustotal-partnership" rel="noopener noreferrer"&gt;partnered with VirusTotal&lt;/a&gt; to scan new skill uploads, but Steinberger himself admitted this isn't a silver bullet. At a minimum, before installing any skill, read its source code. Check the publisher's account age and history. Put simply, treat every skill as untrusted code running with your agent's full permissions. Unlike some exposure risks, malicious skills are a threat regardless of where OpenClaw runs—a poisoned skill executes the same way on a cloud server as it does on your laptop.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Credential storage in plaintext and API key leakage
&lt;/h3&gt;

&lt;p&gt;One of the less glamorous but more dangerous issues is how OpenClaw handles secrets. The platform &lt;a href="https://permiso.io/blog/inside-the-openclaw-ecosystem-ai-agents-with-privileged-credentials" rel="noopener noreferrer"&gt;stores credentials in plaintext&lt;/a&gt;—including API keys for your LLM provider and tokens for every messaging platform your agent connects to—and those become targets the moment your instance is accessible to anyone other than you. Prompt injection attacks can also trick the agent into exfiltrating credentials by embedding hidden instructions in content the agent processes.&lt;/p&gt;

&lt;p&gt;Cisco's team tested a skill called &lt;a href="https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare" rel="noopener noreferrer"&gt;"What Would Elon Do?"&lt;/a&gt; and surfaced nine security findings, two of them critical. The skill instructed the bot to execute a curl command sending data to an external server controlled by the skill's author. Functionally, it was malware hiding behind a joke name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security check&lt;/strong&gt;: At a minimum, rotate your API keys regularly and store secrets using environment variables or a dedicated secrets manager rather than config files. It's also worth setting spending limits on your LLM provider accounts. That way, even if a key is compromised, it can't rack up thousands in charges.&lt;/p&gt;
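&lt;p&gt;A minimal sketch of the environment-variable approach, using a hypothetical helper rather than any OpenClaw API:&lt;/p&gt;

```python
import os

# Hypothetical helper: read an API key from the environment instead of a
# plaintext config file, and fail fast if it is missing so a misconfigured
# deployment never runs with an empty credential.
def require_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

os.environ["EXAMPLE_API_KEY"] = "demo-value"  # for illustration only
assert require_secret("EXAMPLE_API_KEY") == "demo-value"
```

In a real deployment the variable would be set by your process manager or a secrets manager, never hard-coded as it is in this illustration.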

&lt;h3&gt;
  
  
  5. Prompt injection attacks amplified by persistent memory
&lt;/h3&gt;

&lt;p&gt;What makes prompt injection in OpenClaw worse than in a typical &lt;a href="https://www.digitalocean.com/resources/articles/ai-agent-vs-ai-chatbot" rel="noopener noreferrer"&gt;chatbot&lt;/a&gt; is the persistent memory. The agent retains long-term context, preferences, and conversation history across sessions—which is one of its best features. But it also means a malicious instruction embedded in a website, email, or document doesn't have to execute immediately. Palo Alto Networks warned that these become "&lt;a href="https://www.paloaltonetworks.com/blog/network-security/why-moltbot-may-signal-ai-crisis/" rel="noopener noreferrer"&gt;stateful, delayed-execution attacks&lt;/a&gt;". A hidden prompt in a PDF you opened last Tuesday could sit dormant in the agent's memory until a future task triggers it days later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security check&lt;/strong&gt;: There's no perfect fix for prompt injection right now; it's an unresolved problem in agentic AI. But you can reduce the blast radius by limiting what tools and permissions your agent has access to, segmenting its access to sensitive systems, and reviewing its memory and context periodically for anything unexpected.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Shadow AI spreading through enterprise networks
&lt;/h3&gt;

&lt;p&gt;This one's for anyone working at a company where developers tinker on their work machines. Token Security found that &lt;a href="https://www.token.security/blog/the-clawdbot-enterprise-ai-risk-one-in-five-have-it-installed" rel="noopener noreferrer"&gt;22% of their enterprise customers&lt;/a&gt; have employees running OpenClaw as shadow AI without IT approval. Bitdefender confirmed the same, showing &lt;a href="https://businessinsights.bitdefender.com/technical-advisory-openclaw-exploitation-enterprise-networks" rel="noopener noreferrer"&gt;employees deploying agents&lt;/a&gt; on corporate machines connected to internal networks. An OpenClaw agent on a developer's laptop with VPN access to production means every vulnerability above is now a business problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security check&lt;/strong&gt;: If you're on a security team, you should scan your network for OpenClaw instances now. Set up detection for its WebSocket traffic patterns, and mandate that any approved use runs in an isolated environment—a VM or cloud server—rather than on laptops with internal access. Giving teams an approved, isolated deployment path is the fastest way to get ahead of shadow AI—it's much easier to enforce guardrails when the alternative isn't 'don't use it at all.'&lt;/p&gt;

&lt;h3&gt;
  
  
  7. The Moltbook database breach exposing millions of agent credentials
&lt;/h3&gt;

&lt;p&gt;The security mess isn't limited to OpenClaw itself. Moltbook, the social network for AI agents built by &lt;a href="https://x.com/MattPRD" rel="noopener noreferrer"&gt;Matt Schlicht&lt;/a&gt;, &lt;a href="https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/" rel="noopener noreferrer"&gt;suffered a database exposure&lt;/a&gt; that cybersecurity firm Wiz discovered in early February. The database had zero access controls. Anyone who found it could view 1.5 million API tokens, 35,000 email addresses, and private messages between agents—enough to take control of any agent on the platform. China's Ministry of Industry and Information Technology &lt;a href="https://www.reuters.com/world/china/china-warns-security-risks-linked-openclaw-open-source-ai-agent-2026-02-05/" rel="noopener noreferrer"&gt;issued a formal warning&lt;/a&gt; about OpenClaw security risks, citing incidents like this breach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security check&lt;/strong&gt;: If you've used Moltbook, rotate every API key and token associated with your agent. Treat third-party platforms in the OpenClaw ecosystem with the same skepticism you'd apply to any new service asking for your credentials and consider additional security checks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Any references to third-party companies, trademarks, or logos in this document are for informational purposes only and do not imply any affiliation with, sponsorship by, or endorsement of those third parties.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pricing and product information accurate as of February 2026.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openclaw</category>
      <category>security</category>
      <category>learning</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Tue, 10 Mar 2026 18:07:19 +0000</pubDate>
      <link>https://forem.com/digitalocean_staff/-2im1</link>
      <guid>https://forem.com/digitalocean_staff/-2im1</guid>
      <description>&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/digitalocean/gpu-programming-for-beginners-rocm-amd-setup-to-edge-detection-29bm" class="crayons-story__hidden-navigation-link"&gt;GPU Programming for Beginners: ROCm + AMD Setup to Edge Detection&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;
          &lt;a class="crayons-logo crayons-logo--l" href="/digitalocean"&gt;
            &lt;img alt="DigitalOcean logo" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F175%2F369f1227-0eac-4a88-8d3c-08851bf0b117.png" class="crayons-logo__image"&gt;
          &lt;/a&gt;

          &lt;a href="/digitalocean_staff" class="crayons-avatar  crayons-avatar--s absolute -right-2 -bottom-2 border-solid border-2 border-base-inverted  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F64516%2Fa0c9989b-6d18-46c7-bc66-4c2c1580534e.jpg" alt="digitalocean_staff profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/digitalocean_staff" class="crayons-story__secondary fw-medium m:hidden"&gt;
              DigitalOcean
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                DigitalOcean
                
              
              &lt;div id="story-author-preview-content-3318030" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/digitalocean_staff" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F64516%2Fa0c9989b-6d18-46c7-bc66-4c2c1580534e.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;DigitalOcean&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

            &lt;span&gt;
              &lt;span class="crayons-story__tertiary fw-normal"&gt; for &lt;/span&gt;&lt;a href="/digitalocean" class="crayons-story__secondary fw-medium"&gt;DigitalOcean&lt;/a&gt;
            &lt;/span&gt;
          &lt;/div&gt;
          &lt;a href="https://dev.to/digitalocean/gpu-programming-for-beginners-rocm-amd-setup-to-edge-detection-29bm" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Mar 10&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/digitalocean/gpu-programming-for-beginners-rocm-amd-setup-to-edge-detection-29bm" id="article-link-3318030"&gt;
          GPU Programming for Beginners: ROCm + AMD Setup to Edge Detection
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/gpu"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;gpu&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/amd"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;amd&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/programming"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;programming&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/digitalocean/gpu-programming-for-beginners-rocm-amd-setup-to-edge-detection-29bm" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;1&lt;span class="hidden s:inline"&gt; reaction&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/digitalocean/gpu-programming-for-beginners-rocm-amd-setup-to-edge-detection-29bm#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            2 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;




</description>
      <category>gpu</category>
      <category>amd</category>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>GPU Programming for Beginners: ROCm + AMD Setup to Edge Detection</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Tue, 10 Mar 2026 16:00:00 +0000</pubDate>
      <link>https://forem.com/digitalocean/gpu-programming-for-beginners-rocm-amd-setup-to-edge-detection-29bm</link>
      <guid>https://forem.com/digitalocean/gpu-programming-for-beginners-rocm-amd-setup-to-edge-detection-29bm</guid>
      <description>&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/TdHexc0Garg"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;In this hands-on tutorial, we demystify GPU computation and show you how to write your own GPU programs from scratch. Understanding GPU programming is essential for anyone looking to grasp why AI models depend on this specialized hardware.&lt;/p&gt;

&lt;p&gt;We'll use ROCm and HIP (AMD's version of CUDA) to take you from zero to running real GPU code, culminating in a computer vision edge detector that processes images in parallel.&lt;/p&gt;

&lt;p&gt;You can find the code in the &lt;strong&gt;project repository&lt;/strong&gt;: &lt;a href="https://github.com/oconnoob/intro_to_rocm_hip/blob/main/README.md" rel="noopener noreferrer"&gt;https://github.com/oconnoob/intro_to_rocm_hip/blob/main/README.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👇 WHAT YOU'LL LEARN IN THIS VIDEO 👇&lt;/p&gt;

&lt;p&gt;🔧 &lt;strong&gt;Getting Set Up with ROCm&lt;/strong&gt;: There are two ways to get started: spin up a GPU Droplet on DigitalOcean with ROCm pre-installed, or install ROCm yourself on an Ubuntu system with an AMD GPU. We cover both methods step by step.&lt;/p&gt;

&lt;p&gt;➕ &lt;strong&gt;Example 1: Vector Addition (The Basics)&lt;/strong&gt;: Learn the fundamental structure of GPU programs: kernels, threads, blocks, and memory management. We'll add one million elements in parallel and verify our results.&lt;/p&gt;

&lt;p&gt;⚡ &lt;strong&gt;Example 2: Matrix Multiplication (Why Libraries Matter)&lt;/strong&gt;: Discover why optimized libraries like rocBLAS dramatically outperform naive implementations. This is the operation powering most AI models you use daily.&lt;/p&gt;

&lt;p&gt;👁️ &lt;strong&gt;Example 3: Edge Detection with Sobel Filter (The Cool Stuff)&lt;/strong&gt;: Apply your GPU programming skills to a real computer vision problem: detecting edges in images using a classic Sobel filter, all running massively parallel on the GPU.&lt;/p&gt;

&lt;p&gt;Whether you're an AI enthusiast wanting to understand the hardware layer or a developer looking to harness GPU compute power, this tutorial gives you the foundation to start writing efficient parallel programs.&lt;/p&gt;

</description>
      <category>gpu</category>
      <category>amd</category>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>In case you haven't heard, we're back! Follow the DigitalOcean organization for updates, tutorials, and hands-on AI learning.</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Fri, 06 Mar 2026 22:25:26 +0000</pubDate>
      <link>https://forem.com/digitalocean_staff/in-case-you-havent-heard-were-back-follow-the-digitalocean-organization-for-updates-tutorials-53oj</link>
      <guid>https://forem.com/digitalocean_staff/in-case-you-havent-heard-were-back-follow-the-digitalocean-organization-for-updates-tutorials-53oj</guid>
      <description>&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/digitalocean/digitalocean-on-devto-practical-ai-insights-for-builders-3g0c" class="crayons-story__hidden-navigation-link"&gt;DigitalOcean on Dev.to: Practical AI Insights for Builders&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;
          &lt;a class="crayons-logo crayons-logo--l" href="/digitalocean"&gt;
            &lt;img alt="DigitalOcean logo" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F175%2F369f1227-0eac-4a88-8d3c-08851bf0b117.png" class="crayons-logo__image"&gt;
          &lt;/a&gt;

          &lt;a href="/jlulks" class="crayons-avatar  crayons-avatar--s absolute -right-2 -bottom-2 border-solid border-2 border-base-inverted  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3476605%2F8f9c9b3a-5b45-42b8-88ca-4f557174dba7.jpg" alt="jlulks profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/jlulks" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Jess Lulka
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Jess Lulka
                
              
              &lt;div id="story-author-preview-content-3222465" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/jlulks" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3476605%2F8f9c9b3a-5b45-42b8-88ca-4f557174dba7.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Jess Lulka&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

            &lt;span&gt;
              &lt;span class="crayons-story__tertiary fw-normal"&gt; for &lt;/span&gt;&lt;a href="/digitalocean" class="crayons-story__secondary fw-medium"&gt;DigitalOcean&lt;/a&gt;
            &lt;/span&gt;
          &lt;/div&gt;
          &lt;a href="https://dev.to/digitalocean/digitalocean-on-devto-practical-ai-insights-for-builders-3g0c" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Feb 2&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/digitalocean/digitalocean-on-devto-practical-ai-insights-for-builders-3g0c" id="article-link-3222465"&gt;
          DigitalOcean on Dev.to: Practical AI Insights for Builders
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/digitalocean"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;digitalocean&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/machinelearning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;machinelearning&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/learning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;learning&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/digitalocean/digitalocean-on-devto-practical-ai-insights-for-builders-3g0c" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;22&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/digitalocean/digitalocean-on-devto-practical-ai-insights-for-builders-3g0c#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              1&lt;span class="hidden s:inline"&gt; comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            2 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;




</description>
      <category>ai</category>
      <category>digitalocean</category>
      <category>machinelearning</category>
      <category>learning</category>
    </item>
    <item>
      <title>We're DigitalOcean and we're excited to be here with you! </title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Thu, 23 Jul 2020 11:47:06 +0000</pubDate>
      <link>https://forem.com/digitalocean/we-re-digitalocean-and-we-re-excited-to-be-here-with-you-33hc</link>
      <guid>https://forem.com/digitalocean/we-re-digitalocean-and-we-re-excited-to-be-here-with-you-33hc</guid>
      <description>&lt;p&gt;Hey everyone! We're so excited to be here at CodeLand:Distributed. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://digitalocean.com" rel="noopener noreferrer"&gt;DigitalOcean&lt;/a&gt; offers the most easy-to-use and developer-friendly cloud platform. We help you manage and scale apps with an intuitive API, multiple storage options, integrated firewalls load balancers, and more. We're on a mission to simplify cloud computing so developers and businesses can spend more time creating software that changes the world!&lt;/p&gt;

&lt;h3&gt;
  
  
  Got Questions?! Let's chat!
&lt;/h3&gt;

&lt;p&gt;Sammy the shark and our team members are here to connect with any questions you might have. We'll be at our &lt;a href="https://dev.to/join_channel_invitation/digitalocean-5eag?invitation_slug=invitation-link-e9804f"&gt;DEV Connect channel&lt;/a&gt; all day, so stop by and say hello!&lt;/p&gt;

&lt;p&gt;We're also happy to respond to any comments down below. 👇&lt;/p&gt;

&lt;h3&gt;
  
  
  Digital Swag
&lt;/h3&gt;

&lt;p&gt;Today, we're offering all CodeLand attendees a $100 USD free trial. Sign up below and we'll follow up with all the details: &lt;/p&gt;


&lt;div class="ltag__user-subscription-tag"&gt;
  &lt;div class="ltag__user-subscription-tag__container"&gt;

    &lt;div class="ltag__user-subscription-tag__content w-100"&gt;

      &lt;div class="ltag__user-subscription-tag__profile-images signed-out"&gt;

        &lt;span class="crayons-avatar crayons-avatar--xl ltag__user-subscription-tag__author-profile-image m-auto"&gt;
          &lt;img class="crayons-avatar__image ltag__user-subscription-tag__author-profile-image m-0" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F64516%2Fa0c9989b-6d18-46c7-bc66-4c2c1580534e.jpg"&gt;
        &lt;/span&gt;

        &lt;span class="crayons-avatar crayons-avatar--xl ltag__user-subscription-tag__subscriber-profile-image m-auto"&gt;
          &lt;img class="crayons-avatar__image ltag__user-subscription-tag__subscriber-profile-image m-0" alt=""&gt;
        &lt;/span&gt;

      &lt;/div&gt;

      &lt;h2 class="ltag__user-subscription-tag__cta-text fs-xl mt-0 mb-4 align-center"&gt;
        Sign up for a $100 DigitalOcean Promo!
      &lt;/h2&gt;

      &lt;div class="ltag__user-subscription-tag__subscription-area align-center"&gt;
        &lt;div class="ltag__user-subscription-tag__signed-out"&gt;
          &lt;div class="fs-base mb-2"&gt;
            You must first sign in to DEV Community.
          &lt;/div&gt;
          &lt;a href="/enter" class="c-cta c-cta--default"&gt;
            Sign In
          &lt;/a&gt;
        &lt;/div&gt;

        &lt;div class="ltag__user-subscription-tag__signed-in hidden"&gt;
          
            Subscribe
          
          &lt;div class="ltag__user-subscription-tag__logged-in-text fs-s mb-3"&gt;
            You'll subscribe with the email address associated with your DEV Community account. To use a different email address, you can &lt;a href="/settings"&gt;update your email address in Settings&lt;/a&gt;.
          &lt;/div&gt;
        &lt;/div&gt;

        &lt;div class="ltag__user-subscription-tag__apple-auth fs-s hidden"&gt;
          Subscribe
          &lt;div class="fs-s"&gt;
            Hey, there! It looks like when you created your DEV Community account you signed up with Apple using a private relay email address. If you'd like to subscribe, please &lt;a href="/settings"&gt;update your email address in Settings&lt;/a&gt; first to a different email address.
          &lt;/div&gt;
        &lt;/div&gt;

        &lt;div class="ltag__user-subscription-tag__response-message crayons-notice fs-base w-100 hidden"&gt;&lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="user-subscription-confirmation-modal hidden"&gt;
      &lt;div class="crayons-modal__box__body"&gt;
        &lt;p class="fs-base mb-4 mt-0"&gt;
          You'll share your email address, username, name, and DEV Community profile URL with &lt;span class="ltag__user-subscription-tag__author-username fw-medium"&gt;digitalocean_staff&lt;/span&gt;. Once you do this, you cannot undo this.
        &lt;/p&gt;

&lt;div class="ltag__user-subscription-tag__confirmation-buttons"&gt;
          
            Confirm subscription
          
          
            Cancel
          
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;We also have some &lt;a href="https://imgur.com/a/q6i58" rel="noopener noreferrer"&gt;fun wallpapers&lt;/a&gt; for anyone looking to spruce up their desktop or virtual backgrounds. ✨&lt;/p&gt;

&lt;h3&gt;
  
  
  Job Opportunities at DigitalOcean
&lt;/h3&gt;

&lt;p&gt;DigitalOcean is a values-driven organization. Here is what we believe in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our community is bigger than just us&lt;/li&gt;
&lt;li&gt;Simplicity in all we DO&lt;/li&gt;
&lt;li&gt;We speak up when we have something to say and listen when others DO&lt;/li&gt;
&lt;li&gt;We are accountable to deliver on our commitments&lt;/li&gt;
&lt;li&gt;Love is at our core&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Come swim with us: &lt;a href="https://do.co/careers" rel="noopener noreferrer"&gt;https://do.co/careers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp0xus8z4qagtrikazlv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp0xus8z4qagtrikazlv9.png" alt="developer-community"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>codeland</category>
    </item>
    <item>
      <title>How to Code in Go eBook</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Mon, 22 Jun 2020 15:46:31 +0000</pubDate>
      <link>https://forem.com/digitalocean/how-to-code-in-go-ebook-ifl</link>
      <guid>https://forem.com/digitalocean/how-to-code-in-go-ebook-ifl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to the eBook
&lt;/h2&gt;

&lt;p&gt;This book is designed to introduce you to writing programs with the Go programming language. You’ll learn how to write useful tools and applications that can run on remote servers, or local Windows, macOS, and Linux systems for development.&lt;/p&gt;

&lt;p&gt;This book is based on the &lt;a href="https://www.digitalocean.com/community/tutorial_series/how-to-code-in-go"&gt;How To Code in Go&lt;/a&gt; tutorial series found on &lt;a href="https://www.digitalocean.com/community"&gt;DigitalOcean Community&lt;/a&gt;. The topics that it covers include how to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install and set up a local Go development environment on Windows, macOS, and Linux systems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Design your programs with conditional logic, including switch statements to control program flow&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define your own data structures and create interfaces to them for reusable code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write custom error handling functions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build and install your Go programs so that they can run on different operating systems and CPU architectures&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use flags to pass arguments to your programs and override default options&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each chapter can be read on its own or used as a reference, or you can follow the book from beginning to end. Feel free to jump to the chapter or chapters that best suit your purpose as you learn Go with this book.&lt;/p&gt;

&lt;h2&gt;
  
  
  Download the eBook
&lt;/h2&gt;

&lt;p&gt;You can download the eBook in either the EPUB or PDF format by following the links below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://do.co/go-book-epub"&gt;&lt;em&gt;How To Code in Go&lt;/em&gt; eBook in &lt;strong&gt;EPUB format&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://do.co/go-book-pdf"&gt;&lt;em&gt;How To Code in Go&lt;/em&gt; eBook in &lt;strong&gt;PDF format&lt;/strong&gt;&lt;/a&gt;  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After you’ve finished this book, if you’d like to learn more about how to build tools and applications with Go, visit the DigitalOcean Community’s &lt;a href="https://www.digitalocean.com/community/tags/go"&gt;Go section&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>tutorial</category>
      <category>beginners</category>
      <category>ebook</category>
    </item>
    <item>
      <title>How to Code in Go (Tutorial Series)</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Mon, 30 Dec 2019 17:36:30 +0000</pubDate>
      <link>https://forem.com/digitalocean/how-to-code-in-go-32p0</link>
      <guid>https://forem.com/digitalocean/how-to-code-in-go-32p0</guid>
      <description>&lt;p&gt;Go (or GoLang) is a modern programming language originally developed by Google that uses high-level syntax similar to scripting languages. It is popular for its minimal syntax and innovative handling of concurrency, as well as for the tools it provides for building native binaries on foreign platforms.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-go-and-set-up-a-local-programming-environment-on-ubuntu-18-04"&gt;How To Install Go and Set Up a Local Programming Environment on Ubuntu 18.04&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Go is a programming language that was designed for fast compilation, ease of programming, and efficient execution in production. This tutorial will guide you through installing and configuring a programming workspace with Go via the command line on Ubuntu 18.04.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-go-and-set-up-a-local-programming-environment-on-macos"&gt;How To Install Go and Set Up a Local Programming Environment on macOS&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Go is a programming language that was designed for fast compilation, ease of programming, and efficient execution in production. This tutorial will guide you through installing and configuring a programming workspace with Go via the command line on macOS.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-go-and-set-up-a-local-programming-environment-on-windows-10"&gt;How To Install Go and Set Up a Local Programming Environment on Windows 10&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Go is a programming language that was designed for fast compilation, ease of programming, and efficient execution in production. This tutorial will guide you through installing and configuring a programming workspace with Go via the command line on Windows 10.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-write-your-first-program-in-go"&gt;How To Write Your First Program in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;The “Hello, World!” program is a classic and time-honored tradition in computer programming. It's a simple and complete first program for beginners, and it's a good way to make sure your environment is properly configured. This tutorial will walk you through creating this program in Go.&lt;/p&gt;
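&lt;p&gt;The classic program itself is only a few lines. A minimal version (sketched here with the message in a helper function so it is easy to check) looks like this:&lt;/p&gt;

```go
package main

import "fmt"

// greeting returns the classic first-program message.
func greeting() string {
	return "Hello, World!"
}

func main() {
	fmt.Println(greeting()) // prints: Hello, World!
}
```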

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-the-gopath"&gt;Understanding the GOPATH&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;This article will walk you through understanding what the &lt;code&gt;GOPATH&lt;/code&gt; is, how it works, and how to set it up. This is a crucial step for setting up a Go development environment, as well as understanding how Go finds, installs, and builds source files.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-write-comments-in-go"&gt;How To Write Comments in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Comments are lines that exist in computer programs that are ignored by compilers and interpreters. Including comments in programs makes code more readable for humans as it provides some information or explanation about what each part of a program is doing. In this article, you'll learn how to work with comments in Go.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-data-types-in-go"&gt;Understanding Data Types in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Data types specify the kinds of values that particular variables will store when you are writing a program. The data type also determines what operations can be performed on the data. In this article, we will go over the important data types native to the Go programming language. Understanding some basic data types will enable you to write clearer code that performs efficiently.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-working-with-strings-in-go"&gt;An Introduction to Working with Strings in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;This Go tutorial will go over the basics of working with strings, including how to create and print strings, concatenate and replicate strings, and store strings in variables.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-format-strings-in-go"&gt;How To Format Strings in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In this tutorial, we’ll go over some of the ways we can work with Go strings to make sure that all output text is formatted correctly. Topics we will cover include: quotes, apostrophes, multiple lines, escape characters, and raw strings.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-the-strings-package-in-go"&gt;An Introduction to the Strings Package in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Go's &lt;code&gt;strings&lt;/code&gt; package has several functions available to work with the string data type. These functions let us easily modify and manipulate...&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-variables-and-constants-in-go"&gt;How To Use Variables and Constants in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Variables are an important programming concept to master. They are symbols that stand in for a value you’re using in a program. This tutorial will cover some variable basics and best practices for using them within the Go programs you create.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-convert-data-types-in-go"&gt;How To Convert Data Types in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In Go, data types are used to classify one particular type of data, determining the values that you can assign to the type and the operations you can perform on it. When programming, there are times you will need to convert values between types in order to manipulate values in a different way. This tutorial will guide you through converting numbers and strings, as well as provide examples to help familiarize yourself with different use cases.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-do-math-in-go-with-operators"&gt;How To Do Math in Go with Operators&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Effectively performing mathematical operations in programming is an important skill to develop because of how frequently you’ll work with numbers. This tutorial will review operators that we can use with the integer and float data types in Go.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-boolean-logic-in-go"&gt;Understanding Boolean Logic in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;The Boolean data type can be one of two values, either True or False. We use Booleans in programming to make comparisons and to control the flow of the program. In this tutorial, we’ll go over the basics you’ll need to understand how Booleans work in Go, including Boolean comparison and logical operators, and truth tables.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-maps-in-go"&gt;Understanding Maps in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Most modern programming languages have the concept of a dictionary or a hash type. These types are commonly used to store data in pairs with a key that maps to a value. In Go, the map is what most programmers would think of as the dictionary type. It maps keys to values, making key-value pairs that are a useful way to store data in Go. In this article, you'll learn how Go maps work.&lt;/p&gt;
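As a small sketch of the basics, including the comma-ok idiom for distinguishing a present key from an absent one:

```go
package main

import "fmt"

func main() {
	// A map from string keys to int values.
	ages := map[string]int{"sammy": 5}
	ages["jamie"] = 32 // insert a new key-value pair

	// The comma-ok idiom reports whether the key exists.
	age, ok := ages["sammy"]
	fmt.Println(age, ok) // 5 true

	_, ok = ages["casey"]
	fmt.Println(ok) // false: key is absent
}
```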

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-arrays-and-slices-in-go"&gt;Understanding Arrays and Slices in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;This article will cover the array and slice data structures in the Go programming language, providing you with the information necessary to choose between them appropriately. You'll also review the most common ways to declare and work with both arrays and slices. The tutorial will first describe arrays and how to manipulate them, followed by an explanation of slices and how they differ.&lt;/p&gt;
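A minimal sketch of the key difference: arrays have a fixed length, while slices can grow with `append`:

```go
package main

import "fmt"

func main() {
	// An array's length is part of its type and cannot change.
	arr := [3]string{"blue", "coral", "red"}

	// A slice is a flexible, growable view over an underlying array.
	sl := []int{1, 2, 3}
	sl = append(sl, 4) // append grows the slice

	fmt.Println(arr[0], len(sl), sl[1:3]) // blue 4 [2 3]
}
```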

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/handling-errors-in-go"&gt;Handling Errors in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Robust code needs to react correctly to unexpected circumstances like bad user input, faulty network connections, and failing disks. Error handling is the process of identifying when your program is in an unexpected state, and taking steps to record diagnostic information for later debugging.&lt;/p&gt;
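A minimal sketch of the convention at the heart of Go error handling: functions return an `error` as their last value, and callers check it (the `divide` function is our own illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// divide returns an error instead of crashing on a bad input.
func divide(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	if q, err := divide(10, 2); err == nil {
		fmt.Println(q) // 5
	}
	if _, err := divide(1, 0); err != nil {
		fmt.Println("error:", err) // error: division by zero
	}
}
```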

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/creating-custom-errors-in-go"&gt;Creating Custom Errors in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;When communicating more complicated error information to your users, or to your future self when debugging, the standard &lt;code&gt;errors.New&lt;/code&gt; and &lt;code&gt;fmt.Errorf&lt;/code&gt; functions are sometimes not enough to adequately capture and report what has happened. To convey this more complex error information, we can implement the standard library interface type, &lt;code&gt;error&lt;/code&gt;, to get more functionality.&lt;/p&gt;
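Any type with an `Error() string` method satisfies the `error` interface. A minimal sketch with an illustrative type of our own:

```go
package main

import "fmt"

// RequestError carries a status code alongside a message. Implementing
// Error() makes it satisfy the standard library error interface.
type RequestError struct {
	StatusCode int
	Message    string
}

func (e *RequestError) Error() string {
	return fmt.Sprintf("status %d: %s", e.StatusCode, e.Message)
}

func main() {
	var err error = &RequestError{StatusCode: 503, Message: "unavailable"}
	fmt.Println(err) // status 503: unavailable
}
```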

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/handling-panics-in-go"&gt;Handling Panics in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Panics are unforeseeable errors that will spontaneously terminate and exit a running Go program. Common mistakes are often responsible for creating panics. In this tutorial, we'll examine a few ways that common operations can produce panics in Go, and we'll also see ways to avoid those panics. We'll also use defer statements along with the recover function to capture panics before they have a chance to unexpectedly terminate our running Go programs.&lt;/p&gt;
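A minimal sketch of the `defer`/`recover` pattern the tutorial describes: an out-of-range index would panic, but the deferred function converts it into an ordinary error (the `safeIndex` helper is our own illustration):

```go
package main

import "fmt"

// safeIndex recovers from the out-of-range panic and reports it as a value.
func safeIndex(s []int, i int) (v int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return s[i], nil
}

func main() {
	v, err := safeIndex([]int{1, 2, 3}, 1)
	fmt.Println(v, err) // 2 <nil>

	_, err = safeIndex([]int{1, 2, 3}, 9)
	fmt.Println(err) // the panic was captured instead of crashing
}
```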

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/importing-packages-in-go"&gt;Importing Packages in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Making use of packages allows us to make our programs more robust and powerful. This tutorial will walk you through installing, importing, and aliasing packages.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-write-packages-in-go"&gt;How To Write Packages in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Go packages are directories that consist of Go code. This tutorial will guide you through writing Go packages for use within other programming files.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-package-visibility-in-go"&gt;Understanding Package Visibility in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Visibility in the Go programming language means the file space from which a package or other construct can be referenced. In this article, you will learn how to control package visibility, as well as how to protect parts of your code that should only be used inside your package. To do this, we will create a basic logger to log and debug messages, using packages with varying degrees of item visibility.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-write-conditional-statements-in-go"&gt;How To Write Conditional Statements in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Conditional statements are part of every programming language. With conditional statements, we can have code that sometimes runs and at other times does not run, depending on the conditions of the program at that time. This tutorial will take you through writing conditional statements in the Go programming language.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-write-switch-statements-in-go"&gt;How To Write Switch Statements in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;code&gt;switch&lt;/code&gt; is an alternative conditional statement useful for communicating actions taken by your Go programs when presented with different options. Everything we can write with the &lt;code&gt;switch&lt;/code&gt; statement can also be written with if statements. We'll look at a few examples of what the &lt;code&gt;switch&lt;/code&gt; statement can do, the if statements it replaces, and where it's most appropriately applied.&lt;/p&gt;
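For example, a `switch` with no expression replaces an if/else-if chain (the `grade` function is our own illustration):

```go
package main

import "fmt"

// grade maps a score to a letter. A switch with no expression evaluates
// each case condition in order, like an if/else-if chain.
func grade(score int) string {
	switch {
	case score >= 90:
		return "A"
	case score >= 80:
		return "B"
	default:
		return "F"
	}
}

func main() {
	fmt.Println(grade(95), grade(83), grade(20)) // A B F
}
```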

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-construct-for-loops-in-go"&gt;How To Construct For Loops in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In the Go programming language, a &lt;code&gt;for&lt;/code&gt; loop implements the repeated execution of code based on a loop counter or loop variable. In this tutorial, you will learn how Go’s &lt;code&gt;for&lt;/code&gt; loop works, including the three major variations of its use: ForClause, Condition, and RangeClause. We'll start by showing how to create different types of &lt;code&gt;for&lt;/code&gt; loops, followed by how to loop through sequential data types in Go. We'll end by explaining how to use nested loops.&lt;/p&gt;
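A quick sketch of two of the variations mentioned above, the ForClause and the RangeClause:

```go
package main

import "fmt"

func main() {
	// ForClause: init statement, condition, post statement.
	sum := 0
	for i := 1; i <= 3; i++ {
		sum += i
	}
	fmt.Println(sum) // 6

	// RangeClause: iterate a sequential type, yielding index and value.
	for i, v := range []string{"a", "b"} {
		fmt.Println(i, v)
	}
}
```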

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/using-break-and-continue-statements-when-working-with-loops-in-go"&gt;Using Break and Continue Statements When Working with Loops in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Using &lt;strong&gt;for loops&lt;/strong&gt; in Go allows you to automate and repeat tasks in an efficient manner. Learning how to control the operation and flow of loops will allow for customized logic in your program. You can control your loops with the &lt;code&gt;break&lt;/code&gt; and &lt;code&gt;continue&lt;/code&gt; statements.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-define-and-call-functions-in-go"&gt;How To Define and Call Functions in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;A function is a section of code that, once defined, can be reused. Functions are used to make your code easier to understand by breaking it into small, understandable tasks that can be used more than once throughout your program. In this tutorial, we’ll go over how to define your own functions to use in your coding projects.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-variadic-functions-in-go"&gt;How To Use Variadic Functions in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;A variadic function is a function that accepts zero, one, or more values as a single argument. While variadic functions are not the common case, they can be used to make your code cleaner and more readable.&lt;/p&gt;
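A minimal sketch: inside a variadic function the parameter is an ordinary slice, and an existing slice can be expanded into the arguments with `...` (the `sum` function is our own illustration):

```go
package main

import "fmt"

// sum accepts zero or more ints; inside the function, nums is a []int.
func sum(nums ...int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

func main() {
	fmt.Println(sum(), sum(1, 2, 3)) // 0 6

	vals := []int{4, 5}
	fmt.Println(sum(vals...)) // a slice expanded with ...: 9
}
```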

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-defer-in-go"&gt;Understanding defer in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Go has many of the common control flow keywords found in other programming languages, such as &lt;code&gt;if&lt;/code&gt;, &lt;code&gt;switch&lt;/code&gt;, and &lt;code&gt;for&lt;/code&gt;. One keyword that isn't found in most other programming languages is &lt;code&gt;defer&lt;/code&gt;, and though it's less common, you'll quickly see how useful it can be in your programs. In this article we will learn how to properly use the &lt;code&gt;defer&lt;/code&gt; statement for cleaning up resources, as well as several common mistakes that are made when using it.&lt;/p&gt;
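A minimal sketch of the two rules to remember: deferred calls run after the surrounding function body finishes, and they run in last-in-first-out order (the `events` helper is our own illustration):

```go
package main

import "fmt"

// events records execution order: deferred calls run after the function
// body finishes, in last-in-first-out order.
func events() (log []string) {
	defer func() { log = append(log, "deferred A") }() // runs last
	defer func() { log = append(log, "deferred B") }() // runs first
	log = append(log, "body")
	return
}

func main() {
	fmt.Println(events()) // [body deferred B deferred A]
}
```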

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-init-in-go"&gt;Understanding init in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In Go, the predefined &lt;code&gt;init()&lt;/code&gt; function sets off a piece of code to run before any other part of your package. This code executes as soon as the package is imported, and can be used when you need your application to initialize in a specific state. In this tutorial, you'll learn how &lt;code&gt;init()&lt;/code&gt; is used for the setup and initialization of specific package variables, one-time computations, and the registration of a package for use with another package.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/customizing-go-binaries-with-build-tags"&gt;Customizing Go Binaries with Build Tags&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In Go, a build tag, or a build constraint, is an identifier added to a piece of code that determines when the file should be included in a package during the build process. This allows you to build different versions of your Go application from the same source code and to toggle between them in a fast and organized manner. In this article, you will use build tags in Go to generate different executable binaries that offer Free, Pro, and Enterprise feature sets of a sample application.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-pointers-in-go"&gt;Understanding Pointers in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;When writing software in Go you'll be writing functions and methods, and you pass data to them as arguments. Sometimes the function needs a local copy of the data and you want the original to remain unchanged; at other times, you want the function to be able to modify the original. In this article, you will learn how to create and use pointers to share access to the memory space for a variable.&lt;/p&gt;
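A minimal sketch: passing a pointer lets a function modify the caller's variable in place (the `reset` function is our own illustration):

```go
package main

import "fmt"

// reset takes a pointer, so it modifies the caller's variable in place.
func reset(n *int) {
	*n = 0
}

func main() {
	count := 5
	p := &count        // p holds the address of count
	fmt.Println(*p)    // dereferencing reads the value: 5
	reset(p)
	fmt.Println(count) // 0: the original variable changed
}
```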

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/defining-structs-in-go"&gt;Defining Structs in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Structs allow storing data from several variables in a single entity with one name. They allow Go developers to describe the world in which a Go program operates. Instead of reasoning about strings describing a &lt;code&gt;Street&lt;/code&gt;, &lt;code&gt;City&lt;/code&gt;, or a &lt;code&gt;PostalCode&lt;/code&gt;, structs allow us to instead talk about an &lt;code&gt;Address&lt;/code&gt;. They also serve as a natural nexus for documentation. Structs can be defined and used in a few different ways, which are discussed in this tutorial.&lt;/p&gt;
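A minimal sketch of the `Address` example described above:

```go
package main

import "fmt"

// Address groups related fields under one name instead of passing
// separate Street, City, and PostalCode strings around.
type Address struct {
	Street     string
	City       string
	PostalCode string
}

func main() {
	home := Address{Street: "101 Shark Row", City: "Atlantis", PostalCode: "00000"}
	fmt.Println(home.City) // fields are accessed with dot notation
}
```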

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/defining-methods-in-go"&gt;Defining Methods in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Methods are Go functions that operate on instances of a specific type. Methods allow you to communicate not only what the data is, but also how that data should be used. Methods are the core concept that makes Go interfaces possible.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-build-and-install-go-programs"&gt;How To Build and Install Go Programs&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In Go, distributing or deploying your application requires you to build your code into a shareable binary executable. To do this, you can use the Go toolchain to build and install your program. In this tutorial, you will use the Go toolchain to run, build, and install a sample Hello, World! program, allowing you to use, distribute, and deploy future applications effectively.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-struct-tags-in-go"&gt;How To Use Struct Tags in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Struct tags are small pieces of metadata attached to fields of a struct that provide instructions to other Go code that works with the struct. When you read information from systems such as databases, or APIs, you can use struct tags to control how this information is assigned to the fields of a struct.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-interfaces-in-go"&gt;How To Use Interfaces in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In this article, we will learn how to compose custom types that have common behaviors, which will allow us to reuse our code. You'll also learn how to implement interfaces for your own custom types that will satisfy interfaces defined from another package.&lt;/p&gt;
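As a minimal sketch of how Go interfaces are satisfied implicitly (the `Shape` and `Rectangle` types are our own illustration):

```go
package main

import "fmt"

// Shape is satisfied implicitly by any type with an Area method;
// no "implements" declaration is needed.
type Shape interface {
	Area() float64
}

type Rectangle struct{ W, H float64 }

func (r Rectangle) Area() float64 { return r.W * r.H }

// describe works with any Shape, so the code is reusable across types.
func describe(s Shape) string {
	return fmt.Sprintf("area %.1f", s.Area())
}

func main() {
	fmt.Println(describe(Rectangle{W: 3, H: 4})) // area 12.0
}
```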

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/building-go-applications-for-different-operating-systems-and-architectures"&gt;Building Go Applications for Different Operating Systems and Architectures&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Go supports cross-platform compiling by building support for multiple platforms directly into the &lt;code&gt;go build&lt;/code&gt; tool. By using the &lt;code&gt;GOOS&lt;/code&gt; and &lt;code&gt;GOARCH&lt;/code&gt; environment variables and build tags, you can control which OS and architecture your final binary is built for. In this tutorial, you will build binaries for multiple operating systems and system architectures on your own system.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/using-ldflags-to-set-version-information-for-go-applications"&gt;Using ldflags to Set Version Information for Go Applications&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In this tutorial, you will use the Go flag &lt;code&gt;-ldflags&lt;/code&gt; to change the value of variables at build time and introduce your own dynamic information into a binary, using a sample application that prints version information to the screen. This flag is passed to the underlying Go toolchain linker, &lt;code&gt;cmd/link&lt;/code&gt;, which allows you to change the values of variables in imported packages at build time from the command line.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-the-flag-package-in-go"&gt;How To Use the Flag Package in Go&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;In this tutorial you'll explore various ways to use the &lt;code&gt;flag&lt;/code&gt; package to build different kinds of command-line utilities. You'll use a flag to control program output, introduce positional arguments where you mix flags and other data, and then implement sub-commands.&lt;/p&gt;
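A minimal sketch of defining and parsing a flag; we use a dedicated `flag.FlagSet` so the parsing logic can be called repeatedly (the function and flag name are our own illustration):

```go
package main

import (
	"flag"
	"fmt"
)

// parseGreeting uses a dedicated FlagSet so it can be called repeatedly
// (the package-level flag.Parse can only consume os.Args once).
func parseGreeting(args []string) (string, error) {
	fs := flag.NewFlagSet("greet", flag.ContinueOnError)
	name := fs.String("name", "World", "who to greet")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	return "Hello, " + *name + "!", nil
}

func main() {
	msg, _ := parseGreeting([]string{"-name", "Sammy"})
	fmt.Println(msg) // Hello, Sammy!
}
```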

</description>
      <category>go</category>
      <category>tutorial</category>
      <category>beginners</category>
      <category>series</category>
    </item>
    <item>
      <title>How To Configure Nginx as a Web Server and Reverse Proxy for Apache on One Ubuntu 18.04 Server</title>
      <dc:creator>DigitalOcean</dc:creator>
      <pubDate>Fri, 13 Jul 2018 21:42:00 +0000</pubDate>
      <link>https://forem.com/digitalocean/how-to-configure-nginx-as-a-web-server-and-reverse-proxy-for-apache-on-one-ubuntu-1804-server-2eib</link>
      <guid>https://forem.com/digitalocean/how-to-configure-nginx-as-a-web-server-and-reverse-proxy-for-apache-on-one-ubuntu-1804-server-2eib</guid>
      <description>&lt;p&gt;&lt;em&gt;By Jesin A&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The author selected the &lt;a href="https://www.brightfunds.org/organizations/electronic-frontier-foundation-inc" rel="noopener noreferrer"&gt;Electronic Frontier Foundation&lt;/a&gt; to receive a donation as part of the &lt;a href="https://www.digitalocean.com/write-for-donations/?utm_source=devto&amp;amp;utm_medium=display&amp;amp;utm_campaign=Devto_2018_Brand" rel="noopener noreferrer"&gt;Write for DOnations&lt;/a&gt; program.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Apache and Nginx are two popular open-source web servers often used with PHP. It can be useful to run both of them on the same virtual machine when hosting multiple websites which have varied requirements. The general solution for running two web servers on a single system is to either use multiple IP addresses or different port numbers.&lt;/p&gt;

&lt;p&gt;Servers which have both IPv4 and IPv6 addresses can be configured to serve Apache sites on one protocol and Nginx sites on the other, but this isn't currently practical, as IPv6 adoption by ISPs is still not widespread. Having a different port number like &lt;code&gt;81&lt;/code&gt; or &lt;code&gt;8080&lt;/code&gt; for the second web server is another solution, but sharing URLs with port numbers (such as &lt;code&gt;http://example.com:81&lt;/code&gt;) isn't always reasonable or ideal.&lt;/p&gt;

&lt;p&gt;In this tutorial you'll configure Nginx as both a web server and as a reverse proxy for Apache – all on a single server.&lt;/p&gt;

&lt;p&gt;Depending on the web application, code changes might be required to make it work correctly behind a reverse proxy, especially when SSL sites are configured. To avoid this, you will install an Apache module called &lt;code&gt;mod_rpaf&lt;/code&gt; which rewrites certain environment variables so it appears that Apache is directly handling requests from web clients.&lt;/p&gt;

&lt;p&gt;We will host four domain names on one server. Two will be served by Nginx: &lt;code&gt;example.com&lt;/code&gt; (the default virtual host) and &lt;code&gt;sample.org&lt;/code&gt;. The remaining two, &lt;code&gt;foobar.net&lt;/code&gt; and &lt;code&gt;test.io&lt;/code&gt;, will be served by Apache. We'll also configure Apache to serve PHP applications using PHP-FPM, which offers better performance than &lt;code&gt;mod_php&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To complete this tutorial, you'll need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A new Ubuntu 18.04 server configured by following the &lt;a href="https://dev.to/maestromac/initial-server-setup-with-ubuntu-1804-45if-temp-slug-1879821"&gt;Initial Server Setup with Ubuntu 18.04&lt;/a&gt;, with a sudo non-root user and a firewall.&lt;/li&gt;
&lt;li&gt;Four fully-qualified domain names configured to point to your server's IP address. See Step 3 of &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean?utm_source=devto&amp;amp;utm_medium=display&amp;amp;utm_campaign=Devto_2018_Brand" rel="noopener noreferrer"&gt;How To Set Up a Host Name with DigitalOcean&lt;/a&gt; for an example of how to do this. If you host your domains' DNS elsewhere, you should create appropriate A records there instead.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1 — Installing Apache and PHP-FPM
&lt;/h2&gt;

&lt;p&gt;Let's start by installing Apache and PHP-FPM.&lt;/p&gt;

&lt;p&gt;In addition to Apache and PHP-FPM, we will also install the PHP FastCGI Apache module, &lt;code&gt;libapache2-mod-fastcgi&lt;/code&gt;, to support FastCGI web applications.&lt;/p&gt;

&lt;p&gt;First, update your package list to ensure you have the latest packages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, install the Apache and PHP-FPM packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install apache2 php-fpm

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The FastCGI Apache module isn't available in Ubuntu's repository, so download it from &lt;a href="https://kernel.org" rel="noopener noreferrer"&gt;kernel.org&lt;/a&gt; and install it using the &lt;code&gt;dpkg&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
wget https://mirrors.edge.kernel.org/ubuntu/pool/multiverse/liba/libapache-mod-fastcgi/libapache2-mod-fastcgi_2.4.7~0910052141-1.2_amd64.deb

sudo dpkg -i libapache2-mod-fastcgi_2.4.7~0910052141-1.2_amd64.deb

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's change Apache's default configuration to use PHP-FPM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2 — Configuring Apache and PHP-FPM
&lt;/h2&gt;

&lt;p&gt;In this step we will change Apache's port number to &lt;code&gt;8080&lt;/code&gt; and configure it to work with PHP-FPM using the &lt;code&gt;mod_fastcgi&lt;/code&gt; module. Rename Apache's &lt;code&gt;ports.conf&lt;/code&gt; configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv /etc/apache2/ports.conf /etc/apache2/ports.conf.default

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a new &lt;code&gt;ports.conf&lt;/code&gt; file with the port set to &lt;code&gt;8080&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Listen 8080" | sudo tee /etc/apache2/ports.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Web servers are generally set to listen on &lt;code&gt;127.0.0.1:8080&lt;/code&gt; when configuring a reverse proxy but doing so would set the value of PHP's environment variable &lt;strong&gt;SERVER_ADDR&lt;/strong&gt; to the loopback IP address instead of the server's public IP. Our aim is to set up Apache in such a way that its websites do not see a reverse proxy in front of it. So, we will configure it to listen on &lt;code&gt;8080&lt;/code&gt; on all IP addresses.&lt;/p&gt;

&lt;p&gt;Next we'll create a virtual host file for Apache. The &lt;code&gt;&amp;lt;VirtualHost&amp;gt;&lt;/code&gt; directive in this file will be set to serve sites only on port &lt;code&gt;8080&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Disable the default virtual host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo a2dissite 000-default

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create a new virtual host file, using the existing default site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/001-default.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now open the new configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/apache2/sites-available/001-default.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the listening port to &lt;code&gt;8080&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;/etc/apache2/sites-available/001-default.conf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;VirtualHost *:8080&amp;gt;
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
&amp;lt;/VirtualHost&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file and activate the new configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo a2ensite 001-default

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then reload Apache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload apache2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that Apache is now listening on &lt;code&gt;8080&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo netstat -tlpn

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should look like the following example, with &lt;code&gt;apache2&lt;/code&gt; listening on &lt;code&gt;8080&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OutputActive Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1086/sshd
tcp6 0 0 :::8080 :::* LISTEN 4678/apache2
tcp6 0 0 :::22 :::* LISTEN 1086/sshd

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you verify that Apache is listening on the correct port, you can configure support for PHP and FastCGI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3 — Configuring Apache to Use mod_fastcgi
&lt;/h2&gt;

&lt;p&gt;Apache serves PHP pages using &lt;code&gt;mod_php&lt;/code&gt; by default, but it requires additional configuration to work with PHP-FPM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are trying this tutorial on an existing installation of LAMP with mod_php, disable it first with &lt;code&gt;sudo a2dismod php7.2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We will be adding a configuration block for &lt;code&gt;mod_fastcgi&lt;/code&gt; which depends on &lt;code&gt;mod_action&lt;/code&gt;. &lt;code&gt;mod_action&lt;/code&gt; is disabled by default, so we first need to enable it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo a2enmod actions

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rename the existing FastCGI configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv /etc/apache2/mods-enabled/fastcgi.conf /etc/apache2/mods-enabled/fastcgi.conf.default

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a new configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/apache2/mods-enabled/fastcgi.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following directives to the file to pass requests for &lt;code&gt;.php&lt;/code&gt; files to the PHP-FPM UNIX socket:&lt;/p&gt;

&lt;p&gt;/etc/apache2/mods-enabled/fastcgi.conf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;IfModule mod_fastcgi.c&amp;gt;
  AddHandler fastcgi-script .fcgi
  FastCgiIpcDir /var/lib/apache2/fastcgi
  AddType application/x-httpd-fastphp .php
  Action application/x-httpd-fastphp /php-fcgi
  Alias /php-fcgi /usr/lib/cgi-bin/php-fcgi
  FastCgiExternalServer /usr/lib/cgi-bin/php-fcgi -socket /run/php/php7.2-fpm.sock -pass-header Authorization
  &amp;lt;Directory /usr/lib/cgi-bin&amp;gt;
    Require all granted
  &amp;lt;/Directory&amp;gt;
&amp;lt;/IfModule&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the changes and do a configuration test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apachectl -t

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reload Apache if &lt;strong&gt;Syntax OK&lt;/strong&gt; is displayed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload apache2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see the warning &lt;code&gt;Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message.&lt;/code&gt;, you can safely ignore it for now. We'll configure server names later.&lt;/p&gt;

&lt;p&gt;Now let's make sure we can serve PHP from Apache.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4 — Verifying PHP Functionality
&lt;/h2&gt;

&lt;p&gt;Let's make sure that PHP works by creating a &lt;code&gt;phpinfo()&lt;/code&gt; file and accessing it from a web browser.&lt;/p&gt;

&lt;p&gt;Create the file &lt;code&gt;/var/www/html/info.php&lt;/code&gt; which contains a call to the &lt;code&gt;phpinfo&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "&amp;lt;?php phpinfo(); ?&amp;gt;" | sudo tee /var/www/html/info.php

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see the file in a browser, go to &lt;code&gt;http://your_server_ip:8080/info.php&lt;/code&gt;. This will give you a list of the configuration settings PHP is using. You'll see output similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2FqQcGNe8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2FqQcGNe8.png" alt="phpinfo Server API"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2FeBuDnVU.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2FeBuDnVU.png" alt="phpinfo PHP Variables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the top of the page, check that &lt;strong&gt;Server API&lt;/strong&gt; says &lt;strong&gt;FPM/FastCGI&lt;/strong&gt;. About two-thirds of the way down the page, the &lt;strong&gt;PHP Variables&lt;/strong&gt; section will tell you the &lt;strong&gt;SERVER_SOFTWARE&lt;/strong&gt; is Apache on Ubuntu. These confirm that &lt;code&gt;mod_fastcgi&lt;/code&gt; is active and Apache is using PHP-FPM to process PHP files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5 — Creating Virtual Hosts for Apache
&lt;/h2&gt;

&lt;p&gt;Let's create Apache virtual host files for the domains &lt;code&gt;foobar.net&lt;/code&gt; and &lt;code&gt;test.io&lt;/code&gt;. To do that, we'll first create document root directories for both sites and place some default files in those directories so we can easily test our configuration.&lt;/p&gt;

&lt;p&gt;First, create the document root directories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -v /var/www/foobar.net /var/www/test.io

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create an &lt;code&gt;index&lt;/code&gt; file for each site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
echo "&amp;lt;h1 style='color: green;'&amp;gt;Foo Bar&amp;lt;/h1&amp;gt;" | sudo tee /var/www/foobar.net/index.html

echo "&amp;lt;h1 style='color: red;'&amp;gt;Test IO&amp;lt;/h1&amp;gt;" | sudo tee /var/www/test.io/index.html

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create a &lt;code&gt;phpinfo()&lt;/code&gt; file for each site so we can test that PHP is configured properly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
echo "&amp;lt;?php phpinfo(); ?&amp;gt;" | sudo tee /var/www/foobar.net/info.php

echo "&amp;lt;?php phpinfo(); ?&amp;gt;" | sudo tee /var/www/test.io/info.php

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now create the virtual host file for the &lt;code&gt;foobar.net&lt;/code&gt; domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/apache2/sites-available/foobar.net.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following code to the file to define the host:&lt;/p&gt;

&lt;p&gt;/etc/apache2/sites-available/foobar.net.conf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    &amp;lt;VirtualHost *:8080&amp;gt;
        ServerName foobar.net
        ServerAlias www.foobar.net
        DocumentRoot /var/www/foobar.net
        &amp;lt;Directory /var/www/foobar.net&amp;gt;
            AllowOverride All
        &amp;lt;/Directory&amp;gt;
    &amp;lt;/VirtualHost&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The line &lt;code&gt;AllowOverride All&lt;/code&gt; enables &lt;code&gt;.htaccess&lt;/code&gt; support.&lt;/p&gt;
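To illustrate what that enables: a per-site `.htaccess` file can then override configuration without touching the virtual host. A hypothetical example (this file is not part of the tutorial, and the rewrite rule assumes `mod_rewrite` is enabled with `sudo a2enmod rewrite`):

```apache
# Hypothetical /var/www/foobar.net/.htaccess -- illustrative only.
RewriteEngine On
# Send requests for paths that aren't real files or directories to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [L]
```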

&lt;p&gt;These are only the most basic directives. For a complete guide on setting up virtual hosts in Apache, see &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-apache-virtual-hosts-on-ubuntu-16-04?utm_source=devto&amp;amp;utm_medium=display&amp;amp;utm_campaign=Devto_2018_Brand" rel="noopener noreferrer"&gt;How To Set Up Apache Virtual Hosts on Ubuntu 16.04&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Save and close the file. Then create a similar configuration for &lt;code&gt;test.io&lt;/code&gt;. First create the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/apache2/sites-available/test.io.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add the configuration to the file:&lt;/p&gt;

&lt;p&gt;/etc/apache2/sites-available/test.io.conf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    &amp;lt;VirtualHost *:8080&amp;gt;
        ServerName test.io
        ServerAlias www.test.io
        DocumentRoot /var/www/test.io
        &amp;lt;Directory /var/www/test.io&amp;gt;
            AllowOverride All
        &amp;lt;/Directory&amp;gt;
    &amp;lt;/VirtualHost&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file and exit the editor.&lt;/p&gt;

&lt;p&gt;Now that both Apache virtual hosts are set up, enable the sites using the &lt;code&gt;a2ensite&lt;/code&gt; command. This creates a symbolic link to the virtual host file in the &lt;code&gt;sites-enabled&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo a2ensite foobar.net

sudo a2ensite test.io

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
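Under the hood, `a2ensite` does little more than create that symlink. A sketch of the effect, using a scratch directory in place of `/etc/apache2` so it runs without root:

```shell
# Scratch directory standing in for /etc/apache2, so this runs without root.
apache=$(mktemp -d)
mkdir -p "$apache/sites-available" "$apache/sites-enabled"

# A placeholder virtual host file, like foobar.net.conf
printf '<VirtualHost *:8080>\nServerName foobar.net\n</VirtualHost>\n' \
    > "$apache/sites-available/foobar.net.conf"

# This is essentially what `a2ensite foobar.net` does:
ln -s "$apache/sites-available/foobar.net.conf" \
      "$apache/sites-enabled/foobar.net.conf"

# The enabled site is just a symlink back into sites-available
ls -l "$apache/sites-enabled"
```

Disabling a site with `a2dissite` removes the link again, leaving the original file in `sites-available` untouched.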



&lt;p&gt;Check Apache for configuration errors again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apachectl -t

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see &lt;strong&gt;Syntax OK&lt;/strong&gt; displayed if there are no errors. If you see anything else, review the configuration and try again.&lt;/p&gt;

&lt;p&gt;Reload Apache to apply the changes once your configuration is error-free:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload apache2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm the sites are working, open &lt;code&gt;http://foobar.net:8080&lt;/code&gt; and &lt;code&gt;http://test.io:8080&lt;/code&gt; in your browser and verify that each site displays its &lt;strong&gt;index.html&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;You'll see the following results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2F2y1R8Zd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2F2y1R8Zd.png" alt="foobar.net index page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2Fwr1pzEj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2Fwr1pzEj.png" alt="test.io index page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, ensure that PHP is working by accessing the &lt;strong&gt;info.php&lt;/strong&gt; files for each site. Visit &lt;code&gt;http://foobar.net:8080/info.php&lt;/code&gt; and &lt;code&gt;http://test.io:8080/info.php&lt;/code&gt; in your browser.&lt;/p&gt;

&lt;p&gt;You'll see the same PHP configuration information on each site as you saw in Step 4.&lt;/p&gt;

&lt;p&gt;We now have two websites hosted on Apache at port &lt;code&gt;8080&lt;/code&gt;. Let's configure Nginx next.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6 — Installing and Configuring Nginx
&lt;/h2&gt;

&lt;p&gt;In this step we'll install Nginx and configure the domains &lt;code&gt;example.com&lt;/code&gt; and &lt;code&gt;sample.org&lt;/code&gt; as Nginx's virtual hosts. For a complete guide on setting up virtual hosts in Nginx, see &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-18-04#step-5-%E2%80%93-setting-up-server-blocks-(recommended)" rel="noopener noreferrer"&gt;How To Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu 18.04&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Install Nginx using the package manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then remove the default virtual host's symlink since we won't be using it any more:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm /etc/nginx/sites-enabled/default

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll create our own default site later (&lt;code&gt;example.com&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Now we'll create virtual hosts for Nginx using the same procedure we used for Apache. First create document root directories for both the websites:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -v /usr/share/nginx/example.com /usr/share/nginx/sample.org

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll keep the Nginx websites in &lt;code&gt;/usr/share/nginx&lt;/code&gt;, which is where Nginx expects them by default. You could put them under &lt;code&gt;/var/www/html&lt;/code&gt; with the Apache sites, but keeping them separate helps you associate sites with Nginx.&lt;/p&gt;

&lt;p&gt;As you did with Apache's virtual hosts, create &lt;code&gt;index&lt;/code&gt; and &lt;code&gt;phpinfo()&lt;/code&gt; files for testing after setup is complete:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
echo "&amp;lt;h1 style='color: green;'&amp;gt;Example.com&amp;lt;/h1&amp;gt;" | sudo tee /usr/share/nginx/example.com/index.html

echo "&amp;lt;h1 style='color: red;'&amp;gt;Sample.org&amp;lt;/h1&amp;gt;" | sudo tee /usr/share/nginx/sample.org/index.html

echo "&amp;lt;?php phpinfo(); ?&amp;gt;" | sudo tee /usr/share/nginx/example.com/info.php

echo "&amp;lt;?php phpinfo(); ?&amp;gt;" | sudo tee /usr/share/nginx/sample.org/info.php

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now create a virtual host file for the domain &lt;code&gt;example.com&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/nginx/sites-available/example.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nginx calls the &lt;code&gt;server {...}&lt;/code&gt; sections of a configuration file &lt;strong&gt;server blocks&lt;/strong&gt;. Create a server block for the primary virtual host, &lt;code&gt;example.com&lt;/code&gt;. The &lt;code&gt;default_server&lt;/code&gt; directive makes this the default virtual host, which handles HTTP requests that do not match any other virtual host.&lt;/p&gt;

&lt;p&gt;/etc/nginx/sites-available/example.com&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80 default_server;

    root /usr/share/nginx/example.com;
    index index.php index.html index.htm;

    server_name example.com www.example.com;
    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        include snippets/fastcgi-php.conf;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file. Now create a virtual host file for Nginx's second domain, &lt;code&gt;sample.org&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano etc/nginx/sites-available/sample.org

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following to the file:&lt;/p&gt;

&lt;p&gt;/etc/nginx/sites-available/sample.org&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    root /usr/share/nginx/sample.org;
    index index.php index.html index.htm;

    server_name sample.org www.sample.org;
    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        include snippets/fastcgi-php.conf;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;p&gt;Then enable both sites by creating symbolic links to the &lt;code&gt;sites-enabled&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com

sudo ln -s /etc/nginx/sites-available/sample.org /etc/nginx/sites-enabled/sample.org

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then test the Nginx configuration to ensure there are no configuration issues:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -t

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then reload Nginx if there are no errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now access the &lt;code&gt;phpinfo()&lt;/code&gt; file of your Nginx virtual hosts in a web browser by visiting &lt;a href="http://example.com/info.php" rel="noopener noreferrer"&gt;http://example.com/info.php&lt;/a&gt; and &lt;a href="http://sample.org/info.php" rel="noopener noreferrer"&gt;http://sample.org/info.php&lt;/a&gt;. Look under the PHP Variables section again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2F1FZeLUe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2F1FZeLUe.png" alt="Nginx PHP Variables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;["SERVER_SOFTWARE"]&lt;/strong&gt; should say &lt;code&gt;nginx&lt;/code&gt;, indicating that the files were directly served by Nginx. &lt;strong&gt;["DOCUMENT_ROOT"]&lt;/strong&gt; should point to the directory you created earlier in this step for each Nginx site.&lt;/p&gt;

&lt;p&gt;At this point, we have installed Nginx and created two virtual hosts. Next we will configure Nginx to proxy requests meant for domains hosted on Apache.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7 — Configuring Nginx for Apache's Virtual Hosts
&lt;/h2&gt;

&lt;p&gt;Let's create an additional Nginx virtual host with multiple domain names in the &lt;code&gt;server_name&lt;/code&gt; directives. Requests for these domain names will be proxied to Apache.&lt;/p&gt;

&lt;p&gt;Create a new Nginx virtual host file to forward requests to Apache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/nginx/sites-available/apache

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following code block, which specifies the names of both Apache virtual host domains and proxies requests for them to Apache. Remember to replace &lt;code&gt;your_server_ip&lt;/code&gt; in &lt;code&gt;proxy_pass&lt;/code&gt; with your server's public IP address:&lt;/p&gt;

&lt;p&gt;/etc/nginx/sites-available/apache&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name foobar.net www.foobar.net test.io www.test.io;

    location / {
        proxy_pass http://your_server_ip:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file and enable this new virtual host by creating a symbolic link:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ln -s /etc/nginx/sites-available/apache /etc/nginx/sites-enabled/apache

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the configuration to ensure there are no errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -t

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there are no errors, reload Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access &lt;code&gt;http://foobar.net/info.php&lt;/code&gt; in your browser. Scroll down to the &lt;strong&gt;PHP Variables&lt;/strong&gt; section and check the values displayed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2F1XQi5kl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2F1XQi5kl.png" alt="phpinfo of Apache via Nginx"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The variables &lt;strong&gt;SERVER_SOFTWARE&lt;/strong&gt; and &lt;strong&gt;DOCUMENT_ROOT&lt;/strong&gt; confirm that this request was handled by Apache. The variables &lt;strong&gt;HTTP_X_REAL_IP&lt;/strong&gt; and &lt;strong&gt;HTTP_X_FORWARDED_FOR&lt;/strong&gt; were added by Nginx and should show the public IP address of the computer you're using to access the URL.&lt;/p&gt;
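The `$proxy_add_x_forwarded_for` variable appends the connecting client's address to any `X-Forwarded-For` header that already exists, so each proxy in a chain contributes one entry. A small shell sketch of that append rule (an illustration of the behavior, not Nginx code; the IPs are documentation examples):

```shell
# Mimic Nginx's $proxy_add_x_forwarded_for: append the client IP to an
# existing X-Forwarded-For value, or start a new one if there is none.
add_x_forwarded_for() {
    existing=$1
    client=$2
    if [ -n "$existing" ]; then
        echo "$existing, $client"
    else
        echo "$client"
    fi
}

# First proxy in the chain: the header starts with the client's address
add_x_forwarded_for ""            "203.0.113.5"   # -> 203.0.113.5
# A second proxy would append the address it sees connecting to it
add_x_forwarded_for "203.0.113.5" "198.51.100.7"  # -> 203.0.113.5, 198.51.100.7
```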

&lt;p&gt;We have successfully set up Nginx to proxy requests for specific domains to Apache. Next, let's configure Apache to set the &lt;code&gt;REMOTE_ADDR&lt;/code&gt; variable as if it were handling these requests directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8 — Installing and Configuring mod_rpaf
&lt;/h2&gt;

&lt;p&gt;In this step you'll install an Apache module called &lt;code&gt;mod_rpaf&lt;/code&gt; which rewrites the values of &lt;strong&gt;REMOTE_ADDR&lt;/strong&gt;, &lt;strong&gt;HTTPS&lt;/strong&gt;, and &lt;strong&gt;HTTP_PORT&lt;/strong&gt; based on the values provided by a reverse proxy. Without this module, some PHP applications would require code changes to work seamlessly from behind a proxy. The module is available in Ubuntu's repository as &lt;code&gt;libapache2-mod-rpaf&lt;/code&gt;, but that version is outdated and doesn't support certain configuration directives, so we will install it from source instead.&lt;/p&gt;

&lt;p&gt;Install the packages needed to build the module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install unzip build-essential apache2-dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download the latest stable release from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/gnif/mod_rpaf/archive/stable.zip

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Extract the downloaded file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unzip stable.zip

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change into the new directory containing the files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd mod_rpaf-stable

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compile and install the module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
make

sudo make install

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a file in the &lt;code&gt;mods-available&lt;/code&gt; directory which will load the &lt;code&gt;rpaf&lt;/code&gt; module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/apache2/mods-available/rpaf.load

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following code to the file to load the module:&lt;/p&gt;

&lt;p&gt;/etc/apache2/mods-available/rpaf.load&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf.so

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file and exit the editor.&lt;/p&gt;

&lt;p&gt;Create another file in this directory called &lt;code&gt;rpaf.conf&lt;/code&gt; which will contain the configuration directives for &lt;code&gt;mod_rpaf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/apache2/mods-available/rpaf.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following code block to configure &lt;code&gt;mod_rpaf&lt;/code&gt;, making sure to specify the IP address of your server:&lt;/p&gt;

&lt;p&gt;/etc/apache2/mods-available/rpaf.conf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    &amp;lt;IfModule mod_rpaf.c&amp;gt;
        RPAF_Enable On
        RPAF_Header X-Real-Ip
        RPAF_ProxyIPs your_server_ip 
        RPAF_SetHostName On
        RPAF_SetHTTPS On
        RPAF_SetPort On
    &amp;lt;/IfModule&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a brief description of each directive. See the &lt;code&gt;mod_rpaf&lt;/code&gt; &lt;a href="https://github.com/gnif/mod_rpaf/blob/stable/README.md#configuration-directives" rel="noopener noreferrer"&gt;README&lt;/a&gt; file for more information.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RPAF_Header&lt;/strong&gt; - The header to use for the client's real IP address.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPAF_ProxyIPs&lt;/strong&gt; - The proxy IP to adjust HTTP requests for.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPAF_SetHostName&lt;/strong&gt; - Updates the vhost name so &lt;code&gt;ServerName&lt;/code&gt; and &lt;code&gt;ServerAlias&lt;/code&gt; work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPAF_SetHTTPS&lt;/strong&gt; - Sets the &lt;code&gt;HTTPS&lt;/code&gt; environment variable based on the value contained in &lt;code&gt;X-Forwarded-Proto&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPAF_SetPort&lt;/strong&gt; - Sets the &lt;code&gt;SERVER_PORT&lt;/code&gt; environment variable. Useful when Apache is behind an SSL proxy.&lt;/li&gt;
&lt;/ul&gt;
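`RPAF_ProxyIPs` takes a space-separated list, so you can trust more than one proxy address. A hedged example — the loopback address and subnet here are illustrative, and you should check the mod_rpaf README for the exact syntax your build supports:

```apache
# Trust the server's own public IP plus loopback and a private subnet
RPAF_ProxyIPs your_server_ip 127.0.0.1 10.0.0.0/24
```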

&lt;p&gt;Save &lt;code&gt;rpaf.conf&lt;/code&gt; and enable the module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo a2enmod rpaf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates symbolic links of the files &lt;code&gt;rpaf.load&lt;/code&gt; and &lt;code&gt;rpaf.conf&lt;/code&gt; in the &lt;code&gt;mods-enabled&lt;/code&gt; directory. Now do a configuration test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apachectl -t

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reload Apache if there are no errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload apache2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access the &lt;code&gt;phpinfo()&lt;/code&gt; pages &lt;code&gt;http://foobar.net/info.php&lt;/code&gt; and &lt;code&gt;http://test.io/info.php&lt;/code&gt; in your browser and check the &lt;strong&gt;PHP Variables&lt;/strong&gt; section. The &lt;strong&gt;REMOTE_ADDR&lt;/strong&gt; variable will now show your local computer's public IP address instead of the server's.&lt;/p&gt;

&lt;p&gt;Now let's set up TLS/SSL encryption for each site.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 9 — Setting Up HTTPS Websites with Let's Encrypt (Optional)
&lt;/h2&gt;

&lt;p&gt;In this step we will configure TLS/SSL certificates for both domains hosted on Apache. We'll obtain the certificates through &lt;a href="https://letsencrypt.org" rel="noopener noreferrer"&gt;Let's Encrypt&lt;/a&gt;. Nginx supports SSL termination, so we can set up SSL without modifying Apache's configuration files. The &lt;code&gt;mod_rpaf&lt;/code&gt; module ensures the required environment variables are set on Apache to make applications work seamlessly behind an SSL reverse proxy.&lt;/p&gt;

&lt;p&gt;First we will separate the &lt;code&gt;server {...}&lt;/code&gt; blocks of the two domains so that each can have its own SSL certificate. Open the file &lt;code&gt;/etc/nginx/sites-available/apache&lt;/code&gt; in your editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/nginx/sites-available/apache

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modify the file so that it looks like this, with &lt;code&gt;foobar.net&lt;/code&gt; and &lt;code&gt;test.io&lt;/code&gt; in their own &lt;code&gt;server&lt;/code&gt; blocks:&lt;/p&gt;

&lt;p&gt;/etc/nginx/sites-available/apache&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    server {
        listen 80;
        server_name foobar.net www.foobar.net;

        location / {
            proxy_pass http://your_server_ip:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    server {
        listen 80;
        server_name test.io www.test.io;

        location / {
            proxy_pass http://your_server_ip:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll use &lt;a href="https://certbot.eff.org" rel="noopener noreferrer"&gt;Certbot&lt;/a&gt; to generate our TLS/SSL certificates. Its Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary.&lt;/p&gt;

&lt;p&gt;First, add the official Certbot repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo add-apt-repository ppa:certbot/certbot

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Press &lt;code&gt;ENTER&lt;/code&gt; when prompted to confirm you want to add the new repository. Then update the package list to pick up the new repository's package information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then install Certbot's Nginx package with &lt;code&gt;apt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install python-certbot-nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it's installed, use the &lt;code&gt;certbot&lt;/code&gt; command to generate the certificates for &lt;code&gt;foobar.net&lt;/code&gt; and &lt;code&gt;www.foobar.net&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo certbot --nginx -d foobar.net -d www.foobar.net

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command tells Certbot to use the &lt;code&gt;nginx&lt;/code&gt; plugin, using &lt;code&gt;-d&lt;/code&gt; to specify the names we'd like the certificate to be valid for.&lt;/p&gt;

&lt;p&gt;If this is your first time running &lt;code&gt;certbot&lt;/code&gt;, you will be prompted to enter an email address and agree to the terms of service. After doing so, &lt;code&gt;certbot&lt;/code&gt; will communicate with the Let's Encrypt server, then run a challenge to verify that you control the domain you're requesting a certificate for.&lt;/p&gt;

&lt;p&gt;Next, Certbot will ask how you'd like to configure your HTTPS settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OutputPlease choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select your choice, then press &lt;code&gt;ENTER&lt;/code&gt;. The configuration will be updated, and Nginx will reload to pick up the new settings.&lt;/p&gt;

&lt;p&gt;Now execute the command for the second domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo certbot --nginx -d test.io -d www.test.io

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access one of Apache's domains in your browser using the &lt;code&gt;https://&lt;/code&gt; prefix; visit &lt;code&gt;https://foobar.net/info.php&lt;/code&gt; and you'll see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2FKK6AmWV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fnginx_apache_ubuntu_1804%2FKK6AmWV.png" alt="phpinfo ssl"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Look in the &lt;strong&gt;PHP Variables&lt;/strong&gt; section. The variable &lt;strong&gt;SERVER_PORT&lt;/strong&gt; has been set to &lt;strong&gt;443&lt;/strong&gt; and &lt;strong&gt;HTTPS&lt;/strong&gt; is set to &lt;strong&gt;on&lt;/strong&gt;, as though Apache were accessed directly over HTTPS. With these variables set, PHP applications do not have to be specially configured to work behind a reverse proxy.&lt;/p&gt;

&lt;p&gt;Now let's disable direct access to Apache.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 10 — Blocking Direct Access to Apache (Optional)
&lt;/h2&gt;

&lt;p&gt;Since Apache listens on port &lt;code&gt;8080&lt;/code&gt; on the public IP address, it is reachable by everyone. Block direct access by adding the following iptables command to your firewall rule set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo iptables -I INPUT -p tcp --dport 8080 ! -s your_server_ip -j REJECT --reject-with tcp-reset

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be sure to replace &lt;code&gt;your_server_ip&lt;/code&gt; with your server's actual IP address. Once port &lt;code&gt;8080&lt;/code&gt; is blocked in your firewall, test that Apache is unreachable on it. Open your web browser and try accessing one of Apache's domain names on port &lt;code&gt;8080&lt;/code&gt;. For example: &lt;a href="http://example.com:8080" rel="noopener noreferrer"&gt;http://example.com:8080&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The browser should display an "Unable to connect" or "Webpage is not available" error message. With the IPtables &lt;code&gt;tcp-reset&lt;/code&gt; option in place, an outsider would see no difference between port &lt;code&gt;8080&lt;/code&gt; and a port that doesn't have any service on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; IPtables rules do not survive a system reboot by default. There are multiple ways to preserve IPtables rules, but the easiest is to use &lt;code&gt;iptables-persistent&lt;/code&gt; in Ubuntu's repository. Explore &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-iptables-on-ubuntu-14-04" rel="noopener noreferrer"&gt;this article&lt;/a&gt; to learn more about how to configure IPTables.&lt;/p&gt;

&lt;p&gt;Now let's configure Nginx to serve static files for the Apache sites.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 11 — Serving Static Files Using Nginx (Optional)
&lt;/h2&gt;

&lt;p&gt;When Nginx proxies requests for Apache's domains, it forwards every request for those domains to Apache. Nginx is faster than Apache at serving static files like images, JavaScript, and style sheets, so let's configure Nginx's &lt;code&gt;apache&lt;/code&gt; virtual host file to serve static files directly and pass PHP requests on to Apache.&lt;/p&gt;
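The routing decision comes down to the request URI's extension: anything ending in `.php` is proxied, everything else is served from disk. A toy sketch of that match (an illustration of the rule, not how Nginx evaluates locations internally):

```shell
# Toy classifier mirroring the two location blocks in the config:
# URIs ending in .php are proxied to Apache; everything else is
# served from disk by Nginx.
route() {
    case $1 in
        *.php) echo "proxy to Apache" ;;
        *)     echo "served by Nginx" ;;
    esac
}

route /info.php       # -> proxy to Apache
route /css/style.css  # -> served by Nginx
```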

&lt;p&gt;Open the file &lt;code&gt;/etc/nginx/sites-available/apache&lt;/code&gt; in your editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/nginx/sites-available/apache

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll need to add two additional &lt;code&gt;location&lt;/code&gt; blocks to each server block, as well as modify the existing &lt;code&gt;location&lt;/code&gt; sections. In addition, you'll need to tell Nginx where to find the static files for each site.&lt;/p&gt;

&lt;p&gt;If you've decided not to use SSL and TLS certificates, modify your file so it looks like this:&lt;/p&gt;

&lt;p&gt;/etc/nginx/sites-available/apache&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name test.io www.test.io;
    root /var/www/test.io;
    index index.php index.htm index.html;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        proxy_pass http://your_server_ip:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~ /\.ht {
        deny all;
    }
}

server {
    listen 80;
    server_name foobar.net www.foobar.net;
    root /var/www/foobar.net;
    index index.php index.htm index.html;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        proxy_pass http://your_server_ip:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~ /\.ht {
        deny all;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you also want HTTPS to be available, use the following configuration instead:&lt;/p&gt;

&lt;p&gt;/etc/nginx/sites-available/apache&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name test.io www.test.io;
    root /var/www/test.io;
    index index.php index.htm index.html;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        proxy_pass http://your_server_ip:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~ /\.ht {
        deny all;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/test.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/test.io/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    listen 80;
    server_name foobar.net www.foobar.net;
    root /var/www/foobar.net;
    index index.php index.htm index.html;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        proxy_pass http://your_server_ip:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~ /\.ht {
        deny all;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/foobar.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/foobar.net/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;try_files&lt;/code&gt; directive makes Nginx look for the requested file in the document root and serve it directly. If the request has a &lt;code&gt;.php&lt;/code&gt; extension, it is passed to Apache. If the file is not found in the document root, the request falls back to &lt;code&gt;/index.php&lt;/code&gt; and is also passed to Apache, so application features like permalinks work without problems.&lt;/p&gt;
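&lt;p&gt;To make the request flow concrete, here is the same &lt;code&gt;location&lt;/code&gt; logic from the configuration above, annotated step by step. This is a sketch for illustration only, not additional configuration to add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. A request for /logo.png matches "location /".
#    try_files checks /var/www/test.io/logo.png, then /logo.png/,
#    and Nginx serves the file itself if it exists; otherwise the
#    request is rewritten to the /index.php fallback.
location / {
    try_files $uri $uri/ /index.php;
}

# 2. A request ending in .php (including the /index.php fallback
#    above) matches this regex block and is proxied to Apache
#    listening on port 8080.
location ~ \.php$ {
    proxy_pass http://your_server_ip:8080;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;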

&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; The &lt;code&gt;location ~ /\.ht&lt;/code&gt; block is very important; it prevents Nginx from serving the contents of Apache configuration files like &lt;code&gt;.htaccess&lt;/code&gt; and &lt;code&gt;.htpasswd&lt;/code&gt;, which contain sensitive information.&lt;/p&gt;

&lt;p&gt;Save the file and perform a configuration test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -t

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reload Nginx if the test succeeds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service nginx reload

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify things are working, you can examine Apache's log files in &lt;code&gt;/var/log/apache2&lt;/code&gt; and see the &lt;code&gt;GET&lt;/code&gt; requests for the &lt;code&gt;info.php&lt;/code&gt; files of &lt;code&gt;test.io&lt;/code&gt; and &lt;code&gt;foobar.net&lt;/code&gt;. Use the &lt;code&gt;tail&lt;/code&gt; command to see the last few lines of the file, and use the &lt;code&gt;-f&lt;/code&gt; switch to watch the file for changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tail -f /var/log/apache2/other_vhosts_access.log

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now visit &lt;code&gt;http://test.io/info.php&lt;/code&gt; in your browser and then look at the output from the log. You'll see that Apache is indeed replying:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output test.io:80 your_server_ip - - [01/Jul/2016:18:18:34 -0400] "GET /info.php HTTP/1.0" 200 20414 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then visit the &lt;code&gt;index.html&lt;/code&gt; page for each site. You won't see any new log entries from Apache, because Nginx is serving those files itself.&lt;/p&gt;

&lt;p&gt;When you're done observing the log file, press &lt;code&gt;CTRL+C&lt;/code&gt; to stop tailing it.&lt;/p&gt;

&lt;p&gt;With this setup, Apache will not be able to restrict access to static files. Access control for static files would need to be configured in Nginx's &lt;code&gt;apache&lt;/code&gt; virtual host file, but that's beyond the scope of this tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You now have one Ubuntu server with Nginx serving &lt;code&gt;example.com&lt;/code&gt; and &lt;code&gt;sample.org&lt;/code&gt;, along with Apache serving &lt;code&gt;foobar.net&lt;/code&gt; and &lt;code&gt;test.io&lt;/code&gt;. Though Nginx is acting as a reverse proxy for Apache, Nginx's proxy service is transparent, and connections to Apache's domains appear to be served directly from Apache itself. You can use this method to serve secure and static sites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build, test, and deploy something new on DigitalOcean - the all-in-one cloud platform developers and their teams love. Get started with a free $100 account credit for new users: &lt;a href="http://do.co/devto" rel="noopener noreferrer"&gt;do.co/devto&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a rel="license noopener noreferrer" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"&gt;&lt;img alt="Creative Commons License" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.creativecommons.org%2Fl%2Fby-nc-sa%2F4.0%2F88x31.png"&gt;&lt;/a&gt;&lt;br&gt;This work is licensed under a &lt;a rel="license noopener noreferrer" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"&gt;Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>php</category>
      <category>webdev</category>
      <category>nginx</category>
      <category>apache</category>
    </item>
  </channel>
</rss>
