<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: 赵文博</title>
    <description>The latest articles on Forem by 赵文博 (@stringzwb).</description>
    <link>https://forem.com/stringzwb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3771059%2Fa9880bc7-449b-4219-afbc-e1c1454c583b.jpg</url>
      <title>Forem: 赵文博</title>
      <link>https://forem.com/stringzwb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/stringzwb"/>
    <language>en</language>
    <item>
      <title>It’s been a while since my last post. I finally built my first agent skill.</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Fri, 06 Mar 2026 06:49:10 +0000</pubDate>
      <link>https://forem.com/stringzwb/its-been-a-while-since-my-last-post-i-finally-built-my-first-agent-skill-35em</link>
      <guid>https://forem.com/stringzwb/its-been-a-while-since-my-last-post-i-finally-built-my-first-agent-skill-35em</guid>
      <description>&lt;h1&gt;
  
  
  Requirement Clarifier
&lt;/h1&gt;

&lt;p&gt;This skill is for requirement elicitation and clarification.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goal
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Identify unclear or missing parts in the user's request.&lt;/li&gt;
&lt;li&gt;Ask focused follow-up questions step by step.&lt;/li&gt;
&lt;li&gt;Produce a complete requirement markdown document.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use
&lt;/h2&gt;

&lt;p&gt;Use this skill when the user:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides only a rough idea without complete details.&lt;/li&gt;
&lt;li&gt;Requests implementation but the scope is still vague.&lt;/li&gt;
&lt;li&gt;Needs a structured requirement document before execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Required Document Structure
&lt;/h2&gt;

&lt;p&gt;The final markdown document MUST contain these sections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;Background&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Requirement Details&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Examples&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Terminology&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Edge Conditions&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Notes and Risks&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Workflow
&lt;/h2&gt;

&lt;p&gt;Follow this exact sequence.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mode 1: Clarification Mode (Default)
&lt;/h3&gt;

&lt;p&gt;When this skill is triggered, ALWAYS start in clarification mode.&lt;/p&gt;

&lt;p&gt;The first response MUST contain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A concise understanding summary based on the user's raw request.&lt;/li&gt;
&lt;li&gt;A six-section skeleton with placeholders (&lt;code&gt;[To be confirmed]&lt;/code&gt;, &lt;code&gt;[Information missing, need clarification]&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;A numbered follow-up question list.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Do not output a full completed requirement document in this mode.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Build Initial Skeleton
&lt;/h3&gt;

&lt;p&gt;Based on the user's original message, create only the six-section skeleton.&lt;/p&gt;

&lt;p&gt;Then ask:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;This is my understanding based on your original request. Please confirm whether it is correct. If something is incorrect, please indicate which parts need to be modified. I will first enter clarification mode instead of generating the final document.&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Clarification Loop (Mandatory)
&lt;/h3&gt;

&lt;p&gt;From the skeleton and the user's replies, identify all ambiguous or uncertain points and ask targeted follow-up questions.&lt;/p&gt;

&lt;p&gt;Rules for questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask only high-value questions that reduce implementation ambiguity.&lt;/li&gt;
&lt;li&gt;Prefer specific choices instead of open-ended questions.&lt;/li&gt;
&lt;li&gt;Group related points, but keep each question easy to answer.&lt;/li&gt;
&lt;li&gt;Each round should normally ask 3–7 questions (unless only 1–2 critical gaps remain).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After each user reply:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the internal draft.&lt;/li&gt;
&lt;li&gt;Re-check for remaining uncertainty.&lt;/li&gt;
&lt;li&gt;Continue asking until all critical ambiguity is removed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Mode 2: Document Output Mode (After Confirmation)
&lt;/h3&gt;

&lt;p&gt;Generate the full requirement markdown document ONLY when one of the following is true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user explicitly asks for the final document (e.g., "generate the final document", "output the complete requirement document").&lt;/li&gt;
&lt;li&gt;The stop criteria are met and the user confirms no further changes are needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Stop Criteria
&lt;/h2&gt;

&lt;p&gt;Stop asking questions only when all of the following are true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The scope is explicit.&lt;/li&gt;
&lt;li&gt;Inputs and outputs are explicit.&lt;/li&gt;
&lt;li&gt;Edge or critical conditions are explicit.&lt;/li&gt;
&lt;li&gt;Risks and constraints are explicit.&lt;/li&gt;
&lt;li&gt;No section contains unresolved placeholder tags.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Output Rules
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The default output is clarification Q&amp;amp;A, not the final document.&lt;/li&gt;
&lt;li&gt;During clarification mode, output: &lt;code&gt;Current Understanding Summary&lt;/code&gt; + &lt;code&gt;Draft Skeleton&lt;/code&gt; + &lt;code&gt;Follow-up Questions&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Keep wording precise and implementation-friendly.&lt;/li&gt;
&lt;li&gt;Do not start coding in this skill; focus only on requirement clarity.&lt;/li&gt;
&lt;li&gt;MUST NOT generate a complete final requirement document in the first response if key information is missing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Output Template
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
markdown
# Requirement Document

## Background
...

## Requirement Details
...

## Examples
...

## Terminology
...

## Edge Conditions
...

## Notes and Risks
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>productivity</category>
      <category>showdev</category>
    </item>
    <item>
      <title>First try with OpenCode (Windows): finished the task while I played a game</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Sun, 22 Feb 2026 16:35:12 +0000</pubDate>
      <link>https://forem.com/stringzwb/first-try-with-opencode-windows-finished-the-task-while-i-played-a-game-5ei8</link>
      <guid>https://forem.com/stringzwb/first-try-with-opencode-windows-finished-the-task-while-i-played-a-game-5ei8</guid>
      <description>&lt;ol&gt;
&lt;li&gt;Installation
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;E:&lt;span class="se"&gt;\\&lt;/span&gt;ai-tools&amp;gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://opencode.ai/install | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation, it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgzt293ep854n3uiou4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgzt293ep854n3uiou4d.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Set the environment variable as prompted&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My install path is: &lt;code&gt;C:\\Users\\m1501\\.opencode\\bin&lt;/code&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;In CMD it becomes:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jr1k6t3m907ao7mobp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jr1k6t3m907ao7mobp5.png" alt=" " width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Run &lt;code&gt;/models&lt;/code&gt; to pick a model. I tried using my existing ChatGPT Plus.&lt;/li&gt;
&lt;li&gt;Install the code plugin&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I tried many approaches on Windows, and this one worked.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This plugin upgrades OpenCode into a “multi-AI team”. Installation depends on your subscription.&lt;/p&gt;

&lt;p&gt;First, confirm your subscription (used for configuration):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude Pro/Max: use &lt;code&gt;--claude=yes&lt;/code&gt; or &lt;code&gt;--claude=max20&lt;/code&gt; (max20 is advanced mode).&lt;/li&gt;
&lt;li&gt;ChatGPT Plus: use &lt;code&gt;--chatgpt=yes&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Gemini: use &lt;code&gt;--gemini=yes&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For any subscription you don’t have, set its flag to &lt;code&gt;no&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Install Bun if you don’t have it (Oh My OpenCode requires it):&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://bun.sh/install | bash
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Restart the terminal.&lt;/p&gt;

&lt;p&gt;One-click install Oh My OpenCode:&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;bunx oh-my-opencode &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-tui&lt;/span&gt; &lt;span class="nt"&gt;--claude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt; &lt;span class="nt"&gt;--chatgpt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt; &lt;span class="nt"&gt;--gemini&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;ul&gt;
&lt;li&gt;Replace the flags based on what you have. For example, only Claude:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;bunx oh-my-opencode install --no-tui --claude=yes --chatgpt=no --gemini=no
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;If Bun has issues, use &lt;code&gt;npx&lt;/code&gt; (requires Node.js):&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;npx oh-my-opencode install --no-tui --claude=yes ...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Verify:&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; ~/.config/opencode/opencode.json | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"oh-my-opencode"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;If you see the plugin name, installation succeeded.&lt;/p&gt;

&lt;p&gt;Complete authentication (if you use paid models):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;opencode auth login&lt;/code&gt;, choose a provider, then complete OAuth login in the browser.&lt;/li&gt;
&lt;li&gt;For Gemini/ChatGPT, you may need an extra plugin (for example &lt;code&gt;opencode-antigravity-auth&lt;/code&gt;). Add it to the &lt;code&gt;plugin&lt;/code&gt; array in &lt;code&gt;opencode.json&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;At this point, my OpenCode configuration was basically done.&lt;/p&gt;

&lt;p&gt;Then I tried running it in IntelliJ IDEA:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z131rcq936ratmfblkt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z131rcq936ratmfblkt.png" alt=" " width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wrote my requirements as a document for it. While I played a game of LoL, it finished everything.&lt;/p&gt;

&lt;p&gt;That’s insane.&lt;/p&gt;

&lt;p&gt;References&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://zhuanlan.zhihu.com/p/1994091348333197175#:%7E:text=%E4%B8%8D%E9%9C%80%E8%A6%81%E6%B3%A8%E5%86%8C%E8%B4%A6%E5%8F%B7%EF%BC%8C%E4%B8%8D,%E5%85%B3%E9%94%AE%E7%9A%84%E4%B8%80%E6%9D%A1%E6%98%AF%EF%BC%9A%E5%85%8D%E8%B4%B9%E3%80%82" rel="noopener noreferrer"&gt;https://zhuanlan.zhihu.com/p/1994091348333197175#:~:text=%E4%B8%8D%E9%9C%80%E8%A6%81%E6%B3%A8%E5%86%8C%E8%B4%A6%E5%8F%B7%EF%BC%8C%E4%B8%8D,%E5%85%B3%E9%94%AE%E7%9A%84%E4%B8%80%E6%9D%A1%E6%98%AF%EF%BC%9A%E5%85%8D%E8%B4%B9%E3%80%82&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://opencode.ai/docs/zh-cn/cli/#agent" rel="noopener noreferrer"&gt;https://opencode.ai/docs/zh-cn/cli/#agent&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opencode</category>
      <category>programming</category>
    </item>
    <item>
      <title>Java dominates China. What backend are YOU using?</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Sun, 22 Feb 2026 09:04:40 +0000</pubDate>
      <link>https://forem.com/stringzwb/java-dominates-china-what-backend-are-you-using-1hac</link>
      <guid>https://forem.com/stringzwb/java-dominates-china-what-backend-are-you-using-1hac</guid>
      <description>&lt;p&gt;Hey everyone 👋,&lt;/p&gt;

&lt;p&gt;I’m a developer based in China. Over here, Java (especially Spring Boot) is the absolute king of backend development. It’s the default choice for almost everything, from massive tech giants to traditional enterprises. The talent pool and ecosystem are entirely built around it.&lt;/p&gt;

&lt;p&gt;However, browsing global platforms like this one, I notice the conversation is completely different. I see a huge mix of Node.js, Python, Go, Rust, and C#, while Java doesn't seem to get as much of the spotlight among indie hackers and newer startups.&lt;/p&gt;

&lt;p&gt;This stark contrast makes me really curious about the global landscape! So I'd love to ask:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;What is your primary backend language/framework right now?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why did you choose it over others?&lt;/strong&gt; (Dev speed, raw performance, ecosystem, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What kind of environment are you building for?&lt;/strong&gt; (Indie project, early-stage startup, or large enterprise?)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Looking forward to hearing about your stacks. Let's discuss! 🚀&lt;/p&gt;

</description>
      <category>java</category>
      <category>discuss</category>
      <category>programming</category>
      <category>backend</category>
    </item>
    <item>
      <title>Mastering CAP &amp; BASE Theory with Gemini: From Distributed Principles to Nacos &amp; Redis Reality</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Fri, 20 Feb 2026 16:12:01 +0000</pubDate>
      <link>https://forem.com/stringzwb/mastering-cap-base-theory-with-gemini-from-distributed-principles-to-nacos-redis-reality-ep8</link>
      <guid>https://forem.com/stringzwb/mastering-cap-base-theory-with-gemini-from-distributed-principles-to-nacos-redis-reality-ep8</guid>
      <description>&lt;h1&gt;
  
  
  Core concepts
&lt;/h1&gt;

&lt;p&gt;The &lt;strong&gt;CAP theorem&lt;/strong&gt; (also known as Brewer’s Theorem) is a cornerstone for understanding distributed system design. It states that a distributed system cannot perfectly guarantee all three of the following properties at the same time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Consistency (C)&lt;/strong&gt;: All nodes see the same data at the same time. For example, checking inventory at any branch returns exactly the same result.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Availability (A)&lt;/strong&gt;: Every request receives a response (success or failure), meaning the system is always “online”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partition Tolerance (P)&lt;/strong&gt;: The system continues to operate even when network failures split nodes into isolated groups (a partition).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In real networks, &lt;strong&gt;partitions (P)&lt;/strong&gt; are inevitable, so a distributed system typically must trade off between &lt;strong&gt;CP&lt;/strong&gt; and &lt;strong&gt;AP&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbcsdejwyfnudl7s8yh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbcsdejwyfnudl7s8yh6.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  CP mode
&lt;/h1&gt;

&lt;p&gt;In a &lt;strong&gt;CP&lt;/strong&gt; system, if the network fails, the system chooses to &lt;strong&gt;stop serving requests&lt;/strong&gt; in order to keep data &lt;strong&gt;strictly consistent&lt;/strong&gt; across nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Idea&lt;/strong&gt;: It is better to return no result than to return incorrect or stale data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: &lt;strong&gt;Bank transfers&lt;/strong&gt;. If two servers are disconnected, the system must lock the account to prevent withdrawing money in two places and corrupting the data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt;: The system becomes unavailable during the fault.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  AP mode
&lt;/h1&gt;

&lt;p&gt;In an &lt;strong&gt;AP&lt;/strong&gt; system, even if the network is partitioned, the system still prioritizes &lt;strong&gt;responding to requests&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Idea&lt;/strong&gt;: Data might not be the latest, or different users might see different results, but users can still use the service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: &lt;strong&gt;Social media likes&lt;/strong&gt;. If you like a photo during a network partition, your friend might see it a few seconds later. That is acceptable. What matters is that the service does not become unusable because of network instability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt;: Sacrifices immediate consistency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Case study: Nacos CP vs AP
&lt;/h1&gt;

&lt;h3&gt;
  
  
  1. Ephemeral instances vs persistent instances
&lt;/h3&gt;

&lt;p&gt;This is the key logic behind how Nacos differentiates AP and CP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AP mode (default)&lt;/strong&gt;: Used for &lt;strong&gt;ephemeral instances&lt;/strong&gt; (Ephemeral Nodes). After registration, instances keep a heartbeat with the server. During a partition, Nacos prioritizes service availability, and short-term inconsistency is acceptable. This uses Nacos’s &lt;strong&gt;Distro protocol&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CP mode&lt;/strong&gt;: Used for &lt;strong&gt;persistent instances&lt;/strong&gt; (Persistent Nodes). Instance metadata is persisted to disk and requires strong consistency across nodes. If consensus cannot be reached due to a network failure, the system sacrifices availability. This uses a consensus protocol based on the &lt;strong&gt;Raft algorithm&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
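For readers using Spring Cloud Alibaba as the Nacos client, the choice between the two modes is a one-line setting: instances register as ephemeral (AP, Distro) by default, and flipping the flag registers them as persistent (CP, Raft). A sketch of the relevant application.yml fragment (the server address is a placeholder):

```yaml
spring:
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848   # placeholder address
        ephemeral: false              # false = persistent instance (CP); default true = ephemeral (AP)
```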

&lt;h3&gt;
  
  
  2. Why does Nacos support both?
&lt;/h3&gt;

&lt;p&gt;This maps back to the trade-off question:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service discovery&lt;/strong&gt; usually leans toward &lt;strong&gt;AP&lt;/strong&gt;. If network jitter makes the registry unavailable, all microservices may fail. That impact is too large. Small delays can often be masked by client retries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration management&lt;/strong&gt; can lean toward &lt;strong&gt;CP&lt;/strong&gt;. If a critical database password or rate-limit setting changes, it is often desirable for all nodes to immediately and consistently receive the exact latest value.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb44zym6txa0pyva1jfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb44zym6txa0pyva1jfd.png" alt=" " width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  BASE theory
&lt;/h1&gt;

&lt;p&gt;Once you understand the CAP trade-off between &lt;strong&gt;Consistency (C)&lt;/strong&gt; and &lt;strong&gt;Availability (A)&lt;/strong&gt;, &lt;strong&gt;BASE theory&lt;/strong&gt; can be viewed as a practical compromise for distributed systems.&lt;/p&gt;

&lt;p&gt;The core idea is: since strong consistency is hard to achieve, we accept a more flexible approach so the system remains usable most of the time.&lt;/p&gt;

&lt;p&gt;BASE is an acronym for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Basically Available (BA)&lt;/strong&gt;: During failures, the system may lose some availability, but should not completely crash. For example, a page that normally loads in 0.1 seconds might take 2 seconds, or some non-core functionality may be temporarily disabled to protect core services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Soft State (S)&lt;/strong&gt;: The system’s data is allowed to be in an intermediate state. Replication between nodes may be delayed, and this is considered acceptable for overall availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eventually Consistent (E)&lt;/strong&gt;: The most important point. The system does not require data to be consistent at all times, but it guarantees that after some time, all replicas will converge to the same final state.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Case study: Redis Cluster
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Redis Cluster&lt;/strong&gt; (cluster mode) is generally designed to be &lt;strong&gt;AP (Availability + Partition Tolerance)&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  BASE in practice in Redis Cluster
&lt;/h3&gt;

&lt;p&gt;Redis Cluster does not pursue strong consistency. Instead, it achieves &lt;strong&gt;eventual consistency&lt;/strong&gt; via:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Basically Available (BA)&lt;/strong&gt;: Redis Cluster splits data into 16,384 hash slots. If a few nodes go down, the rest of the cluster can keep serving the slots it still covers (this requires &lt;code&gt;cluster-require-full-coverage no&lt;/code&gt;; with the default &lt;code&gt;yes&lt;/code&gt;, the cluster stops accepting queries when any slot is uncovered).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Soft State (S)&lt;/strong&gt;: After a master writes data, it returns success to the client immediately, then replicates to slaves &lt;strong&gt;asynchronously&lt;/strong&gt;. This implies the master and slaves can be inconsistent at any given moment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eventually Consistent (E)&lt;/strong&gt;: Under normal conditions, slaves catch up with the master within milliseconds.&lt;/li&gt;
&lt;/ol&gt;
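The 16,384-slot mapping in point 1 is easy to reproduce: Redis assigns each key the slot CRC16(key) mod 16384, where CRC16 is the CRC-16/XMODEM variant. A minimal sketch (written with arithmetic equivalents of the usual bit operators):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM, the checksum Redis Cluster uses for key slots."""
    crc = 0
    for b in data:
        crc ^= b * 256                   # same as: crc ^= b shifted left by 8
        for _ in range(8):
            top_bit_set = crc >= 0x8000  # test the high bit before shifting
            crc = (crc * 2) % 0x10000    # shift left by 1, masked to 16 bits
            if top_bit_set:
                crc ^= 0x1021            # the XMODEM polynomial
    return crc

def key_slot(key: str) -> int:
    # Redis maps every key to one of 16,384 slots.
    return crc16_xmodem(key.encode()) % 16384
```

The published check value for CRC-16/XMODEM is 0x31C3 for the input string "123456789", which is a quick way to sanity-check an implementation. (The real Redis additionally hashes only the part of the key inside {braces}, its hash-tag rule, which this sketch omits.)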




&lt;h3&gt;
  
  
  Why Redis Cluster is not strongly consistent
&lt;/h3&gt;

&lt;p&gt;Consider this scenario:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 1&lt;/strong&gt;: You write &lt;code&gt;set key1 value1&lt;/code&gt; to master node A.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 2&lt;/strong&gt;: Node A writes to memory and immediately replies “OK”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 3&lt;/strong&gt;: Before A replicates the data to slave A1, A suddenly loses power and goes down.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 4&lt;/strong&gt;: The cluster promotes slave A1 to become the new master.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result&lt;/strong&gt;: The &lt;code&gt;value1&lt;/code&gt; you just wrote is lost.&lt;/li&gt;
&lt;/ul&gt;
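The failure sequence above can be sketched as a toy simulation (a hypothetical Node class, not the real Redis replication code):

```python
class Node:
    """Toy in-memory node; real Redis masters replicate to slaves asynchronously."""
    def __init__(self):
        self.data = {}
        self.alive = True

master, replica = Node(), Node()

# Steps 1-2: the client writes to master A, which acks from memory immediately.
master.data["key1"] = "value1"    # client receives "OK"

# Step 3: master A loses power before the async replication to slave A1 runs.
master.alive = False              # "value1" never reached the replica

# Step 4: the cluster promotes slave A1 to be the new master.
new_master = replica

# Result: the acknowledged write is gone.
write_survived = "key1" in new_master.data
```

An acknowledged write vanishing on failover is precisely why Redis Cluster is classified as AP rather than CP.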

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyhang8dsh4twrjc845k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyhang8dsh4twrjc845k.png" alt=" " width="800" height="756"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>computerscience</category>
      <category>distributedsystems</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Build a Web Product from 0 to 1 (2): Documentation and Tech Stack Choices</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Mon, 16 Feb 2026 13:03:27 +0000</pubDate>
      <link>https://forem.com/stringzwb/build-a-web-product-from-0-to-1-2-documentation-and-tech-stack-choices-1hk8</link>
      <guid>https://forem.com/stringzwb/build-a-web-product-from-0-to-1-2-documentation-and-tech-stack-choices-1hk8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Last time, we talked about how to clarify requirements and turn a vague idea into something concrete. That is the first step.&lt;br&gt;
But as someone asked in the comments: &lt;strong&gt;“Ideas and execution are two different things. What if things change once you actually build it?”&lt;/strong&gt;&lt;br&gt;
That is true. In real execution, you will always spot logical gaps, or suddenly come up with a better idea. So requirement refinement is always &lt;strong&gt;work in progress&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  1. From “minimal” to “richer”: how requirements become more complex
&lt;/h3&gt;

&lt;p&gt;In the very first version, I only thought about the simplest model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Project → Requirements → Progress&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I felt that was enough. If it runs, it is fine. But once I started coding, I realized it was far from enough.&lt;/p&gt;

&lt;p&gt;Since this is my own knowledge base / management system, why not collect other scattered needs as well? For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Link and document collection&lt;/strong&gt;: where do useful references go?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bug list&lt;/strong&gt;: how do I avoid falling into the same trap next time?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tech stack tracking&lt;/strong&gt;: what versions does the project use? I need a “ledger”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a result, the final feature set became much larger than the initial draft.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfbb7yqcqve6ttfptzse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfbb7yqcqve6ttfptzse.png" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Documentation: dealing with constant changes
&lt;/h3&gt;

&lt;p&gt;Since requirements keep changing (which is normal for personal projects), &lt;strong&gt;documentation is necessary&lt;/strong&gt; to avoid chaos.&lt;/p&gt;

&lt;p&gt;At the moment, I use two main ways to record requirements: one high-level, one detailed.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.1 Spreadsheet (a backlog, like a Notion requirement pool)
&lt;/h4&gt;

&lt;p&gt;I keep a spreadsheet and put every requirement into a list. This lets me see, at a glance, how much work is still pending.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;th&gt;Priority&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Link collection&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;To do&lt;/td&gt;
&lt;td&gt;Collect and manage project-related reference links and documents&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bug tracking&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;To do&lt;/td&gt;
&lt;td&gt;Record and track issues found during development&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Project management&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;To do&lt;/td&gt;
&lt;td&gt;Manage project basics, progress, and milestones&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tech stack management&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;To do&lt;/td&gt;
&lt;td&gt;Record and maintain the technologies and versions used by the project&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Even for a personal project, I strongly recommend keeping the status up to date. Seeing “To do” become “Done” is one of the best sources of motivation.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.2 Feature details (my “personal PRD”)
&lt;/h4&gt;

&lt;p&gt;This is basically what product managers call a PRD. But since I am the product manager for myself, I do not need to make it overly formal.&lt;/p&gt;

&lt;p&gt;I usually follow an outline like this. In practice, &lt;strong&gt;I merge requirement notes and development notes into one document&lt;/strong&gt; to keep things simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Background&lt;/strong&gt;: Why do I want to build this? What problem does it solve?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;References&lt;/strong&gt;: How do others do it? (Screenshots + feature notes.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key points&lt;/strong&gt;: What should the feature look like?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Roles and permissions&lt;/strong&gt;: Who can view and who can edit?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database entity design&lt;/strong&gt;: &lt;em&gt;This is development-oriented, but thinking about schema early saves a lot of detours later.&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The biggest benefit of documentation is that it prevents forgetting and reduces mess.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For example, while designing the “project management” module, I suddenly realized a team lead permission needed adjustment. If I only keep it in my head, I will forget it in a few days. But if I record it as a change in the feature details, it stays clear when I implement it.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Tech stack choices: not the most expensive, but the most practical
&lt;/h3&gt;

&lt;p&gt;Honestly, I am not an experienced architect. Since this is my first serious small project, my principles are simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Use what I already know well&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Make it run first&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3.1 Core framework
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend + backend&lt;/strong&gt;: &lt;strong&gt;Vue + Java (Spring Boot)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture&lt;/strong&gt;: the project is small, so I am not chasing microservices or distributed systems. A simple &lt;strong&gt;monolith&lt;/strong&gt; is enough, and deployment is easier.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3.2 The storage dilemma (middleware)
&lt;/h4&gt;

&lt;p&gt;For file storage, I spent some time thinking about options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MinIO&lt;/strong&gt;: self-hosted, data stays under my control. But I need to manage capacity and operations, and migration later can be painful.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud OSS&lt;/strong&gt;: very convenient, but it costs money. Also, there are many stories about unexpected billing or account abuse, which feels risky.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My conclusion: &lt;strong&gt;tech choices are not permanent&lt;/strong&gt;. For now I will stay flexible and decide based on the actual deployment environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.3 Database
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MySQL&lt;/strong&gt;: PostgreSQL is popular lately, and many people are switching. But for me, MySQL is the most familiar and the least risky.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence layer&lt;/strong&gt;: I chose &lt;strong&gt;MyBatis-Plus&lt;/strong&gt; for speed and convenience.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3.4 Embracing AI
&lt;/h4&gt;

&lt;p&gt;Since this is a new project, I want to try something new. I have already integrated &lt;strong&gt;Spring AI&lt;/strong&gt; and combined it with the &lt;strong&gt;GLM-4.5 model&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The early test results look promising. Next, I plan to build some AI-assisted features.&lt;/p&gt;
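
&lt;p&gt;As a rough sketch of what that wiring can look like, here is a minimal configuration using Spring AI's OpenAI-compatible starter. The property names follow that starter, but the base URL and model id are placeholders that should be checked against the Spring AI and GLM documentation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# application.yml -- illustrative only
spring:
  ai:
    openai:
      api-key: ${GLM_API_KEY}   # keep keys in env vars, not source control
      base-url: https://open.bigmodel.cn/api/paas   # assumed GLM-compatible endpoint
      chat:
        options:
          model: glm-4.5        # assumed model id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;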




&lt;h3&gt;
  
  
  4. Final thoughts: why “reinvent the wheel”?
&lt;/h3&gt;

&lt;p&gt;If I used an off-the-shelf backend framework (for example, RuoYi), development would probably be twice as fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But I do not want to do that.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I want my first project to be something I built entirely by myself. That includes basic but tedious parts like login, registration, and permission/authentication. I did not rely on a ready-made framework for those (although I did read a lot and tried to implement reasonable security protections).&lt;/p&gt;

&lt;p&gt;For a beginner’s first project:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Your framework does not need to be advanced, complex, or even the newest.&lt;/strong&gt;&lt;br&gt;
As long as it supports shipping the product, lets you run through the whole process, and helps you learn, it is a good tech stack choice.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>devjournal</category>
      <category>sideprojects</category>
      <category>softwaredevelopment</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Building My Own Web Product from 0 to 1 (1): Defining Requirements &amp; Feature Analysis</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Sat, 14 Feb 2026 14:50:36 +0000</pubDate>
      <link>https://forem.com/stringzwb/building-my-own-web-product-from-0-to-1-1-defining-requirements-feature-analysis-51n4</link>
      <guid>https://forem.com/stringzwb/building-my-own-web-product-from-0-to-1-1-defining-requirements-feature-analysis-51n4</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; I built and deployed a small website as an experiment. The key lesson was learning how to turn a vague idea into clear requirements, and then breaking features down into a simple, structured workflow.&lt;/p&gt;




&lt;p&gt;Hi everyone. This is my first time sharing what I learned from independently building and deploying my own website.&lt;/p&gt;

&lt;p&gt;Although my site has already been deployed online and officially registered (the ICP filing required for sites hosted in China), the features are still incomplete, and it can only be accessed from within China. So I cannot share a public link for you to visit yet.&lt;/p&gt;

&lt;p&gt;Still, I can share my overall thinking, and I hope to learn and exchange ideas with more experienced builders.&lt;/p&gt;

&lt;p&gt;Today’s topic is &lt;strong&gt;requirements analysis&lt;/strong&gt;. When you have an idea, the most important step is turning that idea into &lt;strong&gt;concrete requirements&lt;/strong&gt;. That’s not easy, which is why my first website was also an experimental project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F318vcpiwankwf1t54sjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F318vcpiwankwf1t54sjy.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting from an idea: how can a startup or small team manage projects?
&lt;/h2&gt;

&lt;p&gt;My first thought came from my interest in online note-taking and project management tools.&lt;/p&gt;

&lt;p&gt;In my previous company, large teams often had their own project management systems. But those systems usually required filling in complicated forms and going through complex processes, which made them slow and inefficient.&lt;/p&gt;

&lt;p&gt;Smaller companies often use tools like &lt;strong&gt;ZenTao&lt;/strong&gt;, &lt;strong&gt;Jira&lt;/strong&gt;, or &lt;strong&gt;Feishu&lt;/strong&gt;. These tools are very general-purpose, but many features and workflows are unnecessary.&lt;/p&gt;

&lt;p&gt;Some startups even rely on static documents or online documents to manage projects, but that approach is less automated and can easily become messy and hard to follow.&lt;/p&gt;

&lt;p&gt;So I came up with the idea of building a &lt;strong&gt;simplified project management tool&lt;/strong&gt;. This became the core feature of the website I designed. Only after having this idea could I start planning how to build the site.&lt;/p&gt;

&lt;h2&gt;
  
  
  Turning ideas into requirements: breaking down features
&lt;/h2&gt;

&lt;p&gt;Turning an idea into requirements means breaking it down and refining it. There are many things to consider.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) If other products are complicated, how can my system be simpler?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Remove unnecessary features.&lt;/li&gt;
&lt;li&gt;Avoid complex configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In ZenTao, building product modules and splitting into many team groups might be unnecessary for small companies.&lt;/li&gt;
&lt;li&gt;In Jira, some configurations are very complex.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In many small companies, users might only care about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who the users are&lt;/li&gt;
&lt;li&gt;What progress each person is making&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2) How can the structure be clearer?
&lt;/h3&gt;

&lt;p&gt;For progress reporting, project management can be structured as a simple model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Project → Requirement → Progress&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A minimal workflow could be:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Admins create projects.&lt;/li&gt;
&lt;li&gt;Project owners or requirement owners create requirements.&lt;/li&gt;
&lt;li&gt;Developers only need to report progress.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After defining the key requirements, you can then design the details, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Entity/data model&lt;/li&gt;
&lt;li&gt;API design&lt;/li&gt;
&lt;li&gt;UI flow&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Considering feasibility and risks
&lt;/h2&gt;

&lt;p&gt;To be honest, I did not do enough in this area. This was just a trial project, but I still thought a few things through.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feasibility
&lt;/h3&gt;

&lt;p&gt;The target customers were &lt;strong&gt;small businesses and startups&lt;/strong&gt;, and their budgets are likely limited.&lt;/p&gt;

&lt;p&gt;So I did not invest heavily in technology or resources.&lt;/p&gt;

&lt;p&gt;Since small companies typically have fewer users, the performance requirements are also lower, which keeps infrastructure costs down.&lt;/p&gt;

&lt;h3&gt;
  
  
  Risks
&lt;/h3&gt;

&lt;p&gt;Users might be concerned about &lt;strong&gt;security&lt;/strong&gt;, but I have not thought too much about that yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;In the past, I was purely a technical engineer, so what I share here is just common sense plus personal experience.&lt;/p&gt;

&lt;p&gt;I’m looking forward to discussing with everyone, and I would love to hear your opinions and suggestions.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Stress: Linux Stress Testing Tool</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Fri, 13 Feb 2026 14:29:28 +0000</pubDate>
      <link>https://forem.com/stringzwb/stress-linux-stress-testing-tool-7ho</link>
      <guid>https://forem.com/stringzwb/stress-linux-stress-testing-tool-7ho</guid>
      <description>&lt;p&gt;💡&lt;/p&gt;

&lt;p&gt;This post summarizes how to install and use &lt;strong&gt;stress&lt;/strong&gt;, plus common parameters and practical examples. It is meant as a quick reference for performance testing and bottleneck investigation.&lt;/p&gt;




&lt;h3&gt;
  
  
  Part 1: Overview
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Basic Info
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tool name&lt;/strong&gt;: &lt;code&gt;stress&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What it is&lt;/strong&gt;: a Linux command-line stress testing tool that simulates high load by spawning worker processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What it can stress&lt;/strong&gt;: CPU, memory (VM), disk I/O, and mixed workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://blog.csdn.net/qq_41978931/article/details/150466333" rel="noopener noreferrer"&gt;CSDN: stress usage guide&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Official docs and resources
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Official documentation&lt;/strong&gt;: (to be added)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub repository&lt;/strong&gt;: (to be added)&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Part 2: How it works
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Key features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CPU stress&lt;/strong&gt;: spawn compute-heavy workers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory stress&lt;/strong&gt;: allocate, touch, and optionally keep memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;I/O stress&lt;/strong&gt;: call &lt;code&gt;sync()&lt;/code&gt; to generate disk I/O pressure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mixed stress&lt;/strong&gt;: combine CPU + memory + I/O to approximate real high-load scenarios.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Typical use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validating system stability under pressure&lt;/li&gt;
&lt;li&gt;Comparing performance before and after tuning&lt;/li&gt;
&lt;li&gt;Finding resource bottlenecks with monitoring tools (&lt;code&gt;top&lt;/code&gt;, &lt;code&gt;iostat&lt;/code&gt;, &lt;code&gt;vmstat&lt;/code&gt;, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Core idea
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;stress&lt;/code&gt; launches multiple worker processes. Each worker type corresponds to a resource dimension:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU worker&lt;/li&gt;
&lt;li&gt;VM (memory) worker&lt;/li&gt;
&lt;li&gt;IO worker&lt;/li&gt;
&lt;li&gt;HDD worker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A key parameter, &lt;code&gt;--vm-stride&lt;/code&gt;, changes the memory write stride, which can affect Copy-On-Write behavior and shift CPU time between &lt;strong&gt;user space (us)&lt;/strong&gt; and &lt;strong&gt;kernel space (sy)&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Part 3: Installation and usage
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Requirements
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Linux (CentOS, Ubuntu, etc.)&lt;/li&gt;
&lt;li&gt;Installation permissions (sudo)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Install on CentOS 7 (EPEL)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;epel-release
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;stress
stress &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install on Ubuntu
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;stress
stress &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Basic syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &amp;lt;options&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Part 4: Common options (with examples)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) CPU stress
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-c, --cpu N&lt;/code&gt;: start &lt;code&gt;N&lt;/code&gt; CPU workers.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--backoff N&lt;/code&gt;: delay new forked processes by &lt;code&gt;N&lt;/code&gt; microseconds before they start.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: start 4 CPU workers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;-c&lt;/span&gt; 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Monitoring tip: use &lt;code&gt;top&lt;/code&gt; to observe per-process CPU usage.&lt;/p&gt;

&lt;h4&gt;
  
  
  2) Memory stress
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-m, --vm N&lt;/code&gt;: start &lt;code&gt;N&lt;/code&gt; memory workers.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--vm-bytes B&lt;/code&gt;: memory size per worker (for example &lt;code&gt;300M&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: 2 workers, 300MB each&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;-m&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 300M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Useful memory parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--vm-keep&lt;/code&gt;: keep memory allocated (instead of allocate/free loops).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;--vm&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 300M &lt;span class="nt"&gt;--vm-keep&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--vm-hang N&lt;/code&gt;: sleep &lt;code&gt;N&lt;/code&gt; seconds after allocation before freeing, then repeat.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;--vm&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 300M &lt;span class="nt"&gt;--vm-hang&lt;/span&gt; 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--vm-stride B&lt;/code&gt;: set memory write stride (can change COW frequency and CPU behavior).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;--vm&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 500M &lt;span class="nt"&gt;--vm-stride&lt;/span&gt; 64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;code&gt;--vm-stride&lt;/code&gt; and CPU &lt;code&gt;us&lt;/code&gt; vs &lt;code&gt;sy&lt;/code&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Small stride (for example 64 bytes)&lt;/strong&gt;: denser writes, more frequent COW, often higher &lt;strong&gt;user time (us)&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;--vm&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 500M &lt;span class="nt"&gt;--vm-stride&lt;/span&gt; 64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Large stride (for example 1M)&lt;/strong&gt;: less frequent COW, but may increase kernel memory-management overhead, often higher &lt;strong&gt;system time (sy)&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;--vm&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 500M &lt;span class="nt"&gt;--vm-stride&lt;/span&gt; 1M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default stride&lt;/strong&gt;: roughly 4096 bytes (4KB) if not specified.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;--vm&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 500M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Quick reference:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;code&gt;--vm-stride&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;CPU tendency&lt;/th&gt;
&lt;th&gt;Memory behavior&lt;/th&gt;
&lt;th&gt;When to use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Small (e.g., 64)&lt;/td&gt;
&lt;td&gt;Higher &lt;strong&gt;us&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Frequent COW, dense writes&lt;/td&gt;
&lt;td&gt;Simulate compute-heavy memory operations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Medium (e.g., 4K)&lt;/td&gt;
&lt;td&gt;Balanced&lt;/td&gt;
&lt;td&gt;Default-ish behavior&lt;/td&gt;
&lt;td&gt;General memory stress testing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Large (e.g., 1M)&lt;/td&gt;
&lt;td&gt;Higher &lt;strong&gt;sy&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Less COW, more memory management overhead&lt;/td&gt;
&lt;td&gt;Simulate kernel memory-management pressure&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;COW (Copy-On-Write)&lt;/strong&gt;: pages are copied only when written.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;us/sy&lt;/strong&gt;: CPU time spent in user space vs kernel space.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3) I/O stress
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-i, --io N&lt;/code&gt;: start &lt;code&gt;N&lt;/code&gt; I/O workers. Each calls &lt;code&gt;sync()&lt;/code&gt; to flush buffers to disk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: start 4 I/O workers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;-i&lt;/span&gt; 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Monitoring tip (disk I/O):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;iostat &lt;span class="nt"&gt;-x&lt;/span&gt; 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;%util&lt;/code&gt;: device utilization, near 100% means saturated&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;r/s&lt;/code&gt;, &lt;code&gt;w/s&lt;/code&gt;: reads/writes per second&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rkB/s&lt;/code&gt;, &lt;code&gt;wkB/s&lt;/code&gt;: throughput&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;await&lt;/code&gt;: average I/O latency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Disk-write stress is a separate worker type:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-d, --hdd N&lt;/code&gt;: start &lt;code&gt;N&lt;/code&gt; HDD workers that write temporary files to disk.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--hdd-bytes B&lt;/code&gt;: amount of data each HDD worker writes (only takes effect together with &lt;code&gt;--hdd&lt;/code&gt;).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;-i&lt;/span&gt; 1 &lt;span class="nt"&gt;--hdd-bytes&lt;/span&gt; 10M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4) Mixed workload
&lt;/h4&gt;

&lt;p&gt;Example: CPU + memory + I/O + disk write&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;--cpu&lt;/span&gt; 4 &lt;span class="nt"&gt;--io&lt;/span&gt; 4 &lt;span class="nt"&gt;--vm&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 100M &lt;span class="nt"&gt;--vm-keep&lt;/span&gt; &lt;span class="nt"&gt;--hdd-bytes&lt;/span&gt; 10M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Meaning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4 CPU workers&lt;/li&gt;
&lt;li&gt;4 I/O workers&lt;/li&gt;
&lt;li&gt;2 VM workers (100MB each) and keep the memory&lt;/li&gt;
&lt;li&gt;write 10MB of data for disk pressure&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  5) Other handy options
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-t, --timeout N&lt;/code&gt;: run for &lt;code&gt;N&lt;/code&gt; seconds
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;-c&lt;/span&gt; 4 &lt;span class="nt"&gt;-t&lt;/span&gt; 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-v, --verbose&lt;/code&gt;: verbose output
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;-c&lt;/span&gt; 4 &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-q, --quiet&lt;/code&gt;: quiet mode
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;-c&lt;/span&gt; 4 &lt;span class="nt"&gt;-q&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-n, --dry-run&lt;/code&gt;: print what would run, without actually stressing
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stress &lt;span class="nt"&gt;-c&lt;/span&gt; 4 &lt;span class="nt"&gt;-n&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Part 5: Monitoring recommendations
&lt;/h3&gt;

&lt;p&gt;During stress tests, it is best to watch system metrics in parallel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;top&lt;/code&gt;: CPU, memory, and per-process usage&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;iostat&lt;/code&gt;: disk throughput and latency&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;vmstat&lt;/code&gt;: memory and overall system behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Practical tip: run the stress command and monitoring commands in separate terminals so you can compare load with metric changes.&lt;/p&gt;
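
&lt;p&gt;For example, a time-boxed run with monitoring alongside (the worker counts and intervals here are just illustrative values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Terminal 1: 60 seconds of CPU + memory load
stress --cpu 2 --vm 1 --vm-bytes 256M --timeout 60

# Terminal 2: sample overall system state every 2 seconds
vmstat 2

# Terminal 3: extended disk statistics every 2 seconds
iostat -x 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;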




&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Key takeaways
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;stress&lt;/code&gt; is a simple, fast way to create CPU, memory, and I/O pressure.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--vm-stride&lt;/code&gt; can significantly change memory write patterns and the CPU &lt;code&gt;us/sy&lt;/code&gt; split.&lt;/li&gt;
&lt;li&gt;Stress testing is most valuable when paired with monitoring.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Recommendation
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rating&lt;/strong&gt;: ⭐⭐⭐⭐☆ (4/5)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why&lt;/strong&gt;: easy to use, parameters are straightforward, good for quickly building pressure scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not ideal when&lt;/strong&gt;: you need realistic end-to-end business traffic and request chains (use a dedicated load-testing platform/tool instead).&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blog.csdn.net/qq_41978931/article/details/150466333" rel="noopener noreferrer"&gt;CSDN article&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cli</category>
      <category>linux</category>
      <category>performance</category>
      <category>testing</category>
    </item>
    <item>
      <title>Docker Feature Overview</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Fri, 13 Feb 2026 14:12:30 +0000</pubDate>
      <link>https://forem.com/stringzwb/docker-feature-overview-4lkn</link>
      <guid>https://forem.com/stringzwb/docker-feature-overview-4lkn</guid>
      <description>&lt;h3&gt;
  
  
  Basic Concepts
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) Container
&lt;/h4&gt;

&lt;p&gt;A &lt;strong&gt;container&lt;/strong&gt; is like a lightweight box that bundles everything an application needs to run.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application code&lt;/li&gt;
&lt;li&gt;Runtime (for example Java or Python)&lt;/li&gt;
&lt;li&gt;Dependency libraries&lt;/li&gt;
&lt;li&gt;Basic system tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compared with virtual machines, containers do not carry the overhead of a full guest operating system. They are smaller, start faster, and typically use resources more efficiently.&lt;/p&gt;

&lt;h4&gt;
  
  
  2) Image
&lt;/h4&gt;

&lt;p&gt;An &lt;strong&gt;image&lt;/strong&gt; is the template or blueprint used to create containers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is essentially a snapshot of a filesystem plus some metadata and configuration.&lt;/li&gt;
&lt;li&gt;Images are built in &lt;strong&gt;layers&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;When you run &lt;code&gt;docker run &amp;lt;image&amp;gt;&lt;/code&gt;, Docker creates a container from that image and adds a writable layer on top.&lt;/li&gt;
&lt;/ul&gt;
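
&lt;p&gt;A minimal Dockerfile makes the layering concrete: each instruction below produces one layer, and unchanged layers are reused from cache on rebuild (the base image and file names are just examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dockerfile"&gt;&lt;code&gt;# Each instruction creates an image layer
FROM eclipse-temurin:17-jre           # base image layers
COPY app.jar /app/app.jar             # new layer containing the application
EXPOSE 8080                           # metadata only
CMD ["java", "-jar", "/app/app.jar"]  # default startup command
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;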

&lt;h4&gt;
  
  
  3) Registry
&lt;/h4&gt;

&lt;p&gt;A &lt;strong&gt;registry&lt;/strong&gt; is where images are stored and distributed. Think of it as an “image warehouse”.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public registries&lt;/strong&gt;: Docker Hub, Aliyun Container Registry, and others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private registries&lt;/strong&gt;: company-hosted registries for internal images.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Common operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull an image: &lt;code&gt;docker pull &amp;lt;registry&amp;gt;/&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Push an image: &lt;code&gt;docker push &amp;lt;registry&amp;gt;/&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How containers, images, and registries relate
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxs6dl0k3yrckg6pzpan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxs6dl0k3yrckg6pzpan.png" alt=" " width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1) Image management&lt;/span&gt;
docker images                     &lt;span class="c"&gt;# list local images&lt;/span&gt;
docker pull nginx:latest          &lt;span class="c"&gt;# pull an image from a registry&lt;/span&gt;
docker rmi nginx:latest           &lt;span class="c"&gt;# remove a local image&lt;/span&gt;

&lt;span class="c"&gt;# 2) Container management&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; web nginx    &lt;span class="c"&gt;# start an nginx container named "web" in background&lt;/span&gt;
docker ps                         &lt;span class="c"&gt;# list running containers&lt;/span&gt;
docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;                      &lt;span class="c"&gt;# list all containers (including stopped ones)&lt;/span&gt;
docker stop web                   &lt;span class="c"&gt;# stop the "web" container&lt;/span&gt;
docker start web                  &lt;span class="c"&gt;# start a stopped container&lt;/span&gt;
docker &lt;span class="nb"&gt;rm &lt;/span&gt;web                     &lt;span class="c"&gt;# remove a stopped container&lt;/span&gt;

&lt;span class="c"&gt;# 3) Build an image&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; myapp:1.0 &lt;span class="nb"&gt;.&lt;/span&gt;       &lt;span class="c"&gt;# build an image from Dockerfile in current directory&lt;/span&gt;

&lt;span class="c"&gt;# 4) Logs and monitoring&lt;/span&gt;
docker logs web                   &lt;span class="c"&gt;# view container logs&lt;/span&gt;
docker stats web                  &lt;span class="c"&gt;# realtime resource usage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Docker Networking
&lt;/h3&gt;

&lt;p&gt;Docker creates several default networks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;bridge&lt;/strong&gt;: default network (NAT + virtual bridge)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;host&lt;/strong&gt;: container shares the host network stack&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;none&lt;/strong&gt;: no networking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Common commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List networks&lt;/span&gt;
docker network &lt;span class="nb"&gt;ls&lt;/span&gt;

&lt;span class="c"&gt;# Inspect a network&lt;/span&gt;
docker network inspect bridge

&lt;span class="c"&gt;# Create a custom bridge network&lt;/span&gt;
docker network create &lt;span class="nt"&gt;--driver&lt;/span&gt; bridge my-net

&lt;span class="c"&gt;# Start a container and attach it to the custom network&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; db &lt;span class="nt"&gt;--network&lt;/span&gt; my-net &lt;span class="nt"&gt;-e&lt;/span&gt; MYSQL_ROOT_PASSWORD=secret mysql

&lt;span class="c"&gt;# Connect an already-running container to a network&lt;/span&gt;
docker network connect my-net web

&lt;span class="c"&gt;# Disconnect a container from a network&lt;/span&gt;
docker network disconnect my-net web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why custom networks are useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containers on the same user-defined network can reach each other by &lt;strong&gt;container name&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Better isolation between environments and services.&lt;/li&gt;
&lt;/ul&gt;
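
&lt;p&gt;A quick way to see the name resolution in action (the network and container names are just examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create demo-net
docker run -d --name api --network demo-net nginx

# "api" resolves via Docker's embedded DNS on the user-defined network
docker run --rm --network demo-net curlimages/curl http://api/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;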

&lt;h3&gt;
  
  
  Inspecting Container Details
&lt;/h3&gt;

&lt;p&gt;When you need detailed container configuration (port mappings, volumes, environment variables, etc.), use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker inspect web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is JSON. Useful fields include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;HostConfig.PortBindings&lt;/code&gt;: port mappings&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Mounts&lt;/code&gt;: volume mounts&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Config.Env&lt;/code&gt;: environment variables&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NetworkSettings&lt;/code&gt;: networking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you only want specific parts, combine with &lt;code&gt;jq&lt;/code&gt; (or simple &lt;code&gt;grep&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Only port bindings&lt;/span&gt;
docker inspect web | jq &lt;span class="s1"&gt;'.[0].HostConfig.PortBindings'&lt;/span&gt;

&lt;span class="c"&gt;# Only mounts&lt;/span&gt;
docker inspect web | jq &lt;span class="s1"&gt;'.[0].Mounts'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
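If `jq` is not installed, `docker inspect` can also extract fields directly with a Go template via `--format` (the container name `web` is an example):

```shell
# IP address on the default bridge network
docker inspect --format '{{.NetworkSettings.IPAddress}}' web

# Environment variables, one per line
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' web
```

This avoids a dependency on external tools, at the cost of the slightly terser Go-template syntax.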






&lt;h3&gt;
  
  
  Tips
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Layered images&lt;/strong&gt;: each Dockerfile instruction produces an image layer, and unchanged layers are reused from the build cache, which makes rebuilds faster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data persistence&lt;/strong&gt;: in production, mount databases and logs to the host using &lt;strong&gt;volumes&lt;/strong&gt; so data is not lost when containers are recreated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment isolation&lt;/strong&gt;: custom networks + private registries help isolate dev, test, and prod environments more cleanly.&lt;/li&gt;
&lt;/ul&gt;
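For the data-persistence tip above, a minimal named-volume sketch (the volume name and password are illustrative):

```shell
# Create a named volume managed by Docker
docker volume create mysql-data

# Mount it at MySQL's data directory; the data survives
# stopping, removing, and recreating the container
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql-data:/var/lib/mysql \
  mysql
```
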

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A Comprehensive Guide to Nginx for Developers</title>
      <dc:creator>赵文博</dc:creator>
      <pubDate>Fri, 13 Feb 2026 13:43:47 +0000</pubDate>
      <link>https://forem.com/stringzwb/nginx-summary-english-nk1</link>
      <guid>https://forem.com/stringzwb/nginx-summary-english-nk1</guid>
      <description>&lt;p&gt;👋&lt;/p&gt;

&lt;p&gt;This is my first blog post on &lt;a href="http://dev.to"&gt;&lt;strong&gt;dev.to&lt;/strong&gt;&lt;/a&gt;. I hope you enjoy it.&lt;/p&gt;

&lt;p&gt;⚡&lt;/p&gt;

&lt;p&gt;Nginx is a high-performance, event-driven, lightweight web server and reverse proxy. Thanks to its asynchronous and non-blocking architecture, it can handle a large number of concurrent connections with very low resource usage. Besides serving static assets efficiently, Nginx can route requests to backend services via &lt;code&gt;proxy_pass&lt;/code&gt; and supports multiple load-balancing algorithms, such as round-robin, least connections, and IP hash. It is also commonly used for SSL termination, caching, separating static and dynamic traffic, and basic security hardening, which makes it a key traffic gateway in modern microservice and front-end/back-end separated architectures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Configuration
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;code&gt;http&lt;/code&gt; vs &lt;code&gt;server&lt;/code&gt; vs &lt;code&gt;location&lt;/code&gt;
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Context&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;http&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Global HTTP-level settings (timeouts, logs, compression, cache, etc.)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;http { proxy_read_timeout 300s; gzip on; }&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;server&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A virtual host (a “site”/service). Binds &lt;code&gt;listen&lt;/code&gt; and &lt;code&gt;server_name&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;server { listen 80; server_name example.com; … }&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;location&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;URI prefix or regex matching rules. Defines how that kind of request is handled&lt;/td&gt;
&lt;td&gt;&lt;code&gt;location /static/ { root /var/www; }&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nesting structure&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  server {
    location { … }
    location { … }
  }
  server { … }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Responsibilities&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;http&lt;/code&gt;: framework-level defaults, modules, overall behavior&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;server&lt;/code&gt;: split traffic by domain/port&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;location&lt;/code&gt;: serve static files, reverse proxy, URL rewrite, etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  A minimal example (serving a SPA)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  include       mime.types;
  default_type  application/octet-stream;
  sendfile      on;
  keepalive_timeout  65;

  server {
    listen       80;
    server_name  your.domain.com;    # or an IP

    # Point to your built dist directory
    root   /var/www/vue-app/dist;    # Linux path
    # Windows example: root C:/nginx/html/vue-app/dist;

    index  index.html;

    # Try static first; if not found, fall back to index.html (SPA History mode)
    location / {
      try_files $uri $uri/ /index.html;
    }

    # Optional: cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|svg|woff2?)$ {
      expires 30d;
      add_header Cache-Control "public";
    }

    # Error page
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
      root /var/www/vue-app/dist;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;code&gt;include mime.types&lt;/code&gt; and what it means
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;include&lt;/code&gt; loads an external file (or a set of files) into the current configuration context.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;include mime.types;&lt;/code&gt; tells Nginx to load the &lt;em&gt;file extension ↔ MIME type&lt;/em&gt; mapping table. That way, when Nginx serves static files like &lt;code&gt;.html&lt;/code&gt;, &lt;code&gt;.css&lt;/code&gt;, &lt;code&gt;.js&lt;/code&gt;, &lt;code&gt;.png&lt;/code&gt;, etc., it can automatically set the correct &lt;code&gt;Content-Type&lt;/code&gt; header so browsers interpret assets correctly.&lt;/p&gt;
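For illustration, `mime.types` is itself just a `types` block; a trimmed excerpt looks roughly like this:

```nginx
types {
    text/html                 html htm;
    text/css                  css;
    application/javascript    js;
    image/png                 png;
    image/svg+xml             svg svgz;
}
```

Without this mapping, unmatched files fall back to `default_type` (here `application/octet-stream`), which browsers treat as a download rather than rendering it.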

&lt;h4&gt;
  
  
  What &lt;code&gt;$uri&lt;/code&gt; is (and why it matters for &lt;code&gt;try_files&lt;/code&gt;)
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;$uri&lt;/code&gt; is a built-in Nginx variable. It comes from the request line’s URI part (without the query string) and is normalized by Nginx.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Browser requests: &lt;code&gt;GET /foo/bar.html?abc=123 HTTP/1.1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Nginx sees &lt;code&gt;$request_uri = /foo/bar.html?abc=123&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Nginx strips the query part and normalizes the path, producing &lt;code&gt;$uri = /foo/bar.html&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In &lt;code&gt;try_files $uri $uri/ /index.html;&lt;/code&gt;, Nginx checks the filesystem for the file or directory first. If nothing matches, it falls back to &lt;code&gt;/index.html&lt;/code&gt;. This is why SPA routes still work on refresh under History mode.&lt;/p&gt;

&lt;h3&gt;
  
  
  Frontend-oriented settings (static site)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;root&lt;/code&gt; points to the built &lt;code&gt;dist&lt;/code&gt; directory&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;index&lt;/code&gt; is the default entry file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;try_files&lt;/code&gt; enables SPA routing fallback (History mode)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backend-oriented settings (reverse proxy)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;listen&lt;/code&gt; defines the listening port&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;server_name&lt;/code&gt; defines the domain (or host) to match&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;location&lt;/code&gt; defines request matching rules and the upstream routing logic&lt;/li&gt;
&lt;/ul&gt;
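Putting those three directives together, a minimal reverse-proxy sketch (the domain and backend address are assumptions) might look like:

```nginx
server {
  listen 80;
  server_name api.example.com;

  location /api/ {
    proxy_pass http://127.0.0.1:8080;

    # Pass the original host and client IP on to the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```

The `proxy_set_header` lines matter because, by default, the backend would otherwise see Nginx itself as the client.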

&lt;h4&gt;
  
  
  Location matching order (common rules)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;=&lt;/code&gt; exact match (selected immediately)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;^~&lt;/code&gt; prefix match; if the longest matching prefix carries &lt;code&gt;^~&lt;/code&gt;, regex checks are skipped&lt;/li&gt;
&lt;li&gt;regex matches (&lt;code&gt;~&lt;/code&gt;, &lt;code&gt;~*&lt;/code&gt;) in the order they appear (first match wins)&lt;/li&gt;
&lt;li&gt;otherwise, the longest normal prefix match&lt;/li&gt;
&lt;li&gt;if no location matches at all, Nginx typically returns 404&lt;/li&gt;
&lt;/ul&gt;
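A small sketch of how these rules interact (the paths and handlers are illustrative):

```nginx
server {
  listen 80;

  # 1) Exact match: wins immediately for "/health"
  location = /health { return 200; }

  # 2) ^~ prefix: anything under /static/ is served here,
  #    and the regex locations below are never consulted
  location ^~ /static/ { root /var/www; }

  # 3) Regex: *.png requests outside /static/ land here
  location ~* \.png$ { expires 30d; }

  # 4) Fallback: longest normal prefix match
  location / { try_files $uri $uri/ /index.html; }
}
```

So `/static/logo.png` is handled by rule 2, not the regex, while `/images/logo.png` hits rule 3.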

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsa437oabkj7fonvbdif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsa437oabkj7fonvbdif.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Reverse proxy request flow (high level)
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Client sends a request&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Browser or HTTP client connects to Nginx on the listening port and sends an HTTP request.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Nginx worker accepts the connection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the event-driven model, a worker process &lt;code&gt;accept()&lt;/code&gt;s the connection and parses it into an internal request object.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pick an upstream server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If using &lt;code&gt;upstream&lt;/code&gt;, Nginx selects a backend node based on the chosen algorithm. If &lt;code&gt;proxy_pass&lt;/code&gt; points to a fixed host, it routes to that one.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Connect to upstream&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nginx creates a non-blocking socket and initiates a TCP connection, controlled by &lt;code&gt;proxy_connect_timeout&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Forward the request&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nginx sends the request line, headers, and optional body to the upstream, controlled by &lt;code&gt;proxy_send_timeout&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Read the response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nginx reads the response status line, headers, and body from upstream, controlled by &lt;code&gt;proxy_read_timeout&lt;/code&gt;, and streams or buffers it back to the client.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reuse or close connections&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With keepalive enabled, Nginx can reuse upstream connections to reduce handshake overhead.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Reverse Proxy
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Forward Proxy&lt;/th&gt;
&lt;th&gt;Reverse Proxy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Primary target&lt;/td&gt;
&lt;td&gt;Client&lt;/td&gt;
&lt;td&gt;Server (the website/service)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Traffic direction&lt;/td&gt;
&lt;td&gt;Client → proxy → any external server&lt;/td&gt;
&lt;td&gt;Client → proxy → internal backend servers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Typical use cases&lt;/td&gt;
&lt;td&gt;Bypass restrictions, filtering, client-side caching&lt;/td&gt;
&lt;td&gt;Load balancing, SSL termination, caching, hiding backend topology&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Client configuration&lt;/td&gt;
&lt;td&gt;Client must configure proxy&lt;/td&gt;
&lt;td&gt;Client does not need to know (transparent)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Forward proxy&lt;/strong&gt;: you explicitly choose a proxy to access external sites.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reverse proxy&lt;/strong&gt;: a proxy sits in front of your service and forwards requests to your backend.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Load Balancing
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) Minimal &lt;code&gt;upstream&lt;/code&gt; (round-robin by default)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  upstream backend {
    server 192.168.0.101:8080;
    server 192.168.0.102:8080;
    server 192.168.0.103:8080;
  }

  server {
    listen 80;
    server_name example.com;

    location / {
      proxy_pass http://backend;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Algorithm&lt;/strong&gt;: round-robin&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pros&lt;/strong&gt;: simplest setup&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  2) Weighted round-robin
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  upstream backend {
    server 192.168.0.101:8080 weight=5;
    server 192.168.0.102:8080 weight=3;
    server 192.168.0.103:8080 weight=2;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://backend;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Larger &lt;code&gt;weight&lt;/code&gt; means more traffic.&lt;/li&gt;
&lt;li&gt;Useful when backend nodes have different capacity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3) Least connections
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  upstream backend {
    least_conn;
    server 192.168.0.101:8080;
    server 192.168.0.102:8080;
    server 192.168.0.103:8080;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://backend;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Routes new requests to the node with the fewest active connections.&lt;/li&gt;
&lt;li&gt;Good when request duration varies a lot.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  4) IP hash (session affinity)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  upstream backend {
    ip_hash;
    server 192.168.0.101:8080;
    server 192.168.0.102:8080;
    server 192.168.0.103:8080;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://backend;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Same client IP tends to hit the same backend.&lt;/li&gt;
&lt;li&gt;Very old Nginx versions (before 1.3.2) ignored &lt;code&gt;weight&lt;/code&gt; with &lt;code&gt;ip_hash&lt;/code&gt;; modern versions support it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  5) URI hash (consistent hash)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  upstream backend {
    hash $request_uri consistent;
    server 192.168.0.101:8080;
    server 192.168.0.102:8080;
    server 192.168.0.103:8080;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://backend;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Hashes a variable (like &lt;code&gt;$request_uri&lt;/code&gt;) to pick the node.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;consistent&lt;/code&gt; reduces remapping when nodes change.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  6) Basic failure handling
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  upstream backend {
    server 192.168.0.101:8080 max_fails=3 fail_timeout=30s;
    server 192.168.0.102:8080 max_fails=3 fail_timeout=30s;
    server 192.168.0.103:8080 max_fails=3 fail_timeout=30s;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;max_fails&lt;/code&gt;: failure threshold&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fail_timeout&lt;/code&gt;: temporarily stop sending traffic to that node&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Summary
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Round-robin is the default and easiest.&lt;/li&gt;
&lt;li&gt;Weighted round-robin helps with uneven node capacity.&lt;/li&gt;
&lt;li&gt;Least connections is useful when requests have uneven duration.&lt;/li&gt;
&lt;li&gt;IP hash and URI hash help with affinity.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;max_fails&lt;/code&gt; and &lt;code&gt;fail_timeout&lt;/code&gt; provide basic resilience.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Common Nginx Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test configuration syntax&lt;/span&gt;
nginx &lt;span class="nt"&gt;-t&lt;/span&gt;

&lt;span class="c"&gt;# Start Nginx (if not running as a system service)&lt;/span&gt;
nginx
&lt;span class="c"&gt;# Or specify a config file:&lt;/span&gt;
nginx &lt;span class="nt"&gt;-c&lt;/span&gt; /path/to/nginx.conf

&lt;span class="c"&gt;# Reload gracefully&lt;/span&gt;
nginx &lt;span class="nt"&gt;-s&lt;/span&gt; reload

&lt;span class="c"&gt;# Graceful stop&lt;/span&gt;
nginx &lt;span class="nt"&gt;-s&lt;/span&gt; quit

&lt;span class="c"&gt;# Force stop&lt;/span&gt;
nginx &lt;span class="nt"&gt;-s&lt;/span&gt; stop

&lt;span class="c"&gt;# Reopen log files (useful after log rotation)&lt;/span&gt;
nginx &lt;span class="nt"&gt;-s&lt;/span&gt; reopen

&lt;span class="c"&gt;# Show version&lt;/span&gt;
nginx &lt;span class="nt"&gt;-v&lt;/span&gt;

&lt;span class="c"&gt;# Show version and build options&lt;/span&gt;
nginx &lt;span class="nt"&gt;-V&lt;/span&gt;

&lt;span class="c"&gt;# Show running processes&lt;/span&gt;
ps aux | &lt;span class="nb"&gt;grep &lt;/span&gt;nginx

&lt;span class="c"&gt;# If managed by systemd&lt;/span&gt;
systemctl start nginx
systemctl stop nginx
systemctl reload nginx
systemctl status nginx

&lt;span class="c"&gt;# (Optional) hot upgrade binary without dropping connections&lt;/span&gt;
&lt;span class="nb"&gt;kill&lt;/span&gt; &lt;span class="nt"&gt;-USR2&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /var/run/nginx.pid&lt;span class="sb"&gt;`&lt;/span&gt;

&lt;span class="c"&gt;# Check listening ports&lt;/span&gt;
netstat &lt;span class="nt"&gt;-tulpn&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Timeout Settings
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) Client-side timeouts
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Directive&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;client_header_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Max time to receive request headers from the client. Returns 408 on timeout.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;client_body_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Max time to receive the request body. Returns 408 on timeout.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;send_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;If the client does not read any response data within this time, Nginx closes the connection.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  client_header_timeout 10s;
  client_body_timeout   30s;
  send_timeout          120s;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2) Reverse proxy (&lt;code&gt;proxy_pass&lt;/code&gt;) timeouts
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Directive&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;proxy_connect_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Timeout for establishing TCP connection to upstream (handshake). Returns 504 on timeout.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;proxy_send_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Timeout for sending request to upstream (write). Returns 504 on timeout.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;proxy_read_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Timeout for reading the response from upstream. Returns 504 on timeout.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;proxy_buffering&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;on&lt;/td&gt;
&lt;td&gt;With buffering on, Nginx reads the upstream response into buffers and may send it to the client later. Turning it off streams data to the client as it arrives, which is better for SSE and other long-lived streaming responses.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
  location /api/ {
    proxy_pass            http://backend;
    proxy_connect_timeout 120s;
    proxy_send_timeout    120s;
    proxy_read_timeout    300s;
    proxy_buffering       off;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3) FastCGI / uWSGI / SCGI timeouts
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Module&lt;/th&gt;
&lt;th&gt;Directive&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;FastCGI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;fastcgi_connect_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Connection timeout to FastCGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;fastcgi_send_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Send timeout to FastCGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;fastcgi_read_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Read timeout from FastCGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;uWSGI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;uwsgi_connect_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Connection timeout to uWSGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;uwsgi_send_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Send timeout to uWSGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;uwsgi_read_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Read timeout from uWSGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SCGI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;scgi_connect_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Connection timeout to SCGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;scgi_send_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Send timeout to SCGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;scgi_read_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;td&gt;Read timeout from SCGI server&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;location ~ \.php$ {
  fastcgi_pass            unix:/run/php-fpm.sock;
  fastcgi_connect_timeout 30s;
  fastcgi_send_timeout    180s;
  fastcgi_read_timeout    180s;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4) Stream (TCP/UDP) proxy timeouts
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stream {
  upstream mysql_up {
    server 127.0.0.1:3306;
  }

  server {
    listen 3307;
    proxy_pass            mysql_up;
    proxy_connect_timeout 10s;
    proxy_read_timeout    300s;
    proxy_send_timeout    300s;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Keepalive
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) Client ↔ Nginx keepalive
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Directive&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;keepalive_timeout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;75s&lt;/td&gt;
&lt;td&gt;Idle keepalive timeout after a request completes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;keepalive_requests&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;100 (1000 since Nginx 1.19.10)&lt;/td&gt;
&lt;td&gt;Max requests per keepalive connection&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  keepalive_timeout  65s;
  keepalive_requests 200;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2) Nginx ↔ upstream keepalive (connection reuse)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
  upstream backend {
    server 192.168.0.101:8080;
    server 192.168.0.102:8080;
    keepalive 32;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://backend;

      # Use HTTP/1.1 and clear Connection header for upstream keepalive
      proxy_http_version 1.1;
      proxy_set_header Connection "";
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;proxy_http_version 1.1&lt;/code&gt;: required for upstream keepalive&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_set_header Connection ""&lt;/code&gt;: prevents &lt;code&gt;Connection: close&lt;/code&gt; from breaking reuse&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>beginners</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
