<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: rising_segun</title>
    <description>The latest articles on Forem by rising_segun (@geosegun).</description>
    <link>https://forem.com/geosegun</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1049039%2Fd73240a0-ad90-443e-b320-009a1544e0f4.jpeg</url>
      <title>Forem: rising_segun</title>
      <link>https://forem.com/geosegun</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/geosegun"/>
    <language>en</language>
    <item>
      <title>Heroku Alternatives Decision Framework: What Actually Matters When Picking a PaaS</title>
      <dc:creator>rising_segun</dc:creator>
      <pubDate>Fri, 30 Jan 2026 10:43:00 +0000</pubDate>
      <link>https://forem.com/seenode/heroku-alternatives-decision-framework-what-actually-matters-when-picking-a-paas-3om3</link>
      <guid>https://forem.com/seenode/heroku-alternatives-decision-framework-what-actually-matters-when-picking-a-paas-3om3</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; After Heroku killed their free tier, everyone rushed to Railway, Render, and Fly. Three years later, here is the framework that actually matters for picking between them based on &lt;strong&gt;billing model&lt;/strong&gt;, not feature lists.&lt;/p&gt;

&lt;p&gt;When Heroku discontinued their free tier in November 2022, hundreds of thousands of projects had to migrate. Most comparison posts you will find today are just feature lists. This guide focuses on what matters &lt;strong&gt;in practice&lt;/strong&gt; after using all of these platforms in production: how they &lt;strong&gt;bill you&lt;/strong&gt; and how that interacts with your traffic pattern and your tolerance for surprise invoices.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Quick Decision Helper&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Need predictable costs?&lt;/strong&gt; → Fixed tiers (Seenode, Render)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traffic varies wildly?&lt;/strong&gt; → Usage-based (Railway, Fly.io)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Just a static site?&lt;/strong&gt; → Serverless (Vercel, Netlify)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Need enterprise features?&lt;/strong&gt; → Render&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Budget under $15/month?&lt;/strong&gt; → Seenode or self-hosted&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The One Thing Everyone Gets Wrong&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;People compare features. They should compare &lt;strong&gt;billing models&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Your choice is almost never about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Which has the best dashboard?"&lt;/li&gt;
&lt;li&gt;"Which supports my framework?"&lt;/li&gt;
&lt;li&gt;"Which has the nicest CLI?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most modern PaaS providers have reasonable dashboards, support the common languages and frameworks, and give you some batteries-included tooling. The real question is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Which billing model matches your traffic pattern and your budget anxiety tolerance?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you choose the wrong model, the platform can be technically great and still feel terrible to use, because every deploy becomes a question of, &lt;em&gt;"What will the bill look like next month?"&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Migrating from Heroku?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you're coming from Heroku's free tier, the biggest shock is usually the cost. Heroku's free tier spoiled us—most alternatives start at $4–20/month. The good news: you're getting better performance, more predictable uptime, and actual support.&lt;/p&gt;

&lt;p&gt;The closest "drop-in" replacements are &lt;strong&gt;Railway&lt;/strong&gt; (similar workflow) and &lt;strong&gt;Seenode&lt;/strong&gt; (similar economics). &lt;strong&gt;Render&lt;/strong&gt; feels more "enterprise" but costs more.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Three Billing Models That Emerged&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Over the last few years, three dominant billing models have crystallized.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Model 1: Fixed Per-Node Tiers (Seenode, Render)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You pay a flat monthly rate for provisioned capacity. Whether your app is slammed 24/7 or idle for half the month, the bill barely moves.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Predictable&lt;/strong&gt;: You know roughly what next month’s invoice will be.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple&lt;/strong&gt;: You pay for the size and count of services and databases, not for each individual CPU cycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wasteful at low usage&lt;/strong&gt;: You still pay even when your app is mostly idle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Steady traffic, budget-conscious teams, and people who hate billing surprises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example costs (Jan 2026):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Seenode: &lt;strong&gt;$4/month&lt;/strong&gt; (Basic web + Tier 1 Postgres)&lt;/li&gt;
&lt;li&gt;Seenode: &lt;strong&gt;$11/month&lt;/strong&gt; (Standard web + Tier 2 Postgres)&lt;/li&gt;
&lt;li&gt;Render: &lt;strong&gt;$13/month&lt;/strong&gt; (Starter web + Basic Postgres)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your app is always on and has relatively consistent traffic, fixed tiers tend to win. You trade theoretical efficiency for billing sanity.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Model 2: Usage-Based Metering (Railway, Fly.io)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You pay for what you actually use: per-minute compute, per-second CPU, per-request, per-GB of storage and bandwidth.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Efficient at low/variable usage&lt;/strong&gt;: Quiet apps can be very cheap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scales smoothly with traffic&lt;/strong&gt;: Spikes cost more, but you do not need to resize instances manually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unpredictable&lt;/strong&gt;: If you do not monitor usage, bills can drift or spike.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Variable traffic, developers who monitor usage, or apps that must be 24/7 online but receive low or bursty traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example costs (Jan 2026):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Railway: $20–30/month typical (Pro plan with $20 included credit), can spike to $50+ with sustained traffic.&lt;/li&gt;
&lt;li&gt;Fly.io: $0–15/month for hobby apps without managed Postgres, $43–50/month once you add managed Postgres.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Usage-based systems reward teams that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instrument and monitor their apps.&lt;/li&gt;
&lt;li&gt;Understand their baseline traffic profile.&lt;/li&gt;
&lt;li&gt;Are willing to occasionally dig through billing dashboards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If that doesn't describe you, this model can feel like a tax audit every month.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Model 3: Serverless (Vercel, Netlify)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You pay per &lt;strong&gt;function invocation&lt;/strong&gt;. There is no persistent application process; instead, your code runs on demand in short-lived serverless functions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazing for JAMstack&lt;/strong&gt;: Static-first sites, Next.js, Astro, Remix, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Great for spiky traffic&lt;/strong&gt;: You only pay when people actually hit your endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Awkward for stateful backends&lt;/strong&gt;: Long-lived connections, background jobs, or session-heavy apps do not fit naturally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; JAMstack apps, modern React frameworks, marketing sites, and dashboards with lightweight APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not suitable for:&lt;/strong&gt; Traditional backend frameworks like Django, Rails, or Express apps that rely on in-memory state, sticky sessions, or complex background processing.&lt;/p&gt;

&lt;p&gt;If your mental model of “backend” is a long-running process with queues, WebSockets, and custom workers, serverless can feel like fighting the platform instead of using it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What It Actually Costs (Real Numbers, Jan 2026)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To make this concrete, consider a typical full-stack app:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web service with roughly &lt;strong&gt;1 GB RAM&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Managed Postgres with &lt;strong&gt;1 GB storage&lt;/strong&gt; (plus sensible defaults)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is what that looks like across platforms today:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Monthly Cost&lt;/th&gt;
&lt;th&gt;Billing Model&lt;/th&gt;
&lt;th&gt;What You Get&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Seenode&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$11&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fixed tier&lt;/td&gt;
&lt;td&gt;Standard web + Tier 2 Postgres, always-on&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Fly.io&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$12–15&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Usage-based&lt;/td&gt;
&lt;td&gt;Self-managed Postgres (or $43–50/month with managed Postgres)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Railway&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$20–30&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Usage-based&lt;/td&gt;
&lt;td&gt;Pro plan with $20 credit, can spike with traffic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Render&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$57+&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fixed tier&lt;/td&gt;
&lt;td&gt;Web service + Postgres with PITR and zero-downtime deploys&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The cliff nobody mentions:&lt;/strong&gt; Fly.io looks cheap ($0–15) for hobby apps without managed Postgres, but once you turn on managed Postgres, the bill often jumps sharply (to $43–50). The platform is still good—but the mental model of it being "cheap" quietly disappears.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Decision Framework&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Stop comparing dashboards and feature checklists. Walk through these questions instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Question 1: Can you tolerate variable billing?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Yes, I monitor usage and I am okay with variance&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Choose &lt;strong&gt;Railway&lt;/strong&gt; or &lt;strong&gt;Fly.io&lt;/strong&gt;. You get flexible scaling and can squeeze out cost efficiencies if you understand your workload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No, I need predictable monthly costs&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Choose &lt;strong&gt;Seenode&lt;/strong&gt; or &lt;strong&gt;Render&lt;/strong&gt;. Fixed tiers mean you know the number before the invoice arrives.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you feel stress every time a cloud bill arrives, treat predictability as a core feature—not a nice-to-have.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Question 2: What is your budget ceiling?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Use this as a rough mapping for a single production app with a database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;$5–15/month&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Seenode&lt;/strong&gt; (Basic or Standard tiers: &lt;strong&gt;$4–11&lt;/strong&gt;/month)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-hosted&lt;/strong&gt; on a low-cost VPS (Hetzner, Contabo, etc.), if you are willing to manage infra.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;$20–50/month&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Railway&lt;/strong&gt; (Pro plan, potentially higher with sustained traffic)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seenode&lt;/strong&gt; higher tiers if you need more resources but want fixed pricing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;$50–100/month&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Render&lt;/strong&gt; (app + managed Postgres with PITR)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fly.io&lt;/strong&gt; with managed Postgres and multi-region setups.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;$100+/month&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Render&lt;/strong&gt; with enterprise-style features (SSO, high availability, advanced backups).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;These numbers change over time, but the &lt;strong&gt;shape&lt;/strong&gt; of the tradeoffs does not.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Question 3: Do you need enterprise features?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Yes – compliance, SOC2, SSO, PITR, zero-downtime deploys are non-negotiable&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Lean toward &lt;strong&gt;Render&lt;/strong&gt;. It is opinionated, boring in the best way, and designed for teams that want stability and support more than they want to save $20/month.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No – I mostly need HTTPS, logs, deploys, and a Postgres database&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Seenode&lt;/strong&gt; or &lt;strong&gt;Railway&lt;/strong&gt; are usually a better fit. They give you the essentials without pushing you up into enterprise pricing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If legal and compliance teams are involved, the cheapest platform is rarely the right one.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Question 4: Is global distribution critical?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Yes – my users are spread across regions and latency matters&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Fly.io&lt;/strong&gt; shines here, with multi-region deployments as a first-class concept.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No – most users are in one region and latency is fine&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Seenode&lt;/strong&gt;, &lt;strong&gt;Railway&lt;/strong&gt;, or &lt;strong&gt;Render&lt;/strong&gt; will do the job. Focus on simplicity and billing, not on global replicas you may never need.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What I Actually Use (and Why)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Disclosure: I work on &lt;strong&gt;Seenode&lt;/strong&gt;, so read this with that bias in mind. That said, here is the honest breakdown of what I reach for in different scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;For hobby projects&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choice:&lt;/strong&gt; Seenode Basic (&lt;strong&gt;$4/month&lt;/strong&gt;)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why:&lt;/strong&gt; It is the &lt;strong&gt;cheapest always-on option with a database included&lt;/strong&gt; that still feels like Heroku. I can &lt;code&gt;git push&lt;/code&gt;, get HTTPS and logs, and forget about it. At $4/month, it is psychologically close to “free” but without the randomness of free-tier shutdowns.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;For client projects&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choice:&lt;/strong&gt; Render
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why:&lt;/strong&gt; Clients pay for &lt;strong&gt;peace of mind&lt;/strong&gt;. Render gives you:

&lt;ul&gt;
&lt;li&gt;Managed Postgres with &lt;strong&gt;point-in-time recovery (PITR)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Zero-downtime deploys
&lt;/li&gt;
&lt;li&gt;A mature, predictable environment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The extra cost (often $57+/month for a typical setup) is easy to justify compared to the cost of a single outage for a paying client.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;For high-traffic production on a coffee-priced budget&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choice:&lt;/strong&gt; Self-hosted on &lt;strong&gt;Hetzner&lt;/strong&gt; (or similar low-cost VPS) with &lt;strong&gt;Coolify&lt;/strong&gt; or &lt;strong&gt;Dokploy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Many services&lt;/li&gt;
&lt;li&gt;Heavy background workers&lt;/li&gt;
&lt;li&gt;Higher traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…and still want to stay under &lt;strong&gt;$50/month&lt;/strong&gt;, managed PaaS platforms become harder to justify. Self-hosting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compresses all infra cost into a single VPS bill.&lt;/li&gt;
&lt;li&gt;Gives you full control over resource allocation.&lt;/li&gt;
&lt;li&gt;Moves &lt;strong&gt;all responsibility for uptime&lt;/strong&gt; onto you.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Downtime in this model is almost always your fault—bad deploys, poor monitoring, slow incident response. You save money but pay in operations time and cognitive load.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Missing Costs Nobody Mentions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Looking only at base instance prices hides a lot of real-world cost. Four common gotchas:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common mistake:&lt;/strong&gt; Picking a platform based on the cheapest headline price, then getting surprised by egress fees, backup costs, or usage spikes. Always factor in the hidden costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Egress fees&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Railway: &lt;strong&gt;$0.05/GB&lt;/strong&gt; outbound&lt;/li&gt;
&lt;li&gt;Fly.io: &lt;strong&gt;$0.02/GB&lt;/strong&gt; outbound&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you run a webhook-heavy app, file-serving API, or anything data-intensive, egress can quietly add &lt;strong&gt;$50–200/month&lt;/strong&gt; on top of your compute bill.&lt;/p&gt;
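&lt;p&gt;A quick back-of-the-envelope check of those rates (a sketch; the 1 TB outbound figure is an assumed example, not a number from either provider):&lt;/p&gt;

```shell
#!/bin/sh
# Estimate monthly egress cost at the per-GB rates quoted above.
# GB_OUT is an assumed example figure; plug in your own traffic.
GB_OUT=1000                      # assumed: ~1 TB outbound per month
RAILWAY_CENTS=$(( GB_OUT * 5 ))  # Railway: $0.05/GB = 5 cents/GB
FLY_CENTS=$(( GB_OUT * 2 ))      # Fly.io:  $0.02/GB = 2 cents/GB
echo "Railway egress: \$$(( RAILWAY_CENTS / 100 ))/month"
echo "Fly.io egress:  \$$(( FLY_CENTS / 100 ))/month"
```

&lt;p&gt;At 1 TB/month, that is $50 on Railway and $20 on Fly.io before you pay for any compute at all.&lt;/p&gt;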

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Railway’s credit system&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hobby&lt;/strong&gt;: $1/month subscription with &lt;strong&gt;$5 usage credit&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro&lt;/strong&gt;: $20/month subscription with &lt;strong&gt;$20 usage credit&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once your actual usage exceeds that credit, &lt;strong&gt;metered charges&lt;/strong&gt; apply. For small apps this is fine; for growing usage it can turn into an invisible ratchet until you look closely at the billing breakdown.&lt;/p&gt;
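&lt;p&gt;The mechanics are simple enough to sketch (the $35 usage figure is an assumed example):&lt;/p&gt;

```shell
#!/bin/sh
# Railway Pro billing as described above: a $20 subscription that
# includes $20 of usage credit; metered charges apply beyond it.
SUB_CENTS=2000        # $20/month Pro subscription
CREDIT_CENTS=2000     # $20 included usage credit
USAGE_CENTS=3500      # assumed example: $35 of metered usage
OVERAGE=$(( USAGE_CENTS - CREDIT_CENTS ))
if [ "$OVERAGE" -lt 0 ]; then OVERAGE=0; fi
TOTAL_CENTS=$(( SUB_CENTS + OVERAGE ))
echo "Invoice: \$$(( TOTAL_CENTS / 100 ))"   # subscription + overage
```

&lt;p&gt;Because the Pro subscription price equals its included credit, once usage passes $20 the invoice simply tracks total usage, which is exactly how the ratchet stays invisible until you look at the breakdown.&lt;/p&gt;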

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Stopped machines and storage&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fly.io continues billing &lt;strong&gt;storage&lt;/strong&gt; even when machines are stopped: around &lt;strong&gt;$0.15/GB/month&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stopping everything in a panic does not always reset the bill to zero. Persistent volumes, snapshots, and images keep ticking along until you explicitly delete them.&lt;/p&gt;
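&lt;p&gt;The storage line item is easy to estimate (the volume size here is an assumed example):&lt;/p&gt;

```shell
#!/bin/sh
# Storage keeps billing while machines are stopped, at roughly
# $0.15/GB/month on Fly.io (per the figure above).
VOLUME_GB=40                          # assumed: 40 GB of persistent volumes
MONTHLY_CENTS=$(( VOLUME_GB * 15 ))   # $0.15/GB = 15 cents/GB
echo "Storage while stopped: \$$(( MONTHLY_CENTS / 100 ))/month"
```

&lt;p&gt;Small on its own, but it compounds across forgotten volumes and snapshots.&lt;/p&gt;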

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Database backups and safety nets&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Render&lt;/strong&gt;: Includes PITR on paid database plans. This is a genuine safety feature.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Railway / Fly.io&lt;/strong&gt;: You usually need to:

&lt;ul&gt;
&lt;li&gt;Configure backups yourself, or
&lt;/li&gt;
&lt;li&gt;Rely on third-party tools, or
&lt;/li&gt;
&lt;li&gt;Accept a more limited restore story.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This is invisible until something goes wrong—and then it is the only line item that matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Closest Alternatives to OG Heroku&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If what you really want is &lt;strong&gt;“git push and boom, we are live”&lt;/strong&gt;, here is how the modern landscape looks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Railway&lt;/strong&gt; – Captures a lot of the Heroku magic. Simple, fast to onboard. The tradeoff is &lt;strong&gt;billing unpredictability&lt;/strong&gt; if you are not watching usage.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seenode&lt;/strong&gt; – Leans into Heroku-style simplicity at &lt;strong&gt;near-free-tier pricing&lt;/strong&gt;. At &lt;strong&gt;$4/month&lt;/strong&gt;, it is psychologically very close to the old “free dyno” experience.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Render&lt;/strong&gt; – The most &lt;strong&gt;production-ready&lt;/strong&gt; choice, especially for serious projects and client work. More expensive, but you get strong defaults and guardrails.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Everything else in the ecosystem is either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More serverless and front-end centric (Vercel, Netlify), or
&lt;/li&gt;
&lt;li&gt;More infra-heavy and DIY (Cloud providers, raw Kubernetes, self-hosting).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They can be great—but they are no longer "Heroku for everyone."&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Quick Comparison Summary&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;th&gt;Monthly Cost&lt;/th&gt;
&lt;th&gt;Billing Model&lt;/th&gt;
&lt;th&gt;Key Differentiator&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Seenode&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hobby projects, predictable costs&lt;/td&gt;
&lt;td&gt;$4–11&lt;/td&gt;
&lt;td&gt;Fixed&lt;/td&gt;
&lt;td&gt;Cheapest always-on with DB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Railway&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Variable traffic, Heroku-like workflow&lt;/td&gt;
&lt;td&gt;$20–30+&lt;/td&gt;
&lt;td&gt;Usage-based&lt;/td&gt;
&lt;td&gt;Simplicity + flexibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Render&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Client work, enterprise needs&lt;/td&gt;
&lt;td&gt;$57+&lt;/td&gt;
&lt;td&gt;Fixed&lt;/td&gt;
&lt;td&gt;Production-ready defaults&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Fly.io&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Global distribution, bursty traffic&lt;/td&gt;
&lt;td&gt;$0–50+&lt;/td&gt;
&lt;td&gt;Usage-based&lt;/td&gt;
&lt;td&gt;Multi-region by default&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Key Takeaways&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Billing model matters more than features&lt;/strong&gt; — Match it to your traffic pattern&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hidden costs add up&lt;/strong&gt; — Egress, storage, backups can double your bill&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictability vs. flexibility&lt;/strong&gt; — Choose based on your tolerance for billing surprises&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise features cost money&lt;/strong&gt; — Only pay for them if you actually need them&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-hosting saves money&lt;/strong&gt; — But you pay in operational overhead&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;So… What Should You Actually Do?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When choosing a PaaS in 2026, ask yourself one question first:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Is predictable billing or pay-as-you-go flexibility more important for this project?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then layer on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How much operational burden am I willing to accept?&lt;/li&gt;
&lt;li&gt;Do I actually need global distribution, or does a single region work?&lt;/li&gt;
&lt;li&gt;Is this a hobby app, client app, or revenue-critical system?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you answer those honestly, the "Heroku alternatives" list narrows itself down very quickly.&lt;/p&gt;

&lt;p&gt;Use this checklist as you evaluate providers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your stack (language, framework, database)&lt;/li&gt;
&lt;li&gt;Rough traffic pattern (requests/day or month)&lt;/li&gt;
&lt;li&gt;Maximum monthly budget&lt;/li&gt;
&lt;li&gt;Whether you need enterprise features (compliance, SSO, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With clear answers to those questions, you can quickly narrow down to one or two platforms whose billing models actually fit how your app is used.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Script automation explained – what it is, tools, benefits, and real examples</title>
      <dc:creator>rising_segun</dc:creator>
      <pubDate>Sun, 17 Aug 2025 16:41:00 +0000</pubDate>
      <link>https://forem.com/cloudray/script-automation-explained-what-it-is-tools-benefits-and-real-examples-1mg4</link>
      <guid>https://forem.com/cloudray/script-automation-explained-what-it-is-tools-benefits-and-real-examples-1mg4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa28oos35arduzsl3bmo3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa28oos35arduzsl3bmo3.jpg" alt="Screenshot of adding a new setup script" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Script automation is the use of code, written in languages such as Bash, Python, or PowerShell, to automate repetitive or time‑consuming tasks in IT operations, system administration, and software development. Instead of performing tasks manually, teams can &lt;a href="https://cloudray.io/docs/scripts" rel="noopener noreferrer"&gt;run scripts&lt;/a&gt; to trigger processes such as application deployment, &lt;a href="https://cloudray.io/articles/automate-wordpress-multi-site-backups" rel="noopener noreferrer"&gt;data backups&lt;/a&gt;, file transfers, or system monitoring.&lt;/p&gt;

&lt;p&gt;Many businesses adopt script automation to reduce human error, save time, improve efficiency, and accelerate DevOps workflows. As the demand for faster software delivery and continuous integration/continuous deployment (CI/CD) grows, script automation has become an essential part of modern DevOps strategies.&lt;/p&gt;

&lt;p&gt;In this article, we explore the key benefits of script automation, popular scripting languages, top tools, and real‑world examples to help you apply it effectively in your DevOps and IT operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Best Scripting Languages for Automation&lt;/li&gt;
&lt;li&gt;Benefits of Script Automation&lt;/li&gt;
&lt;li&gt;
Top 5 Script Automation Tools

&lt;ul&gt;
&lt;li&gt;CloudRay&lt;/li&gt;
&lt;li&gt;ScriptRunner&lt;/li&gt;
&lt;li&gt;Ansible&lt;/li&gt;
&lt;li&gt;AttuneOps&lt;/li&gt;
&lt;li&gt;Jenkins&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Examples of Script Automation in Bash&lt;/li&gt;

&lt;li&gt;Wrapping up&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best Scripting Languages for Automation
&lt;/h2&gt;

&lt;p&gt;There are several programming languages for script automation, each with unique strengths and characteristics. However, Bash and Python remain the most widely used for system and DevOps automation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bash scripting:&lt;/strong&gt; This is a shell command language known for its integration with Unix-based systems. It’s ideal for automating administrative tasks such as package installation, server bootstrapping, or even deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Python:&lt;/strong&gt; This is a high-level and general purpose language widely used for infrastructure automation, API scripting, and test pipelines in modern DevOps workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
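&lt;p&gt;As a concrete illustration of the kind of administrative task Bash handles well, here is a minimal, self-contained backup sketch. It uses temporary directories as stand-ins for real application and backup paths:&lt;/p&gt;

```shell
#!/bin/sh
# Minimal backup sketch: archive an app directory into a dated tarball.
# Temp dirs stand in for real paths such as /var/www/app (assumptions).
SRC_DIR=$(mktemp -d)                     # stand-in for the app directory
DEST_DIR=$(mktemp -d)                    # stand-in for the backup location
echo "hello" > "$SRC_DIR/index.html"     # pretend application content
STAMP=$(date +%Y%m%d)
tar -czf "$DEST_DIR/app-$STAMP.tar.gz" -C "$SRC_DIR" .
echo "Backup written to $DEST_DIR/app-$STAMP.tar.gz"
```

&lt;p&gt;Dropped into cron with real paths, a script like this becomes the "data backups" use case mentioned earlier.&lt;/p&gt;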

&lt;p&gt;Both Bash and Python have strengths and weaknesses depending on the use case. Below is a brief comparison:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Bash&lt;/th&gt;
&lt;th&gt;Python&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Shell scripting, Linux/Unix system tasks&lt;/td&gt;
&lt;td&gt;Cross-platform automation, APIs, DevOps Workflows&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ease of Use&lt;/td&gt;
&lt;td&gt;Simple for basic tasks; can get complex for logic-heavy work&lt;/td&gt;
&lt;td&gt;Readable and maintainable, especially for large scripts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tooling &amp;amp; Ecosystem&lt;/td&gt;
&lt;td&gt;Native to Unix/Linux; tightly integrated with CLI tools&lt;/td&gt;
&lt;td&gt;Rich library ecosystem for HTTP, automation, DevOps, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;Fast for command chaining and shell operations&lt;/td&gt;
&lt;td&gt;Slightly slower but better for complex logic and data parsing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Error Handling&lt;/td&gt;
&lt;td&gt;Primitive error handling (exit codes)&lt;/td&gt;
&lt;td&gt;Built‑in exception handling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning Curve&lt;/td&gt;
&lt;td&gt;Easier for those familiar with Linux shell&lt;/td&gt;
&lt;td&gt;Easier for general-purpose programming and logic-heavy tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
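&lt;p&gt;The error-handling row deserves a concrete example. Bash only gives you exit codes, so the usual pattern is to fail fast and check each step explicitly (a sketch; the failing &lt;code&gt;cp&lt;/code&gt; simulates a broken step):&lt;/p&gt;

```shell
#!/bin/sh
# Exit-code based error handling, as the table describes for Bash.
set -eu                                 # abort on errors / unset variables
step() {
  # This cp fails (missing source), returning a non-zero exit code.
  cp /nonexistent/file /tmp/ 2>/dev/null
}
if step; then
  echo "step ok"
else
  STATUS=$?                             # the failing step's exit code
  echo "step failed with exit code $STATUS; falling back"
fi
```

&lt;p&gt;Compare this with Python, where the same logic would be a &lt;code&gt;try&lt;/code&gt;/&lt;code&gt;except&lt;/code&gt; block with a structured exception rather than a bare integer.&lt;/p&gt;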

&lt;p&gt;Aside from Bash and Python, several other scripting languages are widely used for IT operations and DevOps automation, each with its own use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Go (Golang):&lt;/strong&gt; Well suited to concurrent task execution&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PowerShell:&lt;/strong&gt; Designed for Windows automation; great for managing system configuration and registry tasks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Java:&lt;/strong&gt; Often used for DevOps pipeline automation and integrations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ruby:&lt;/strong&gt; Popular for automating configuration management&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Perl:&lt;/strong&gt; Powerful for file processing and text manipulation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;JavaScript:&lt;/strong&gt; Useful for automating web APIs and build processes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these languages has its own strengths and limitations and is best suited to particular kinds of automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Script Automation
&lt;/h2&gt;

&lt;p&gt;The benefits of script automation are significant. As businesses scale their infrastructure and software operations, manual processes become time‑consuming, tedious, and error‑prone. Script automation boosts productivity, reduces human error, and accelerates software delivery and deployments.&lt;/p&gt;

&lt;p&gt;Here are some benefits of using script automation in DevOps workflows and IT environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Optimisation:&lt;/strong&gt; Script automation reduces the need for human input, which cuts down work hours and minimises costly mistakes. Teams save both time and money by automating routine tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Faster Task Execution:&lt;/strong&gt; Automation can execute tasks in minutes or even seconds, compared to manual efforts that can take hours or days. This leads to faster incident recovery, quicker deployments, and better overall performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Accuracy and Consistency:&lt;/strong&gt; Manual operation is error-prone. With script automation, operations are executed consistently across environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Productivity:&lt;/strong&gt; Script automation frees teams from repetitive tasks, allowing them to focus on higher-priority work such as innovation, workflow optimisation, and security hardening.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Seamless Integration with DevOps Tools:&lt;/strong&gt; Scripts integrate easily with CI/CD and configuration management tools, triggering deployments, automating test runs, and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Top 5 Script Automation Tools
&lt;/h2&gt;

&lt;p&gt;Script automation tools have become important for teams that want to scale faster, increase productivity, and optimise costs. These tools empower DevOps engineers, system admins, and cloud engineers to streamline operations.&lt;/p&gt;

&lt;p&gt;Here are five of the best script automation tools used by modern engineering teams:&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudRay
&lt;/h3&gt;

&lt;p&gt;CloudRay is a centralised Bash script automation platform that lets teams manage cloud and hybrid infrastructure. Teams can run Bash scripts securely across hybrid or cloud infrastructure with the help of an &lt;a href="https://cloudray.io/docs/agent" rel="noopener noreferrer"&gt;Agent&lt;/a&gt;. This makes it well suited to automating repetitive infrastructure tasks such as installations, deployments, server maintenance, and backups.&lt;/p&gt;

&lt;p&gt;Additionally, &lt;a href="https://cloudray.io/docs/schedules" rel="noopener noreferrer"&gt;CloudRay’s schedules&lt;/a&gt; allow teams to schedule scripts across multiple environments. It also supports &lt;a href="https://cloudray.io/docs/incoming-webhooks" rel="noopener noreferrer"&gt;webhook triggers&lt;/a&gt;, making automation repeatable and event-driven. CloudRay stands out by combining the flexibility of scripting with the governance enterprise teams need to scale securely.&lt;/p&gt;

&lt;h3&gt;
  
  
  ScriptRunner
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.scriptrunner.com/" rel="noopener noreferrer"&gt;ScriptRunner&lt;/a&gt; is an automation platform specifically designed for PowerShell. It’s used by Windows admins to automate routine admin tasks with full auditability and governance. It provides a centralised environment for storing, managing, and executing PowerShell scripts with control and traceability. This tool also supports approvals, Active Directory integration, and delegated execution and logging.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ansible
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.ansible.com/" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; is an open source configuration management tool developed by Red Hat. It uses YAML-based playbooks for infrastructure wide automation and is popular for managing complex infrastructure at scale. Ansible unique characteristics is its agentless nature in which it can operate over SSH allowing easier adoption.&lt;/p&gt;

&lt;h3&gt;
  
  
  AttuneOps
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://attuneops.io/" rel="noopener noreferrer"&gt;AttuneOps&lt;/a&gt; is a script automation tool that provides advanced orchestration, scheduling and workflow management. It supports multiple scripting languages such as PowerShell, Bash, and Python.&lt;/p&gt;

&lt;p&gt;It provides a centralised engine that allows teams to automate consistently across different OS environments. It is used heavily in IT operations, helping teams manage routine and repetitive tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Jenkins
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.jenkins.io/" rel="noopener noreferrer"&gt;Jenkins&lt;/a&gt; is the widely used automation platform for CI/CD in DevOps workflow. It excels at executing automation scripts and scheduled jobs. Jenkins supports multiple scripts such as shell and Python script. This integrates with the source control tools, and offers robust scheduling workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Examples of Script Automation in Bash
&lt;/h2&gt;

&lt;p&gt;Bash scripts can automate routine system administration tasks such as package installation, database backups, server provisioning, and deployments.&lt;/p&gt;

&lt;p&gt;Let’s look at some practical examples of Bash script automation that improve IT operations and DevOps workflows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Automating LAMP Stack Installations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Bash script can set up a web server quickly. For example, you can &lt;a href="https://cloudray.io/articles/automate-installation-of-lamp-stack-on-ubuntu-using-bash-script" rel="noopener noreferrer"&gt;automate the installation of a LAMP stack (Linux, Apache, MySQL, PHP)&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

set -e

# Update package list and install LAMP stack
sudo apt update
sudo apt install apache2 mysql-server php libapache2-mod-php -y

# Enable services to start on boot
sudo systemctl enable apache2
sudo systemctl enable mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script installs all the required components and ensures that Apache and MySQL start automatically on system reboot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Automating MySQL Backups to S3&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bash scripts can automate and schedule routine database backups. For example, you can &lt;a href="https://cloudray.io/articles/automate-mysql-backup-to-amazon-s3" rel="noopener noreferrer"&gt;automate MySQL backups to Amazon S3&lt;/a&gt; to ensure your data is consistently available offsite.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

set -e

# Variables
DB_NAME="mydb"
USER="root"
PASSWORD="yourpassword"
BACKUP_PATH="/tmp/mysql-backup.sql"
DATE=$(date +%F)

mysqldump -u "$USER" -p"$PASSWORD" "$DB_NAME" &amp;gt; "$BACKUP_PATH"
aws s3 cp "$BACKUP_PATH" "s3://your-s3-bucket/$DB_NAME-$DATE.sql"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use it to &lt;a href="https://cloudray.io/articles/automate-postgres-backup-to-amazon-s3" rel="noopener noreferrer"&gt;automate backup of PostgreSQL to S3&lt;/a&gt;.&lt;/p&gt;
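&lt;p&gt;The date-stamped object key used above is plain Bash string interpolation; isolated, the naming logic looks like this (the database name is a placeholder):&lt;/p&gt;

```shell
#!/bin/bash
DB_NAME="mydb"
DATE=$(date +%F)              # ISO date, e.g. 2025-05-23
S3_KEY="$DB_NAME-$DATE.sql"   # one uniquely named object per day
echo "$S3_KEY"
```

&lt;p&gt;Because the key changes daily, each run uploads a new object instead of overwriting the previous backup.&lt;/p&gt;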

&lt;p&gt;&lt;strong&gt;3. Automating Installation of WordPress&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can use a Bash script to streamline and automate the deployment of CMS platforms like WordPress. For example, you can &lt;a href="https://cloudray.io/articles/deploy-multi-wordpress-sites-on-one-server" rel="noopener noreferrer"&gt;automate the deployment of multiple WordPress sites on a single server&lt;/a&gt; with a Bash script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

set -e

# Update package list and install LAMP stack
sudo apt update
sudo apt install apache2 mysql-server php php-mysql -y

# Create directories for multiple sites
sudo mkdir -p /var/www/site1.com /var/www/site2.com

# Set permissions
sudo chown -R $USER:$USER /var/www/site1.com /var/www/site2.com

# Download and extract WordPress
wget https://wordpress.org/latest.tar.gz
tar -xvzf latest.tar.gz
cp -r wordpress/* /var/www/site1.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, you can use a Bash script to &lt;a href="https://cloudray.io/articles/automate-wordpress-multi-site-backups" rel="noopener noreferrer"&gt;automate the backup process of your WordPress site&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Deploy a Database Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Infrastructure engineers use Bash scripts to save time during infrastructure setup. You can use a Bash script to &lt;a href="https://cloudray.io/articles/deploy-mysql-server" rel="noopener noreferrer"&gt;automate the deployment of a MySQL server&lt;/a&gt; on a Linux host.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

set -e

# Update package list and install MySQL server
sudo apt update
sudo apt install -y mysql-server
sudo systemctl enable mysql
sudo systemctl start mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;Script automation is an efficient way for DevOps and IT professionals to streamline DevOps tasks and IT operations while reducing the likelihood of human error. Whether you’re automating deployments, backups, or security checks, a well-structured approach enhances productivity and reliability. As infrastructure grows, so does the complexity of managing it, and script automation becomes not just a convenience but a necessity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloudray.io" rel="noopener noreferrer"&gt;CloudRay&lt;/a&gt; is a leading platform for centralised Bash script automation across your cloud and server infrastructure. With &lt;a href="https://cloudray.io/docs/agent" rel="noopener noreferrer"&gt;CloudRay Agent&lt;/a&gt;, you can securely connect your cloud instances and on-premise servers, enabling real-time execution and monitoring of scripts from a single control panel. Our powerful &lt;a href="https://cloudray.io/docs/schedules" rel="noopener noreferrer"&gt;Schedules&lt;/a&gt; feature allows you to automate scripts at custom intervals, whether hourly, daily, or triggered by specific events ensuring your DevOps workflows run reliably without manual intervention. CloudRay simplifies script management, increases operational efficiency, and gives teams full control over infrastructure automation, all from one unified interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.cloudray.io/f/auth/sign-up" rel="noopener noreferrer"&gt;Get Started with CloudRay&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Automate AWS EC2, Backups and Monitoring Using Bash Scripts</title>
      <dc:creator>rising_segun</dc:creator>
      <pubDate>Fri, 23 May 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/cloudray/how-to-automate-aws-ec2-backups-and-monitoring-using-bash-scripts-d5j</link>
      <guid>https://forem.com/cloudray/how-to-automate-aws-ec2-backups-and-monitoring-using-bash-scripts-d5j</guid>
      <description>&lt;p&gt;Automating AWS infrastructure is a key practice for modern DevOps teams and cloud engineers. While Infrastructure as Code (IaC) tools like &lt;a href="https://developer.hashicorp.com/terraform" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html" rel="noopener noreferrer"&gt;CloudFormation&lt;/a&gt; have become the industry standard, Bash scripting remains a powerful and accessible way to automate AWS tasks. Bash scripts is especially useful for teams that wants lightweights, fast, and scriptable workflows without the overhead of learning a new Domain Specific Languages (DSL).&lt;/p&gt;

&lt;p&gt;In this article, you will learn how to automate common AWS tasks using plain Bash scripts combined with the AWS CLI. Whether you’re launching EC2 instances, taking automated EBS snapshots, or setting up monitoring scripts to track CPU utilisation, Bash provides a direct and flexible approach to get things done quickly. At the end, you will see a real-world use case of applying Bash-based automation more effectively with &lt;a href="https://app.cloudray.io/" rel="noopener noreferrer"&gt;CloudRay&lt;/a&gt;, a centralised platform for managing, scheduling, executing, and organising your scripts across environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Automation Use Cases

&lt;ul&gt;
&lt;li&gt;1. Automating EC2 Instance Launch with Bash Script&lt;/li&gt;
&lt;li&gt;2. Automating EBS Volume Backups with Bash Script&lt;/li&gt;
&lt;li&gt;3. Monitoring EC2 CPU Utilisation with Bash Script&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Real World Use Case of Automating AWS Infrastructure using CloudRay

&lt;ul&gt;
&lt;li&gt;EC2 Instance Launch with Auto-Tagging and Bootstrapping&lt;/li&gt;
&lt;li&gt;Running the Script on a Schedule with CloudRay&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Wrapping Up&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Automation Use Cases
&lt;/h2&gt;

&lt;p&gt;AWS automation is not limited to large-scale infrastructure provisioning. With just Bash and the AWS CLI, you can automate a variety of real-world tasks such as launching instances, backing up data, and monitoring system performance. These use cases are useful when you need quick scripts to integrate into cron jobs, CI/CD pipelines, or even internal tools.&lt;/p&gt;

&lt;p&gt;To follow along, ensure that your AWS CLI is properly installed and configured. If not, refer to the &lt;a href="https://cloudray.io/articles/aws-cli-setup-guide#installing-the-aws-cli" rel="noopener noreferrer"&gt;AWS CLI Setup Guide&lt;/a&gt; for a complete walkthrough on installing the CLI, creating key pairs, and configuring credentials.&lt;/p&gt;

&lt;p&gt;Below are some of the most practical AWS automation tasks you can implement using Bash scripts.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Automating EC2 Instance Launch with Bash Script
&lt;/h3&gt;

&lt;p&gt;Automating the creation, termination, and monitoring of EC2 instances is a common AWS infrastructure management task. With a simple Bash script, you can reduce manual steps and improve repeatability, especially when managing development, staging, or short-lived test workloads.&lt;/p&gt;

&lt;p&gt;To begin, create a Bash script file named &lt;code&gt;launch-ec2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano launch-ec2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following script to the file to launch an EC2 instance, wait for it to become available, retrieve its public IP address, and list the attached EBS volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Launch a new EC2 instance
# Replace the AMI ID, key name, and security group ID below with your own values
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id ami-084568db4383264d4 \
  --count 1 \
  --instance-type t2.micro \
  --key-name my-production-key \
  --security-group-ids sg-0269249118de8b4fc \
  --query 'Instances[0].InstanceId' \
  --output text)

echo "Launched EC2 Instance with ID: $INSTANCE_ID"

# Wait until instance is running
aws ec2 wait instance-running --instance-ids $INSTANCE_ID
echo "Instance is now running."

# Fetch public IP address
PUBLIC_IP=$(aws ec2 describe-instances \
  --instance-ids $INSTANCE_ID \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text)

echo "Public IP Address: $PUBLIC_IP"

# Get associated EBS Volume ID
VOLUME_ID=$(aws ec2 describe-instances \
  --instance-ids $INSTANCE_ID \
  --query 'Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId' \
  --output text)

echo "EBS Volume attached to instance: $VOLUME_ID"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is what the script does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launches an EC2 instance with a specified AMI, instance type, key pair, and security group&lt;/li&gt;
&lt;li&gt;Waits for the instance to become active before proceeding&lt;/li&gt;
&lt;li&gt;Retrieves and displays the public IP for SSH access or web server testing&lt;/li&gt;
&lt;li&gt;Fetches the EBS Volume ID for later automation (e.g., backup snapshots or monitoring usage)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;TIP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Make sure you replace the AMI and security group in the script with your own. To find an AMI, navigate to the EC2 Console, select AMIs from the sidebar, use the filters to locate the Amazon Linux or Ubuntu image you would like to use, and copy its AMI ID.&lt;/p&gt;
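&lt;p&gt;If you prefer to stay in the terminal, you can look up an AMI ID with the AWS CLI instead. The sketch below queries the most recent Ubuntu 22.04 image published by Canonical (owner ID &lt;code&gt;099720109477&lt;/code&gt;); treat the filter value as an assumption to adjust for your region and distribution:&lt;/p&gt;

```shell
# Query the newest matching Ubuntu AMI (requires a configured AWS CLI)
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text
```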

&lt;p&gt;Next, make your script executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x launch-ec2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, you can run the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./launch-ec2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your result should be similar to the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4wrtlta68z5l0vwjvle.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4wrtlta68z5l0vwjvle.jpg" alt="screenshot showing output of EC2 automation on terminal" width="620" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This shows that the instance was created successfully, and both the IP address and the EBS volume are displayed. To confirm further, you can check the AWS console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a2ydjn6hxgby3e4np9j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a2ydjn6hxgby3e4np9j.jpg" alt="screenshot showing output of EC2 automation on console" width="800" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the EC2 instance running successfully.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Automating EBS Volume Backups with Bash Script
&lt;/h3&gt;

&lt;p&gt;Another critical automation task is backing up your Elastic Block Store (EBS) volumes. Regular backups ensure you can recover your data in the event of accidental deletion, instance failure, or security breaches.&lt;/p&gt;

&lt;p&gt;With a Bash script, you can create snapshots of your EBS volumes on demand or integrate them into a scheduled cron job for automated backups.&lt;/p&gt;
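&lt;p&gt;If you do use cron rather than a hosted scheduler, an entry like the following (paths are placeholders) would run the backup nightly at 2:00 AM and append its output to a log:&lt;/p&gt;

```shell
# m  h  dom mon dow  command
0 2 * * * /home/ubuntu/backup-ebs.sh >> /var/log/ebs-backup.log 2>&1
```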

&lt;p&gt;To get started, create a script named &lt;code&gt;backup-ebs.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano backup-ebs.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now add the following script to automate the creation of a snapshot for a given volume and tag it for easier identification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Configuration
VOLUME_ID="vol-04fa2bf1eb229e072" # Replace with your volume ID
DESCRIPTION="Backup on $(date '+%Y-%m-%d %H:%M:%S')"
TAG_KEY="Purpose"
TAG_VALUE="AutomatedBackup"

echo "Creating snapshot of volume: $VOLUME_ID"

# Create snapshot
SNAPSHOT_ID=$(aws ec2 create-snapshot \
  --volume-id $VOLUME_ID \
  --description "$DESCRIPTION" \
  --query 'SnapshotId' \
  --output text)

echo "Snapshot created with ID: $SNAPSHOT_ID"

# Add tags to the snapshot
aws ec2 create-tags \
  --resources $SNAPSHOT_ID \
  --tags Key=$TAG_KEY,Value=$TAG_VALUE

echo "Snapshot $SNAPSHOT_ID tagged with $TAG_KEY=$TAG_VALUE"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is what the script does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Takes a snapshot of a specified EBS volume using the AWS CLI&lt;/li&gt;
&lt;li&gt;Adds a human-readable description that includes the date and time of the backup&lt;/li&gt;
&lt;li&gt;Applies tags to the snapshot so you can easily search or filter for it in the AWS console&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;TIP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To find your EBS Volume ID, go to the EC2 Console → Volumes, and look under the “Volume ID” column. Be sure to copy the correct volume attached to your running instance.&lt;/p&gt;

&lt;p&gt;Again, make the script executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x backup-ebs.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, run the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./backup-ebs.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If successful, the output should display the snapshot ID along with a confirmation that it has been tagged.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2oyklh43m91bbuoqn9f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2oyklh43m91bbuoqn9f.jpg" alt="screenshot showing output of EBS automation on terminal" width="711" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This backup script is a great candidate for &lt;a href="https://cloudray.io/docs/schedules" rel="noopener noreferrer"&gt;CloudRay’s scheduler feature&lt;/a&gt;, allowing you to run it every hour, day, or week without needing a separate server or cron job setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Monitoring EC2 CPU Utilisation with Bash Script
&lt;/h3&gt;

&lt;p&gt;System performance monitoring is essential for maintaining the health and stability of your applications. While AWS CloudWatch provides detailed metrics and dashboards, you can also automate metric checks using a simple Bash script.&lt;/p&gt;

&lt;p&gt;One common metric to monitor is CPU utilisation. By querying CloudWatch, we can track when an EC2 instance’s CPU usage spikes above a defined threshold and respond accordingly.&lt;/p&gt;

&lt;p&gt;Start by creating a script file named &lt;code&gt;monitor-cpu.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano monitor-cpu.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then paste the following code into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Configuration
INSTANCE_ID="i-044166a99d5666bfc" # Replace with your instance ID
CPU_THRESHOLD=70 # Trigger alert if CPU &amp;gt; 70%
TIME_RANGE_MINUTES=60 # How far back to check

# Fetch CPU utilisation (CloudWatch names the metric "CPUUtilization")
CPU_UTILISATION=$(aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --statistics Maximum \
  --period 300 \
  --start-time $(date -u -d "$TIME_RANGE_MINUTES minutes ago" +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --dimensions Name=InstanceId,Value=$INSTANCE_ID \
  --query 'Datapoints | sort_by(@, &amp;amp;Timestamp)[-1].Maximum' \
  --output text)

# Show retrieved metric
echo "CPU Utilisation for instance $INSTANCE_ID: $CPU_UTILISATION%"

# Check against threshold
if (( $(echo "$CPU_UTILISATION &amp;gt; $CPU_THRESHOLD" | bc -l) )); then
  echo "⚠️ High CPU alert: $INSTANCE_ID at $CPU_UTILISATION%"
else
  echo "✅ CPU usage is within safe range."
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is what the script does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retrieves the maximum CPU utilisation from the last hour for a specific EC2 instance using CloudWatch metrics&lt;/li&gt;
&lt;li&gt;Compares it to a defined threshold (for example, 70%)&lt;/li&gt;
&lt;li&gt;Prints an alert if the usage exceeds the threshold, or a success message if within range&lt;/li&gt;
&lt;/ul&gt;
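&lt;p&gt;The threshold check is the only non-obvious step: Bash arithmetic is integer-only, which is why the script pipes the comparison through &lt;code&gt;bc&lt;/code&gt;. An equivalent float comparison can also be done with &lt;code&gt;awk&lt;/code&gt;, shown here in isolation with hard-coded sample values:&lt;/p&gt;

```shell
#!/bin/bash
CPU=82.5        # sample reading; CloudWatch returns a float
THRESHOLD=70

# awk exits 0 when the condition holds, so `if` can branch on a float comparison
if awk -v c="$CPU" -v t="$THRESHOLD" 'BEGIN { exit !(c > t) }'; then
  ALERT="high"
else
  ALERT="ok"
fi
echo "CPU check: $ALERT"
```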

&lt;p&gt;Make the script executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x monitor-cpu.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, run the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./monitor-cpu.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your result should be similar to the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte8vkaytzy84zl2vdfa3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte8vkaytzy84zl2vdfa3.jpg" alt="screenshot showing output of EBS automation on terminal" width="728" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This lightweight monitoring script is perfect for integrating with a scheduled task on CloudRay. You can set it to run every 15 minutes and trigger custom actions like Slack alerts, emails, or remediation scripts whenever thresholds are breached.&lt;/p&gt;
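&lt;p&gt;A Slack alert, for instance, is a single &lt;code&gt;curl&lt;/code&gt; call to an incoming-webhook URL (the URL below is a placeholder for one you create in your Slack workspace):&lt;/p&gt;

```shell
# Post an alert message to a Slack incoming webhook (hook URL is a placeholder)
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"High CPU alert on instance i-044166a99d5666bfc"}' \
  "https://hooks.slack.com/services/XXX/YYY/ZZZ"
```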

&lt;h2&gt;
  
  
  Real World Use Case of Automating AWS Infrastructure using CloudRay
&lt;/h2&gt;

&lt;p&gt;While Bash scripting gives you a powerful tool to automate AWS tasks locally, managing and reusing these scripts across environments becomes tedious without a centralised system. That is where CloudRay comes in.&lt;/p&gt;

&lt;p&gt;CloudRay provides a Scripts dashboard where you can centrally manage, execute, and reuse your infrastructure automation scripts without relying on manual CLI sessions or scattered cron jobs. It supports scheduling, allowing you to trigger EC2 provisioning or any AWS operation at predefined times.&lt;/p&gt;

&lt;p&gt;Let’s walk through a real-world scenario where a DevOps engineer needs to launch a pre-configured EC2 instance every morning for development testing.&lt;/p&gt;

&lt;h3&gt;
  
  
  EC2 Instance Launch with Auto-Tagging and Bootstrapping
&lt;/h3&gt;

&lt;p&gt;Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our &lt;a href="https://cloudray.io/docs/servers" rel="noopener noreferrer"&gt;servers docs&lt;/a&gt; to add and manage your server.&lt;/p&gt;

&lt;p&gt;You can follow the below steps to create the script in CloudRay:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffra83xgt50q2hqrct7z2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffra83xgt50q2hqrct7z2.jpg" alt="screenshot showing script creation in CloudRay" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a CloudRay account at &lt;a href="https://app.cloudray.io/" rel="noopener noreferrer"&gt;https://app.cloudray.io/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Scripts&lt;/strong&gt; &amp;gt; &lt;strong&gt;New Script&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Name: &lt;code&gt;Launch EC2 for Daily Dev Testing&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Optional: user data script (e.g., install NGINX on launch)
USER_DATA_SCRIPT='#!/bin/bash
sudo apt update -y
sudo apt install -y nginx
sudo systemctl enable nginx
sudo systemctl start nginx
'

echo "[$(date)] Starting EC2 launch process..." | tee -a {{log_file}}

# Launch EC2 instance
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id "{{ami_id}}" \
  --count 1 \
  --instance-type "{{instance_type}}" \
  --key-name "{{key_name}}" \
  --security-group-ids "{{security_group_id}}" \
  --block-device-mappings "[{\"DeviceName\":\"/dev/xvda\",\"Ebs\":{\"VolumeSize\":{{volume_size}}}}]" \
  --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value={{tag_name}}},{Key=Environment,Value={{environment}}}]" \
  --user-data "$(echo -n "$USER_DATA_SCRIPT" | base64 -w 0)" \
  --query 'Instances[0].InstanceId' \
  --output text)

if [[ -z "$INSTANCE_ID" ]]; then
  echo "[$(date)] Failed to launch instance." | tee -a {{log_file}}
  exit 1
fi

echo "[$(date)] Launched instance: $INSTANCE_ID" | tee -a {{log_file}}

# Wait until running
echo "[$(date)] Waiting for instance to enter running state..." | tee -a {{log_file}}
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
echo "[$(date)] Instance is running." | tee -a {{log_file}}

# Fetch public IP
PUBLIC_IP=$(aws ec2 describe-instances \
  --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text)

LAUNCH_TIME=$(aws ec2 describe-instances \
  --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].LaunchTime' \
  --output text)

echo "[$(date)] Public IP Address: $PUBLIC_IP" | tee -a {{log_file}}
echo "[$(date)] Launch Time: $LAUNCH_TIME" | tee -a {{log_file}}

# Output summary
echo ""
echo "================= EC2 Instance Launched ================="
echo "Instance ID : $INSTANCE_ID"
echo "Public IP : $PUBLIC_IP"
echo "Launch Time : $LAUNCH_TIME"
echo "Tag Name : {{tag_name}}"
echo "Environment : {{environment}}"
echo "========================================================"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script provisions a fully tagged EC2 instance, installs NGINX on launch, and logs key metadata for auditing or monitoring.&lt;/p&gt;

&lt;p&gt;Before running the script, you need to define values for the placeholders &lt;code&gt;{{ami_id}}&lt;/code&gt;, &lt;code&gt;{{instance_type}}&lt;/code&gt;, &lt;code&gt;{{key_name}}&lt;/code&gt;, &lt;code&gt;{{security_group_id}}&lt;/code&gt;, and &lt;code&gt;{{volume_size}}&lt;/code&gt;, along with &lt;code&gt;{{tag_name}}&lt;/code&gt;, &lt;code&gt;{{environment}}&lt;/code&gt;, and &lt;code&gt;{{log_file}}&lt;/code&gt;, used in the script. CloudRay processes all scripts as &lt;a href="https://shopify.github.io/liquid/" rel="noopener noreferrer"&gt;Liquid templates&lt;/a&gt;. This allows you to use variables dynamically across different servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplxzsawiat5ipwmvl5fh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplxzsawiat5ipwmvl5fh.jpg" alt="Screenshot of adding a new variable group" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure that these values are automatically substituted when the script runs, follow these steps to create a Variable Group:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigate to Variable Groups:&lt;/strong&gt; In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a new Variable Group:&lt;/strong&gt; Click on “Variable Group”.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Add the following variables:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ami_id&lt;/code&gt;:&lt;/strong&gt; The AMI ID of the image to launch (for example, an Ubuntu AMI)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;instance_type&lt;/code&gt;:&lt;/strong&gt; The EC2 instance type to launch&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;key_name&lt;/code&gt;:&lt;/strong&gt; The name of the key pair in your AWS account&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;security_group_id&lt;/code&gt;:&lt;/strong&gt; The ID of the security group to attach&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;volume_size&lt;/code&gt;:&lt;/strong&gt; The size of the EBS volume in GB&lt;/li&gt;
&lt;/ul&gt;
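If any of these values renders empty, the launch script fails partway through with a less obvious AWS error. A guard at the top of the script can fail fast instead; a minimal sketch, with example values standing in for the Liquid-rendered placeholders:

```shell
#!/bin/sh
# Fail fast if any required launch value is empty. In CloudRay these
# would be filled in by the variable group; the values here are examples.
ami_id="ami-0example123"
instance_type="t2.micro"
key_name="dev-key"
security_group_id="sg-0example456"
volume_size="20"

for pair in "ami_id=$ami_id" "instance_type=$instance_type" \
            "key_name=$key_name" "security_group_id=$security_group_id" \
            "volume_size=$volume_size"; do
  # ${pair#*=} strips up to the '=' to leave the value;
  # ${pair%%=*} strips from the '=' to leave the name.
  if [ -z "${pair#*=}" ]; then
    echo "Missing value: ${pair%%=*}"
    exit 1
  fi
done
echo "All launch variables are set"
```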

&lt;p&gt;You can choose to run the script using &lt;a href="https://cloudray.io/docs/script-playlists" rel="noopener noreferrer"&gt;CloudRay’s Script Playlists&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.&lt;/p&gt;

&lt;p&gt;To run the &lt;code&gt;Launch EC2 for Daily Dev Testing&lt;/code&gt; script, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigate to Runlogs:&lt;/strong&gt; In your CloudRay project, go to the Runlogs section in the top menu.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a New Runlog:&lt;/strong&gt; Click on New Runlog.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure the Runlog:&lt;/strong&gt; Fill in the required details:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Server: Select the server you added earlier.&lt;/li&gt;
&lt;li&gt;Script: Choose the “Launch EC2 for Daily Dev Testing”&lt;/li&gt;
&lt;li&gt;Variable Group (optional): Select the variable group you created earlier.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36x8ajwjfp0ammtb63fo.jpg" alt="Screenshot of creating a new runlog" width="800" height="423"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute the Script&lt;/strong&gt; : Click on &lt;strong&gt;Run Now&lt;/strong&gt; to start the execution.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazsifk3aoww3b6cp8nw4.jpg" alt="Screenshot of the output automation script" width="800" height="416"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CloudRay will automatically connect to your server, run the &lt;code&gt;Launch EC2 for Daily Dev Testing&lt;/code&gt; script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.&lt;/p&gt;

&lt;p&gt;You can also verify the instance from the AWS console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0gyi3nh41gmlznp3rkx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0gyi3nh41gmlznp3rkx.jpg" alt="Screenshot of the output automation script" width="800" height="119"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Running the Script on a Schedule with CloudRay
&lt;/h3&gt;

&lt;p&gt;CloudRay also offers &lt;a href="https://cloudray.io/docs/schedules" rel="noopener noreferrer"&gt;Schedules&lt;/a&gt;, allowing you to execute scripts automatically at specific intervals or times.&lt;/p&gt;

&lt;p&gt;To execute this script daily at 8:00 AM without manual effort:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigate to Schedules:&lt;/strong&gt; In your CloudRay dashboard, go to the “Schedules” tab.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgwmt8lgq7iyvcg3wyag.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" width="580" height="152"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Click “Add Schedule”:&lt;/strong&gt; Start creating a new schedule.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6ptkzrpyb3yhyd1hwzp.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" width="800" height="439"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Submit Schedule:&lt;/strong&gt; Click “Submit” to activate your new schedule.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ggn8mtpc0ybdxcgco5x.jpg" alt="Screenshot of the location of enabled schedule" width="800" height="293"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;CloudRay will automatically execute the launch script at the scheduled time, ensuring that your EC2 instance is provisioned every day at 8:00 AM.&lt;/p&gt;
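For reference, a daily 8:00 AM schedule corresponds to the classic five-field cron expression below (shown as a sketch; CloudRay's Schedules UI may present the schedule in its own format):

```shell
#!/bin/sh
# Classic cron notation for "every day at 8:00 AM":
# fields are: minute hour day-of-month month day-of-week
CRON="0 8 * * *"
echo "$CRON"
```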

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Bash scripting provides a lightweight yet powerful way to automate AWS infrastructure tasks like EC2 provisioning, EBS backups, and performance monitoring. By combining these scripts with CloudRay’s scheduling and central management, you can build reliable, automated workflows without complex tooling. Start with the examples provided, customize them for your needs, and explore more automation possibilities.&lt;/p&gt;

&lt;p&gt;Start today by signing up at &lt;a href="https://app.cloudray.io" rel="noopener noreferrer"&gt;https://app.cloudray.io&lt;/a&gt; and manage your Bash scripts from a centralised platform.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automating Web App Deployment with Terraform, GitHub, and CloudRay</title>
      <dc:creator>rising_segun</dc:creator>
      <pubDate>Thu, 10 Apr 2025 10:50:50 +0000</pubDate>
      <link>https://forem.com/cloudray/automating-web-app-deployment-with-terraform-github-and-cloudray-1noj</link>
      <guid>https://forem.com/cloudray/automating-web-app-deployment-with-terraform-github-and-cloudray-1noj</guid>
      <description>&lt;p&gt;&lt;a href="https://app.cloudray.io/" rel="noopener noreferrer"&gt;CloudRay&lt;/a&gt; makes it easy to automate application deployment across your infrastructure. Instead of juggling manual steps or switching between tools, you can integrate CloudRay with &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; and GitHub to build a seamless deployment pipeline.&lt;/p&gt;

&lt;p&gt;In this tutorial, you will learn how to use Terraform to provision infrastructure on DigitalOcean, and then configure CloudRay to automate the deployment and management of a Node.js application. You will also learn how to implement a simple CI/CD pipeline using GitHub Actions and &lt;a href="https://cloudray.io/docs/incoming-webhooks" rel="noopener noreferrer"&gt;CloudRay webhooks&lt;/a&gt;, so every push to your repository automatically triggers a deployment.&lt;/p&gt;

&lt;p&gt;By the end, you’ll have a fully automated system that provisions a server, deploys your web app, and keeps it updated with zero manual effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Provisioning Infrastructure with Terraform&lt;/li&gt;
&lt;li&gt;
Configuring CloudRay for Automated Deployments

&lt;ul&gt;
&lt;li&gt;Creating Deployment Script and CloudRay Webhook&lt;/li&gt;
&lt;li&gt;Integrating CloudRay Webhook in Terraform&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Setting Up Continuous Deployment with GitHub and CloudRay&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin, ensure you have the following set up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform installed: You can install Terraform by following the &lt;a href="https://developer.hashicorp.com/terraform/install" rel="noopener noreferrer"&gt;official instructions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;CloudRay account: Sign up at &lt;a href="https://app.cloudray.io" rel="noopener noreferrer"&gt;https://app.cloudray.io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;DigitalOcean Personal Access Token: Create one via your &lt;a href="https://docs.digitalocean.com/reference/api/create-personal-access-token/" rel="noopener noreferrer"&gt;DigitalOcean control panel&lt;/a&gt;. You will need this to authenticate Terraform with DigitalOcean&lt;/li&gt;
&lt;li&gt;SSH key added to your DigitalOcean account: Create and upload a key using &lt;a href="https://cloudray.io/docs/server-keys" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;. Make note of the name you assign—it will be used in your Terraform configuration&lt;/li&gt;
&lt;li&gt;GitHub repository with a sample Node.js app: For this tutorial, we’ll use a sample Node.js app stored in GitHub. You can fork and &lt;a href="https://github.com/GeoSegun/node-application-cloudray.git" rel="noopener noreferrer"&gt;clone this starter app&lt;/a&gt; or use your own&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Provisioning Infrastructure with Terraform
&lt;/h2&gt;

&lt;p&gt;Terraform lets you define infrastructure as code and supports a wide range of platforms through installable providers. Each provider acts as a bridge between Terraform and the APIs of the service you’re provisioning—like DigitalOcean in our case.&lt;/p&gt;

&lt;p&gt;We will start by using Terraform to provision a virtual machine (droplet) on DigitalOcean. This will serve as the host for our Node.js application.&lt;/p&gt;

&lt;p&gt;First, create a project directory to house your infrastructure configuration files and navigate into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir infra-cloudray &amp;amp;&amp;amp; cd infra-cloudray
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, before running any Terraform command, set the following environment variables to securely pass in the path to your private SSH key and your DigitalOcean API token. Replace the token value with your own token from the DigitalOcean dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export TF_VAR_pvt_key="~/.ssh/id_ed25519"
export TF_VAR_do_token="dop_v1_XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; The &lt;code&gt;TF_VAR_&lt;/code&gt; prefix lets you pass environment variables to Terraform as input variables.&lt;/p&gt;

&lt;p&gt;Create a file named &lt;code&gt;provider.tf&lt;/code&gt; which stores the configuration of the provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano provider.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add the following configuration into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "~&amp;gt; 2.0"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}

data "digitalocean_ssh_key" "my_key" {
  name = "my_key"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file sets up the Terraform provider configuration, which tells Terraform to use the DigitalOcean plugin and specifies your authentication token. Replace &lt;code&gt;my_key&lt;/code&gt; with the exact name you used when uploading your SSH key to DigitalOcean.&lt;/p&gt;

&lt;p&gt;Create the second file named &lt;code&gt;variables.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following configurations into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "do_token" {
  description = "DigitalOcean API token"
  type = string
  sensitive = true
}

variable "pvt_key" {
  description = "Path to the private SSH key"
  type = string
  default = "~/.ssh/id_ed25519"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file declares input variables, including the DigitalOcean API token and your SSH private key path.&lt;/p&gt;

&lt;p&gt;Finally, create the main infrastructure file named &lt;code&gt;www-cloudray.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano www-cloudray.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, add the following configuration inside the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_droplet" "www-cloudray" {
  image = "ubuntu-24-10-x64"
  name = "www-cloudray"
  region = "nyc3"
  size = "s-1vcpu-1gb"
  ssh_keys = [
    data.digitalocean_ssh_key.my_key.id
  ]

  connection {
    host = self.ipv4_address
    user = "root"
    type = "ssh"
    private_key = file(var.pvt_key)
    timeout = "2m"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the main infrastructure file where we define our server. The configuration tells Terraform to create a droplet named “www-cloudray” running Ubuntu 24.10, attach the SSH key you added to DigitalOcean, and connect over SSH for further provisioning.&lt;/p&gt;

&lt;p&gt;Now, initialize the Terraform project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75t2m0brzfwjhohogrve.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75t2m0brzfwjhohogrve.jpg" alt="Screenshot of successful initiallisation" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, run &lt;code&gt;terraform plan&lt;/code&gt; to see the execution plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqgidz5u2ioocgus817m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqgidz5u2ioocgus817m.jpg" alt="Screenshot of terraform plan output" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;+ resource "digitalocean_droplet" "www-cloudray"&lt;/code&gt; line shows that Terraform will create a droplet resource named &lt;code&gt;www-cloudray&lt;/code&gt;. Then, apply the configuration by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When prompted, type &lt;code&gt;yes&lt;/code&gt; to confirm.&lt;/p&gt;

&lt;p&gt;After deployment, you can inspect the created infrastructure and get the server’s IP address using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform show terraform.tfstate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdqip6g69rcpip3ugkwc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdqip6g69rcpip3ugkwc.jpg" alt="Screenshot of showing server details" width="544" height="534"&gt;&lt;/a&gt;&lt;/p&gt;
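As an alternative to reading the state file, you could declare a Terraform output so the droplet's IP is printed after every apply. A minimal sketch (the file name `outputs.tf` is a suggestion, not part of the original setup):

```hcl
# outputs.tf (suggested): print the droplet's public IP after `terraform apply`
output "droplet_ip" {
  value = digitalocean_droplet.www-cloudray.ipv4_address
}
```

You can then run `terraform output droplet_ip` at any time to print it again.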

&lt;p&gt;Now, you can &lt;a href="https://cloudray.io/docs/servers" rel="noopener noreferrer"&gt;add the server to CloudRay&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring CloudRay for Automated Deployments
&lt;/h2&gt;

&lt;p&gt;Now that your DigitalOcean droplet is provisioned and added to CloudRay, let’s automate deployments using CloudRay’s script orchestration. This section covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a deployment script and webhook&lt;/li&gt;
&lt;li&gt;Modifying Terraform to trigger deployment of the application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Deployment Script and CloudRay Webhook
&lt;/h3&gt;

&lt;p&gt;First, create the deployment script by following these steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnm6xmqucb4cl0kuyrek.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnm6xmqucb4cl0kuyrek.jpg" alt="Screenshot of adding a new deployment script" width="754" height="822"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Scripts&lt;/strong&gt; in your CloudRay project&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;New Script&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Name: &lt;code&gt;Deploy App Script&lt;/code&gt;. You can give it any name of your choice&lt;/li&gt;
&lt;li&gt;Copy this code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Update package lists
sudo apt update -y

# Install Nginx
sudo apt update 
sudo apt install -y nginx 

# Exit on error
set -e

# Install Node.js and npm
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs

# Install PM2
sudo npm install -g pm2

# Clean and clone repo
sudo rm -rf "{{app_dir}}"
sudo mkdir -p "{{app_dir}}"
sudo chown -R $USER:$USER "{{app_dir}}"
git clone "{{repo_url}}" "{{app_dir}}"

# Install dependencies
cd "{{app_dir}}/app"
npm install

# Start application
pm2 start server.js --name nodeapp
pm2 save
pm2 startup

sudo bash -c "cat &amp;gt; /etc/nginx/sites-available/nodeapp" &amp;lt;&amp;lt;EOF
server {
    listen 80;
    server_name {{domain}} www.{{domain}};
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host \$host;
        proxy_cache_bypass \$http_upgrade;
    }
}
EOF

# Enable config
sudo ln -sf /etc/nginx/sites-available/nodeapp /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx

# SSL with Certbot
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d {{domain}} -d www.{{domain}} --email {{email}} --agree-tos --non-interactive
sudo systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a breakdown of what each command in the &lt;code&gt;Deploy App Script&lt;/code&gt; does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sets up the web server and runtime environment&lt;/li&gt;
&lt;li&gt;Clones your Node.js repository and installs dependencies&lt;/li&gt;
&lt;li&gt;Runs app as a managed background service&lt;/li&gt;
&lt;li&gt;Routes web traffic to your Node.js app&lt;/li&gt;
&lt;li&gt;Automatically provisions Let’s Encrypt SSL certificates&lt;/li&gt;
&lt;/ul&gt;
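One subtle point in the NGINX heredoc above: the `EOF` delimiter is unquoted, so the shell expands `$variables` while writing the file, and NGINX's own variables therefore have to be escaped as `\$host`, `\$http_upgrade`, and so on to reach the config literally. A minimal demonstration:

```shell
#!/bin/sh
# With an unquoted heredoc the shell expands $vars at write time;
# escaping with \$ preserves the literal text for NGINX to read.
host="would-be-expanded"
rendered=$(cat <<EOF
unescaped: $host
escaped: \$host
EOF
)
echo "$rendered"
```

The alternative is to quote the delimiter (`<<'EOF'`), which disables expansion entirely; since the `{{domain}}` placeholders are substituted by Liquid before the shell ever runs, either style works here.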

&lt;p&gt;Before using the script, you need to define values for the placeholders &lt;code&gt;{{app_dir}}&lt;/code&gt;, &lt;code&gt;{{repo_url}}&lt;/code&gt;, &lt;code&gt;{{domain}}&lt;/code&gt;, and &lt;code&gt;{{email}}&lt;/code&gt; used in the script. CloudRay processes all scripts as &lt;a href="https://shopify.github.io/liquid/" rel="noopener noreferrer"&gt;Liquid templates&lt;/a&gt;, which lets you reuse the same script with different variable values across servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sg1z23c3g0l7wndlgd3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sg1z23c3g0l7wndlgd3.jpg" alt="Screenshot of adding a new variable group" width="780" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure that these values are automatically substituted when the script runs, follow these steps to create a Variable Group:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigate to Variable Groups:&lt;/strong&gt; In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a new Variable Group:&lt;/strong&gt; Click on “Variable Group”.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Add the following variables:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;app_dir&lt;/code&gt;:&lt;/strong&gt; The application install path&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;repo_url&lt;/code&gt;:&lt;/strong&gt; The GitHub repository URL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;domain&lt;/code&gt;:&lt;/strong&gt; The registered domain name for SSL certificate configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;email&lt;/code&gt;:&lt;/strong&gt; The email address associated with the SSL certificate (used for renewal alerts)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that the script and variables are set up, proceed to creating a webhook for the deployment script.&lt;/p&gt;

&lt;p&gt;To create an &lt;a href="https://cloudray.io/docs/incoming-webhooks" rel="noopener noreferrer"&gt;Incoming Webhook&lt;/a&gt; in CloudRay follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In CloudRay, navigate to “Incoming Webhooks”&lt;/li&gt;
&lt;li&gt;Click on “New Incoming Webhook” and fill in the details to create the webhook
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot0v5v922f3cjpdwqktg.jpg" alt="Screenshot of creating a webhook" width="762" height="612"&gt;
&lt;/li&gt;
&lt;li&gt;Click “Create New Webhook”. This will generate a unique URL for your webhook
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot0v5v922f3cjpdwqktg.jpg" alt="Screenshot of created webhook" width="762" height="612"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that your deployment script and CloudRay webhook are ready, the final step is to make sure Terraform can trigger that webhook.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrating CloudRay Webhook in Terraform
&lt;/h3&gt;

&lt;p&gt;We can update our Terraform configuration to trigger the CloudRay webhook after the droplet is successfully provisioned. Instead of using the &lt;code&gt;remote-exec&lt;/code&gt; provisioner, we use a &lt;code&gt;null_resource&lt;/code&gt; block with a &lt;code&gt;local-exec&lt;/code&gt; provisioner, which runs the webhook call locally from the machine running Terraform.&lt;/p&gt;

&lt;p&gt;Update your &lt;code&gt;www-cloudray.tf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano www-cloudray.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following &lt;code&gt;null_resource&lt;/code&gt; configuration outside the droplet block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
resource "null_resource" "trigger_cloudray_webhook" {
  provisioner "local-exec" {
    command = &amp;lt;&amp;lt;EOT
      curl -X POST \
        https://api.cloudray.io/w/58b3b37d-a77c-4b22-a6bc-ef3e2af8af34 \
        -H "Content-Type: application/json" \
        -d '{"test": false}'
    EOT
  }

  depends_on = [digitalocean_droplet.www-cloudray]
}
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Terraform to trigger the CloudRay webhook only after the &lt;code&gt;www-cloudray&lt;/code&gt; droplet has been successfully created.&lt;/p&gt;

&lt;p&gt;Your updated &lt;code&gt;www-cloudray.tf&lt;/code&gt; file should now look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2a0o4yi41jw5483om57.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2a0o4yi41jw5483om57.jpg" alt="Screenshot of new wwww-cloudray.tf file" width="630" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach ensures that the webhook is triggered reliably during the provisioning process without the need for SSH access into the droplet.&lt;/p&gt;
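One caveat: a `null_resource` runs its provisioner only when the resource itself is created, so the webhook will not fire again on later applies unless something forces a re-run. If you want it to fire whenever the droplet is replaced, a `triggers` argument can tie it to the droplet's ID (a sketch, not part of the original configuration):

```hcl
resource "null_resource" "trigger_cloudray_webhook" {
  # Re-run the local-exec provisioner whenever the droplet is replaced
  triggers = {
    droplet_id = digitalocean_droplet.www-cloudray.id
  }

  # ... local-exec provisioner and depends_on as shown above ...
}
```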

&lt;p&gt;Before applying your changes, reinitialize your Terraform project and upgrade any modules or providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init -upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then apply the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once &lt;code&gt;terraform apply&lt;/code&gt; completes successfully, head over to your &lt;a href="https://cloudray.io/docs/runlogs" rel="noopener noreferrer"&gt;CloudRay Runlog&lt;/a&gt; to confirm that the webhook was received and the deployment job ran successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ly0lgyi597lmj9v67gd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ly0lgyi597lmj9v67gd.jpg" alt="Screenshot of successful Runlog" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open your browser and navigate to your configured domain (e.g., &lt;a href="https://www.mydomain.com" rel="noopener noreferrer"&gt;https://www.mydomain.com&lt;/a&gt;). You should now see your application live and running on the newly provisioned droplet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy6pjohlydgoxdg6jqwr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy6pjohlydgoxdg6jqwr.jpg" alt="Screenshot of successful Runlog" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Continuous Deployment with GitHub and CloudRay
&lt;/h2&gt;

&lt;p&gt;To automate your deployment pipeline, we’ll integrate GitHub Actions with CloudRay so that any push to the &lt;code&gt;main&lt;/code&gt; branch triggers a webhook, which then updates your server with the latest code.&lt;/p&gt;

&lt;p&gt;First, let’s create the script on CloudRay that updates the application. You can follow a similar process as above and use this code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsyt0u7kdlcdyuo2tu3m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsyt0u7kdlcdyuo2tu3m.jpg" alt="Screenshot of setup CI" width="706" height="826"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e # Exit immediately if any command fails

# Navigate to project root
cd /var/www/nodeapp

echo "➡️ Pulling latest changes..."
git pull origin main

# Move to app directory
cd app

echo "📦 Installing dependencies..."
npm install

echo "🔄 Restarting application..."
pm2 restart nodeapp || (pm2 delete nodeapp &amp;amp;&amp;amp; pm2 start server.js --name nodeapp)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a webhook following the same steps as earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft977lpxlv4fdm7iqgf6m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft977lpxlv4fdm7iqgf6m.jpg" alt="Screenshot of setup CI" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then go to the project repository (the Node.js application) and create the workflow directory and file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p .github/workflows
touch .github/workflows/deploy.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add the following content to &lt;code&gt;.github/workflows/deploy.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy to CloudRay

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger CloudRay Webhook
        env:
          WEBHOOK_URL: ${{ secrets.CLOUDRAY_WEBHOOK }}
        run: |
          curl -X POST "$WEBHOOK_URL" \
            -H "Content-Type: application/json" \
            -d '{"test": false}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
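&lt;p&gt;Before relying on the workflow, you can fire the webhook manually from your terminal to confirm CloudRay receives it. This is a sketch: &lt;code&gt;$WEBHOOK_URL&lt;/code&gt; stands in for your real CloudRay webhook URL, and the JSON payload mirrors what the workflow sends:&lt;/p&gt;

```shell
# Manually trigger the CloudRay webhook to verify it works end to end.
# WEBHOOK_URL is a placeholder for your actual webhook URL.
curl -X POST "$WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d '{"test": true}'
```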



&lt;p&gt;To add your CloudRay webhook to GitHub secrets, follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlv7zpyugpwlcje37q7v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlv7zpyugpwlcje37q7v.jpg" alt="Screenshot of first addition of GitHub secret step" width="800" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to your GitHub repo Settings → Secrets → Actions&lt;/li&gt;
&lt;li&gt;Click “New repository secret”
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkfph5zpm0ycg0vmvpi2.jpg" alt="Screenshot of first addition of GitHub secret step" width="800" height="359"&gt;
&lt;/li&gt;
&lt;li&gt;Name: CLOUDRAY_WEBHOOK&lt;/li&gt;
&lt;li&gt;Value: Paste your CloudRay webhook URL&lt;/li&gt;
&lt;li&gt;Click “Add secret”&lt;/li&gt;
&lt;/ul&gt;
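&lt;p&gt;If you prefer the terminal, the same secret can be created with the GitHub CLI. This assumes &lt;code&gt;gh&lt;/code&gt; is installed and authenticated, and the URL shown is a placeholder for your real webhook URL:&lt;/p&gt;

```shell
# Create the repository secret from the command line instead of the web UI.
# Run inside a clone of the repository; the URL below is a placeholder.
gh secret set CLOUDRAY_WEBHOOK --body "https://app.cloudray.io/webhooks/your-webhook-id"
```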

&lt;p&gt;Finally, let’s test the pipeline. Make a change to the application (modify the &lt;code&gt;index.html&lt;/code&gt; file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
&amp;lt;head&amp;gt;
    &amp;lt;meta charset="UTF-8"&amp;gt;
    &amp;lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&amp;gt;
    &amp;lt;title&amp;gt;Welcome&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
    &amp;lt;h1&amp;gt;Greetings from CloudRay&amp;lt;/h1&amp;gt;
    &amp;lt;p&amp;gt;This update was deployed automatically via GitHub → CloudRay 🎉&amp;lt;/p&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Commit and push to main:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -m "Test CI/CD pipeline"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch the process: GitHub Actions will show the workflow running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8pivhtf6kqktvp1vuj6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8pivhtf6kqktvp1vuj6.jpg" alt="Screenshot of first addition of GitHub secret step" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, CloudRay will display the script execution in Run Logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0lu4v12kqje9c5f0hd2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0lu4v12kqje9c5f0hd2.jpg" alt="Screenshot of first addition of GitHub secret step" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, visit your domain to see changes live within seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmin8dgpbyp26sor4xpi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmin8dgpbyp26sor4xpi.jpg" alt="Screenshot of first addition of GitHub secret step" width="694" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By following this guide, you’ve successfully built a complete infrastructure automation and CI/CD pipeline that combines Terraform, DigitalOcean, CloudRay, and GitHub Actions.&lt;/p&gt;

&lt;p&gt;The entire process from server creation to application updates now happens automatically whenever you push code changes, giving you more time to focus on development rather than deployment tasks.&lt;/p&gt;

&lt;p&gt;Ready to streamline your own deployment workflow? &lt;a href="https://app.cloudray.io" rel="noopener noreferrer"&gt;Sign up for CloudRay today&lt;/a&gt; and experience the power of infrastructure automation firsthand.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.cloudray.io/f/auth/sign-up" rel="noopener noreferrer"&gt;Get Started with CloudRay&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploy a Laravel Application using CloudRay</title>
      <dc:creator>rising_segun</dc:creator>
      <pubDate>Thu, 03 Apr 2025 12:34:35 +0000</pubDate>
      <link>https://forem.com/cloudray/deploy-a-laravel-application-using-cloudray-2f85</link>
      <guid>https://forem.com/cloudray/deploy-a-laravel-application-using-cloudray-2f85</guid>
      <description>&lt;p&gt;&lt;a href="https://app.cloudray.io/" rel="noopener noreferrer"&gt;CloudRay&lt;/a&gt; simplifies infrastructure deployment through automation, making it an ideal choice for managing Laravel applications with Caddy. It automates the entire deployment process, reducing manual effort and ensuring a seamless, repeatable setup.&lt;/p&gt;

&lt;p&gt;In this guide, you will learn the process of deploying a Laravel application with Caddy using CloudRay. You will learn how to create a detailed automation script for setting up the system, installing dependencies, deploying Laravel, and configuring Caddy as the web server. Caddy simplifies the process by automatically handling HTTPS (SSL/TLS) with Let’s Encrypt, ensuring your application is secure by default.&lt;/p&gt;

&lt;p&gt;By the end of this guide, you will have a fully functional Laravel application hosted on an optimised server environment with automatic SSL support provided by Caddy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Assumptions&lt;/li&gt;
&lt;li&gt;
Create the Automation Script

&lt;ul&gt;
&lt;li&gt;System Setup Script&lt;/li&gt;
&lt;li&gt;Install Composer and Database Setup Script&lt;/li&gt;
&lt;li&gt;Laravel Deployment Script&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Create a Variable Group&lt;/li&gt;

&lt;li&gt;Running the Script with CloudRay&lt;/li&gt;

&lt;li&gt;Troubleshooting&lt;/li&gt;

&lt;li&gt;Related Guides&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before getting started, make sure you have the following prerequisites in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A CloudRay account&lt;/strong&gt; at &lt;a href="https://app.cloudray.io/" rel="noopener noreferrer"&gt;https://app.cloudray.io/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A cloud server accessible via SSH:&lt;/strong&gt; If you don’t already have a cloud server, you can get one from popular providers like &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;, &lt;a href="https://www.digitalocean.com/" rel="noopener noreferrer"&gt;DigitalOcean&lt;/a&gt;, and &lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt;. Ensure the server has at least 4GB of RAM and a minimum of 20GB of SSD storage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH credentials:&lt;/strong&gt; Ensure you have access to the necessary SSH keys or login credentials to access your server. If you don’t have an SSH key set up, follow &lt;a href="https://dev.to/docs/server-keys"&gt;this guide&lt;/a&gt; to create one and add it to CloudRay&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The servers are added to CloudRay:&lt;/strong&gt; Before proceeding, make sure your server is connected to CloudRay. If you haven’t done this yet, follow &lt;a href="https://cloudray.io/docs/servers" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; to add and manage your server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific deployment needs and environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assumptions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This guide assumes you’re using &lt;strong&gt;Rocky Linux 9&lt;/strong&gt; as your server’s operating system. If you’re using a different version or a different distribution, adjust the commands accordingly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create the Automation Script
&lt;/h2&gt;

&lt;p&gt;To streamline the deployment process, you will use three automation scripts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;System Setup Script:&lt;/strong&gt; Installs the system packages and PHP dependencies the Laravel application needs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install Composer and Database Setup Script:&lt;/strong&gt; Installs Composer and sets up the Laravel database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Laravel Deployment Script:&lt;/strong&gt; Automates cloning, configuring, and deploying the Laravel app with Caddy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s begin with the System Setup Script.&lt;/p&gt;

&lt;h3&gt;
  
  
  System Setup Script
&lt;/h3&gt;

&lt;p&gt;To create the System Setup Script, you need to follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k52spzdltm1hs8h9nia.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k52spzdltm1hs8h9nia.jpg" alt="Screenshot of adding a new system setup script" width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Scripts&lt;/strong&gt; in your CloudRay project&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;New Script&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Name: &lt;code&gt;System Setup Script&lt;/code&gt;. You can give it any name of your choice&lt;/li&gt;
&lt;li&gt;Copy this code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# exit on error
set -e

# Update system
sudo dnf update -y

# Install required software
sudo dnf install -y mariadb-server php php-fpm php-common php-xml php-mbstring php-json php-zip php-mysqlnd curl unzip nano

# Start and enable services
sudo systemctl start mariadb php-fpm
sudo systemctl enable mariadb php-fpm

# Configure PHP-FPM
sudo sed -i 's/^;listen.owner =.*/listen.owner = www-data/' /etc/php-fpm.d/www.conf
sudo sed -i 's/^;listen.group =.*/listen.group = www-data/' /etc/php-fpm.d/www.conf

sudo systemctl restart php-fpm

# Set SELinux to permissive mode
sudo setenforce 0

# Add /usr/local/bin to secure path in sudoers
sudo sed -i '/Defaults\s\+secure_path = /s|\(.*\)|\1:/usr/local/bin|' /etc/sudoers

echo "System setup complete!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a breakdown of what each command in the &lt;code&gt;System Setup Script&lt;/code&gt; does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updates the system to the latest packages&lt;/li&gt;
&lt;li&gt;Installs MariaDB, PHP, and dependencies required for Laravel&lt;/li&gt;
&lt;li&gt;Starts and enables services to launch on boot&lt;/li&gt;
&lt;li&gt;Configures PHP-FPM to use www-data as the owner&lt;/li&gt;
&lt;li&gt;Configures SELinux to be permissive (optional, for easier troubleshooting)&lt;/li&gt;
&lt;li&gt;Adds &lt;code&gt;/usr/local/bin&lt;/code&gt; to the secure path for sudo commands&lt;/li&gt;
&lt;/ul&gt;
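&lt;p&gt;After the script finishes, a few quick checks run manually on the server can confirm the setup worked. These are optional verification commands, not part of the script itself:&lt;/p&gt;

```shell
# Confirm PHP is installed and both services are running
php -v
sudo systemctl is-active mariadb php-fpm   # both lines should read "active"
# Confirm SELinux is now permissive
getenforce                                 # should print "Permissive"
```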

&lt;h3&gt;
  
  
  Install Composer and Database Setup Script
&lt;/h3&gt;

&lt;p&gt;Next, install Composer and set up the database for the Laravel application. To do so, follow similar steps to the above:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop7s9dc27x1q4xw3iixd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop7s9dc27x1q4xw3iixd.jpg" alt="Screenshot of installing composer and database setup" width="708" height="822"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Scripts&lt;/strong&gt; &amp;gt; &lt;strong&gt;New Script&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Name: &lt;code&gt;Install Composer and Database Setup Script&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Exit on error
set -e

# Install Composer
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
sudo chmod +x /usr/local/bin/composer

# Verify Composer installation
composer --version

# Configure MySQL database
sudo mysql &amp;lt;&amp;lt;EOF
CREATE DATABASE {{db_name}};
CREATE USER '{{db_user}}'@'localhost' IDENTIFIED BY '{{db_pass}}';
GRANT ALL PRIVILEGES ON {{db_name}}.* TO '{{db_user}}'@'localhost';
FLUSH PRIVILEGES;
EOF

echo "Composer and MySQL setup complete!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what the &lt;code&gt;Install Composer and Database Setup Script&lt;/code&gt; does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installs Composer, a dependency manager for PHP&lt;/li&gt;
&lt;li&gt;Verifies Composer installation to ensure it’s available system-wide&lt;/li&gt;
&lt;li&gt;Configures MariaDB database by creating a database, user, and granting permissions&lt;/li&gt;
&lt;/ul&gt;
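&lt;p&gt;To double-check this step, you can run these optional commands on the server after the script completes. The &lt;code&gt;{{db_name}}&lt;/code&gt; and &lt;code&gt;{{db_user}}&lt;/code&gt; placeholders stand for the values you will define in your variable group later:&lt;/p&gt;

```shell
# Composer should be available system-wide
composer --version
# The database and user created by the heredoc should now exist
sudo mysql -e "SHOW DATABASES LIKE '{{db_name}}';"
sudo mysql -e "SELECT user, host FROM mysql.user WHERE user = '{{db_user}}';"
```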

&lt;h3&gt;
  
  
  Laravel Deployment Script
&lt;/h3&gt;

&lt;p&gt;The final script automates the cloning, configuration, and deployment of your Laravel application with Caddy. This script will handle the deployment process, ensuring your application is ready to serve traffic.&lt;/p&gt;

&lt;p&gt;To create the Laravel Deployment Script, follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1mczz3jv4t5sb2garhf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1mczz3jv4t5sb2garhf.jpg" alt="Screenshot of deploying Laravel application" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Scripts&lt;/strong&gt; &amp;gt; &lt;strong&gt;New Script&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Name: &lt;code&gt;Laravel Deployment Script&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Exit on error
set -e

# install git
sudo dnf install git -y

# Install Caddy
sudo dnf install 'dnf-command(copr)' -y
sudo dnf copr enable @caddy/caddy -y
sudo dnf install caddy -y

# Start and enable Caddy
sudo systemctl start caddy
sudo systemctl enable caddy

# Clone Laravel project from GitHub
if [ ! -d "/var/www/html/{{repo_name}}" ]; then
    echo "Cloning Laravel repository from GitHub..."
    sudo git clone https://{{github_access_token}}@github.com/{{github_user}}/{{repo_name}}.git /var/www/html/{{repo_name}}
else
    echo "Repository already exists. Fetching latest changes..."
    cd /var/www/html/{{repo_name}}
    sudo git fetch --all
    echo "Resetting to the latest version of the main branch..."
    sudo git reset --hard origin/main
fi

# Set correct permissions
sudo chown -R www-data:www-data /var/www/html/{{repo_name}}/storage
sudo chown -R www-data:www-data /var/www/html/{{repo_name}}/bootstrap/cache
sudo chmod -R 775 /var/www/html/{{repo_name}}/storage
sudo chmod -R 775 /var/www/html/{{repo_name}}/bootstrap/cache

# Update Laravel environment file
if [ -f "/var/www/html/{{repo_name}}/.env" ]; then
    echo "Updating existing .env file..."
    # Update existing variables
    sudo sed -i "s|^APP_URL=.*|APP_URL={{domain}}|" /var/www/html/{{repo_name}}/.env
    sudo sed -i "s|^DB_CONNECTION=.*|DB_CONNECTION=mysql|" /var/www/html/{{repo_name}}/.env
    sudo sed -i "s|^DB_HOST=.*|DB_HOST=127.0.0.1|" /var/www/html/{{repo_name}}/.env
    sudo sed -i "s|^DB_PORT=.*|DB_PORT=3306|" /var/www/html/{{repo_name}}/.env
    sudo sed -i "s|^DB_DATABASE=.*|DB_DATABASE={{db_name}}|" /var/www/html/{{repo_name}}/.env
    sudo sed -i "s|^DB_USERNAME=.*|DB_USERNAME={{db_user}}|" /var/www/html/{{repo_name}}/.env
    sudo sed -i "s|^DB_PASSWORD=.*|DB_PASSWORD={{db_pass}}|" /var/www/html/{{repo_name}}/.env
else
    echo "Creating new .env file..."
    cat &amp;lt;&amp;lt;EOL | sudo tee /var/www/html/{{repo_name}}/.env
APP_URL={{domain}}
APP_KEY=

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE={{db_name}}
DB_USERNAME={{db_user}}
DB_PASSWORD={{db_pass}}
EOL
fi

# Install PHP dependencies and run Laravel setup commands
cd /var/www/html/{{repo_name}}
sudo composer install --no-dev --optimize-autoloader
sudo php artisan key:generate
sudo php artisan migrate --force

# Configure Caddy
cat &amp;lt;&amp;lt;EOL | sudo tee /etc/caddy/Caddyfile
{{domain}} {
    root * /var/www/html/{{repo_name}}/public # Serve files from the Laravel public directory
    php_fastcgi unix//run/php-fpm/www.sock # Pass PHP requests to PHP-FPM
    file_server # Serve static files
}
EOL

# Restart Caddy to load the new Caddyfile
sudo systemctl restart caddy

# Install Firewall &amp;amp; Configure Rules
sudo dnf install -y firewalld
sudo systemctl start firewalld
sudo systemctl enable firewalld
sudo firewall-cmd --zone=public --permanent --add-service=http
sudo firewall-cmd --zone=public --permanent --add-service=https
sudo firewall-cmd --reload

echo "Laravel deployment and SSL setup complete!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what the &lt;code&gt;Laravel Deployment Script&lt;/code&gt; does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installs Git for repository cloning and Caddy as the web server&lt;/li&gt;
&lt;li&gt;Clones the Laravel project or updates it if it already exists&lt;/li&gt;
&lt;li&gt;Sets proper permissions for the storage and cache directories&lt;/li&gt;
&lt;li&gt;Updates or creates the &lt;code&gt;.env&lt;/code&gt; file with the application URL and database credentials&lt;/li&gt;
&lt;li&gt;Runs Laravel key generation and migrations to prepare the application&lt;/li&gt;
&lt;li&gt;Configures Caddy to serve the application and opens HTTP/HTTPS in the firewall&lt;/li&gt;
&lt;/ul&gt;
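&lt;p&gt;Once this script has run, a quick smoke test can confirm Caddy is serving the app. These commands are illustrative, with &lt;code&gt;myapp.com&lt;/code&gt; standing in for your own domain:&lt;/p&gt;

```shell
# From your local machine: expect an HTTP 200 (or a redirect to HTTPS)
curl -I https://myapp.com
# On the server: recent Caddy logs, useful if the check above fails
sudo journalctl -u caddy -n 20 --no-pager
```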

&lt;h2&gt;
  
  
  Create a Variable Group
&lt;/h2&gt;

&lt;p&gt;Now, before running the scripts, you need to define values for the placeholders &lt;code&gt;{{db_name}}&lt;/code&gt;, &lt;code&gt;{{db_user}}&lt;/code&gt;, &lt;code&gt;{{db_pass}}&lt;/code&gt;, &lt;code&gt;{{github_access_token}}&lt;/code&gt;, &lt;code&gt;{{github_user}}&lt;/code&gt;, &lt;code&gt;{{repo_name}}&lt;/code&gt;, and &lt;code&gt;{{domain}}&lt;/code&gt; used in the scripts. CloudRay processes all scripts as &lt;a href="https://shopify.github.io/liquid/" rel="noopener noreferrer"&gt;Liquid templates&lt;/a&gt;. This allows you to use variables dynamically across different servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froyvhxwpkyf8m7kvzxtv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froyvhxwpkyf8m7kvzxtv.jpg" alt="Screenshot of adding a new variable group" width="746" height="696"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure that these values are automatically substituted when the script runs, follow these steps to create a Variable Group:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigate to Variable Groups:&lt;/strong&gt; In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a new Variable Group:&lt;/strong&gt; Click on “Variable Group”.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Add the following variables:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;db_name&lt;/code&gt;:&lt;/strong&gt; The name of the database for the Laravel application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;db_user&lt;/code&gt;:&lt;/strong&gt; Database user for the Laravel application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;db_pass&lt;/code&gt;:&lt;/strong&gt; Password for the database user&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;github_access_token&lt;/code&gt;:&lt;/strong&gt; This is your GitHub personal access token for cloning the repository&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;github_user&lt;/code&gt;:&lt;/strong&gt; This is your GitHub Username&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;repo_name&lt;/code&gt;:&lt;/strong&gt; This is the name of the GitHub repository&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;domain&lt;/code&gt;:&lt;/strong&gt; Domain name for the Laravel application e.g., &lt;code&gt;myapp.com&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
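&lt;p&gt;To make the substitution concrete, here is a sketch of how one Liquid placeholder is rendered before execution. The value shown is a hypothetical example, not a default:&lt;/p&gt;

```shell
# With db_name set to "laravel_db" in the variable group, CloudRay renders
#   CREATE DATABASE {{db_name}};
# into the line below before the script reaches the server:
echo "CREATE DATABASE laravel_db;"
```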

&lt;p&gt;With the variables set up, you can proceed to run the scripts with CloudRay.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the Script with CloudRay
&lt;/h2&gt;

&lt;p&gt;Now that everything is set up, you can use CloudRay to automate the deployment of your Laravel application.&lt;/p&gt;

&lt;p&gt;You can choose to run the scripts individually or execute them all at once using &lt;a href="https://cloudray.io/docs/script-playlists" rel="noopener noreferrer"&gt;CloudRay’s Script Playlists&lt;/a&gt;. Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.&lt;/p&gt;

&lt;p&gt;Here are the steps to follow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigate to “Script Playlists”:&lt;/strong&gt; Click on the Scripts tab in the CloudRay interface
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrlryxkjcc4ct3xpud39.jpg" alt="Locate the script playlist in CloudRay interface" width="512" height="186"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Click “Add Script Playlist”:&lt;/strong&gt; This initiates the creation of a new playlist&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provide a Name:&lt;/strong&gt; Give your playlist a unique name (For example “Automate Deployment and Management of Laravel Application”)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add Scripts in Order:&lt;/strong&gt; Select and add the scripts sequentially
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lflj9s5wt2i7vedav32.jpg" alt="Locate the script playlist in CloudRay interface" width="600" height="498"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Save the Playlist:&lt;/strong&gt; Click “create playlist” to store your new playlist.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once your script playlist is created, proceed with execution:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigate to Runlogs:&lt;/strong&gt; In your CloudRay project, go to the Runlogs section in the top menu&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a New Runlog:&lt;/strong&gt; Click on New Runlog&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure the Runlog:&lt;/strong&gt; Provide the necessary details:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb04qdcyzpgtecsakvwz.jpg" alt="Screenshot of creating a new runlog" width="638" height="432"&gt;
&lt;/li&gt;
&lt;li&gt;Server: Select the server where your Laravel application will be installed&lt;/li&gt;
&lt;li&gt;Script Playlist: Choose the playlist you created (For example “Automate Deployment and Management of Laravel Application”)&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Variable Group: Select the variable group you set up earlier&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Execute the Script:&lt;/strong&gt; Click on &lt;strong&gt;Run Now&lt;/strong&gt; to start the execution&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsvm54mgxijxct0xqrh8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsvm54mgxijxct0xqrh8.jpg" alt="Screenshot of the result of all the script from the script playlist" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your Laravel application is now deployed and managed with CloudRay. You can access it by visiting &lt;code&gt;https://myapp.com&lt;/code&gt; (replace &lt;code&gt;myapp.com&lt;/code&gt; with your own domain). That’s it! Happy deploying!&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;If you encounter issues during deployment, consider the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Caddy Fails to Start:&lt;/strong&gt; Check the Caddy service status with &lt;code&gt;sudo systemctl status caddy&lt;/code&gt; and restart it using &lt;code&gt;sudo systemctl restart caddy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PHP-FPM Not Working:&lt;/strong&gt; Ensure PHP-FPM is running with &lt;code&gt;sudo systemctl status php-fpm&lt;/code&gt; and restart it using &lt;code&gt;sudo systemctl restart php-fpm&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSL Certificate Not Issued:&lt;/strong&gt; Verify your domain’s DNS records and ensure ports 80 and 443 are open in the firewall.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Laravel Application Not Loading:&lt;/strong&gt; Check the &lt;code&gt;.env&lt;/code&gt; file for correct database credentials and ensure the storage and &lt;code&gt;bootstrap/cache&lt;/code&gt; directories have the correct permissions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Connection Issues:&lt;/strong&gt; Verify the database credentials in the &lt;code&gt;.env&lt;/code&gt; file and ensure MariaDB is running with &lt;code&gt;sudo systemctl status mariadb&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the issue persists, consult the &lt;a href="https://caddyserver.com/docs/" rel="noopener noreferrer"&gt;Caddy Documentation&lt;/a&gt; or the &lt;a href="https://laravel.com/docs/12.x/readme" rel="noopener noreferrer"&gt;Laravel Documentation&lt;/a&gt; for further assistance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related Guides
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/articles/deploy-express"&gt;Deploy Node &amp;amp; Express&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/articles/deploy-ruby-on-rails"&gt;Deploy Ruby on rails app&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/articles/deploy-phpmyadmin"&gt;How to Deploy phpMyAdmin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/articles/deploy-jenkins-with-docker-compose"&gt;Deploy Jenkins with Docker Compose&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://app.cloudray.io/f/auth/sign-up" rel="noopener noreferrer"&gt;Get Started with CloudRay&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Best Way to Install Docker on Kali Linux</title>
      <dc:creator>rising_segun</dc:creator>
      <pubDate>Wed, 29 Nov 2023 20:34:40 +0000</pubDate>
      <link>https://forem.com/geosegun/best-way-to-install-docker-on-kali-linux-3103</link>
      <guid>https://forem.com/geosegun/best-way-to-install-docker-on-kali-linux-3103</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1v6ivii3ea4dpra5xzla.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1v6ivii3ea4dpra5xzla.jpg" alt="output" width="427" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  INTRODUCTION
&lt;/h2&gt;

&lt;p&gt;Are you finding it difficult to install Docker on your Linux machine?&lt;br&gt;
Docker is a powerful platform that enables developers to build, ship, and run applications in containers. Installing Docker on Kali Linux is a straightforward process, and in this guide, we'll walk you through each step, explaining the commands along the way. Let's get started.&lt;/p&gt;
&lt;h2&gt;
  
  
  PREREQUISITES
&lt;/h2&gt;

&lt;p&gt;Before installing Docker, make sure you have the following prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A Kali Linux machine (ensure it is up to date)&lt;/li&gt;
&lt;li&gt;Access to the terminal with sudo privileges&lt;/li&gt;
&lt;li&gt;A basic understanding of the command line&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To install Docker on your Kali Linux machine, follow these steps:&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Update Package Lists
&lt;/h2&gt;

&lt;p&gt;Before installing Docker, it is important to update your package lists. To do this, open your terminal and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mjbxg30udew5bs6s1uk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mjbxg30udew5bs6s1uk.png" alt="output" width="682" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Install Docker
&lt;/h2&gt;

&lt;p&gt;Once the package lists are updated, install Docker using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install docker.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gtzbzza38mt1mtmo1zy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gtzbzza38mt1mtmo1zy.png" alt="output" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Enable and Start Docker Services
&lt;/h2&gt;

&lt;p&gt;Docker requires a background service (the Docker daemon) to be running. This step enables and starts the Docker service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable docker --now
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybxhwlmwrge1mhf8y612.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybxhwlmwrge1mhf8y612.png" alt="output" width="800" height="75"&gt;&lt;/a&gt;&lt;br&gt;
This command not only enables Docker to start on boot but also starts the service immediately.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Check Docker Service Status
&lt;/h2&gt;

&lt;p&gt;To confirm Docker is up and running, check the status of the service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbyd7oliaj3kvri4qnea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbyd7oliaj3kvri4qnea.png" alt="output" width="800" height="396"&gt;&lt;/a&gt;&lt;br&gt;
This command provides detailed information about the Docker service, including its current status. Look for &lt;strong&gt;active (running)&lt;/strong&gt; to confirm that Docker is running.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 5: Add Your User to the Docker Group
&lt;/h2&gt;

&lt;p&gt;To avoid typing &lt;code&gt;sudo&lt;/code&gt; each time you want to use Docker, add your user to the &lt;code&gt;docker&lt;/code&gt; group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step grants your user the necessary permissions to interact with the Docker daemon.&lt;/p&gt;
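&lt;p&gt;As a quick sanity check (a sketch, not an official step), you can confirm the group change after your next login by listing your groups with &lt;code&gt;id -nG&lt;/code&gt;. The sample output line below is hypothetical:&lt;/p&gt;

```shell
# Check a groups listing for "docker". The sample line stands in for
# the real output of `id -nG "$USER"` on your machine.
groups_line='kali adm cdrom sudo docker'
if echo "$groups_line" | tr ' ' '\n' | grep -qx docker; then
  echo "docker group: yes"
else
  echo "docker group: no"
fi
```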

&lt;h2&gt;
  
  
  Step 6: Restart Your System
&lt;/h2&gt;

&lt;p&gt;Group membership changes only take effect in a new login session. Sign out and sign back in, or restart your system with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Verify Docker Installation
&lt;/h2&gt;

&lt;p&gt;Confirm that Docker is successfully installed by checking its version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ck15zhyk2lbmvpjixi7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ck15zhyk2lbmvpjixi7.png" alt="output" width="386" height="132"&gt;&lt;/a&gt;&lt;br&gt;
This command displays the installed Docker version, confirming that the installation was successful.&lt;/p&gt;
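&lt;p&gt;If you later want to use the version in a script, you can extract just the number from the &lt;code&gt;docker --version&lt;/code&gt; output. This is a small sketch; the sample version line below is illustrative and your installed version will differ:&lt;/p&gt;

```shell
# Extract the bare version number from a typical `docker --version` line.
version_line='Docker version 20.10.24, build 297e128'
echo "$version_line" | sed -E 's/^Docker version ([0-9.]+).*/\1/'
```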

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;Congratulations on successfully navigating through the installation of Docker on your Kali Linux machine! Docker brings a new level of flexibility and efficiency to your development and deployment workflows. With containers, you can ensure consistency across different environments, making your life as a developer much smoother. Feel free to explore Docker's vast capabilities and revolutionize the way you package and deploy applications.&lt;br&gt;
Happy coding!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>docker</category>
      <category>linux</category>
    </item>
    <item>
      <title>A Beginner's Guide: Installing MongoDB on Ubuntu 🔥 in 5 Simple Steps 🚀</title>
      <dc:creator>rising_segun</dc:creator>
      <pubDate>Wed, 13 Sep 2023 21:04:28 +0000</pubDate>
      <link>https://forem.com/geosegun/a-beginners-guide-installing-mongodb-on-ubuntu-in-5-simple-steps-1n7f</link>
      <guid>https://forem.com/geosegun/a-beginners-guide-installing-mongodb-on-ubuntu-in-5-simple-steps-1n7f</guid>
      <description>&lt;p&gt;Looking for a Hassle-Free MongoDB Installation on Ubuntu? Your search ends here! Follow this comprehensive step-by-step guide to effortlessly set up your MongoDB database on any Ubuntu or Linux-based system.&lt;/p&gt;

&lt;p&gt;By the end of this article, you’ll have mastered installing MongoDB on Ubuntu effortlessly. This comprehensive guide not only offers a step-by-step walkthrough but also delves into the tools and techniques, providing valuable insights for seamless execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To successfully install MongoDB on your Linux-based system, you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of MongoDB&lt;/li&gt;
&lt;li&gt;General familiarity with the command line/shell&lt;/li&gt;
&lt;li&gt;Ubuntu or another Linux-based operating system on the host workstation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Steps to Install MongoDB on Ubuntu
&lt;/h2&gt;

&lt;p&gt;MongoDB installation on Ubuntu is a simple process that allows you to set up a powerful and versatile NoSQL database on your system. By following a few simple steps, you can have MongoDB up and running in no time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Import MongoDB Repositories
&lt;/h3&gt;

&lt;p&gt;It is important to ensure the legitimacy and integrity of the packages when installing MongoDB on Ubuntu. Ubuntu's Package Management system uses GPG keys to validate package signatures, adding an extra layer of security. To begin the MongoDB installation, you must first import the MongoDB Public GPG key into your Ubuntu system. The MongoDB Public GPG key can be imported with the following terminal command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a source list for your MongoDB installation. To accomplish this, use the following command to create the &lt;code&gt;/etc/apt/sources.list.d/mongodb-org-3.4.list&lt;/code&gt; list file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With your list file now created, reload the local package index so apt can see the new repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Installing MongoDB Packages
&lt;/h3&gt;

&lt;p&gt;You can now install the latest stable version of MongoDB on your system with the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install -y mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you wish to install a specific version of MongoDB, you must specify that version for each component package when you install it. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install -y mongodb-org=3.4 mongodb-org-server=3.4 mongodb-org-shell=3.4 mongodb-org-mongos=3.4 mongodb-org-tools=3.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Launching MongoDB as a Service on Ubuntu
&lt;/h3&gt;

&lt;p&gt;Now that MongoDB is installed, create a systemd unit file so your system knows how to start, stop, and manage the service automatically.&lt;br&gt;
To do this, create a configuration file, &lt;code&gt;mongodb.service&lt;/code&gt;, in &lt;code&gt;/etc/systemd/system&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /etc/systemd/system/mongodb.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, copy the following information in your configuration file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt1kpg20uyzhi5l5mauc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt1kpg20uyzhi5l5mauc.png" alt="config file" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the configuration file is in place, reload systemd so it picks up the new unit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, start the updated systemd service for your MongoDB instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the instance is running, check whether MongoDB is listening on port 27017. To do this, use the &lt;code&gt;netstat&lt;/code&gt; command as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;netstat -plntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm that your MongoDB instance started successfully, check the service status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, enable auto-start on boot for the service as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you ever need to stop or restart the MongoDB instance running on your Ubuntu installation, run the commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl stop mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Configuring and Connecting MongoDB
&lt;/h3&gt;

&lt;p&gt;With the steps above, the MongoDB service is installed and running. This step shows how to configure and connect to it. To do this, open the Mongo Shell and switch to the admin database using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongo
use admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create a root user for your MongoDB installation and exit the Mongo Shell as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.createUser({user:"admin", pwd:”password", roles:[{role:"root", db:"admin"}]})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: replace &lt;code&gt;user&lt;/code&gt; and &lt;code&gt;pwd&lt;/code&gt; with your preferred credentials.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With this setup in place, you can now connect to MongoDB: restart the service, then authenticate with the credentials you created above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongo -u admin -p admin123 --authenticationDatabase admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll now see MongoDB establish a connection. Run the &lt;code&gt;show dbs&lt;/code&gt; command to list all available databases. With that, you’ve successfully installed MongoDB on Ubuntu.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: MongoDB Tuning
&lt;/h3&gt;

&lt;p&gt;Scaling MongoDB is simple and may be done vertically or horizontally, which is critical for the database's optimal performance. Vertical scaling adds server resources such as RAM and CPUs, whereas horizontal scaling adds more servers to the configuration. Several factors influence the performance of a MongoDB database, including memory usage, the number of concurrent connections, and the WiredTiger cache, among others. MongoDB's default storage engine is WiredTiger, whose internal cache defaults to 50% of (RAM - 1 GB). This means a machine with 8 GB of RAM reserves 0.5 * (8 - 1) = 3.5 GB for WiredTiger. Use the following command to check usage statistics and determine whether modifications are needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jtk6kpwtie4ouz84n4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jtk6kpwtie4ouz84n4i.png" alt="tuning" width="800" height="923"&gt;&lt;/a&gt;&lt;br&gt;
From the above result, some of the key points to note are listed below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;wiredTiger.cache.maximum bytes configured&lt;/li&gt;
&lt;li&gt;wiredTiger.cache.bytes currently in the cache&lt;/li&gt;
&lt;li&gt;wiredTiger.cache.pages read into cache&lt;/li&gt;
&lt;li&gt;wiredTiger.cache.pages written from cache&lt;/li&gt;
&lt;li&gt;wiredTiger.cache.tracked dirty bytes in the cache&lt;/li&gt;
&lt;/ul&gt;
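&lt;p&gt;The default cache sizing mentioned above can be sketched as a quick calculation. This is a simplified illustration of the documented default (50% of RAM minus 1 GB, with a 256 MB floor), and the function name is ours, not a MongoDB API:&lt;/p&gt;

```python
def default_wiredtiger_cache_gb(ram_gb: float) -> float:
    """Approximate WiredTiger's default internal cache size:
    50% of (RAM - 1 GB), but never less than 0.25 GB (256 MB)."""
    return max(0.5 * (ram_gb - 1), 0.25)

# A machine with 8 GB of RAM reserves 0.5 * (8 - 1) = 3.5 GB for the cache.
print(default_wiredtiger_cache_gb(8))   # 3.5
print(default_wiredtiger_cache_gb(1))   # 0.25 (the floor applies)
```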

&lt;p&gt;To check the usage of WiredTiger concurrency read and write tickets, use the command shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz9jb0b0i4evgkafbky1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz9jb0b0i4evgkafbky1.png" alt="db" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In just 5 straightforward steps, you've embarked on a journey to harness the power of MongoDB on your Ubuntu system. By following this beginner's guide, you've not only successfully installed MongoDB but also gained valuable insights into managing databases on Linux.&lt;br&gt;
With MongoDB in your toolkit, you're well-equipped to handle diverse data needs and build robust applications. So, go ahead and explore the endless possibilities this NoSQL database offers. Whether you're a developer, a data enthusiast, or a tech enthusiast, MongoDB on Ubuntu is your gateway to efficient data management.&lt;br&gt;
Start your MongoDB adventure today and witness your projects scale and thrive like never before!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>mongodb</category>
      <category>database</category>
      <category>linux</category>
    </item>
    <item>
      <title>CYBER THREAT ANALYSIS OF STATE SPONSORED CYBER OPERATION: 2005-2022</title>
      <dc:creator>rising_segun</dc:creator>
      <pubDate>Mon, 27 Mar 2023 17:27:10 +0000</pubDate>
      <link>https://forem.com/geosegun/cyber-threat-analysis-of-state-sponsored-cyber-operation-2005-2022-16dn</link>
      <guid>https://forem.com/geosegun/cyber-threat-analysis-of-state-sponsored-cyber-operation-2005-2022-16dn</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqde6f5jqvx3tbb5ol5v2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqde6f5jqvx3tbb5ol5v2.jpg" alt="Threat" width="800" height="533"&gt;&lt;/a&gt;&lt;strong&gt;&amp;lt;/&amp;gt; INTRODUCTION&lt;/strong&gt;&lt;br&gt;
As technology continues to advance, cyber-attacks are becoming more sophisticated and have become a significant threat to countries, organizations, and individuals worldwide. State actors often sponsor or carry out these attacks to achieve political, military, or economic objectives.&lt;br&gt;
To better understand the trends and characteristics of sponsored cyber operation incidents from 2005 to 2022, we will analyze a dataset of such incidents. Through this analysis, we aim to provide valuable insights into the nature of these attacks and explore potential machine learning approaches to identify patterns and groups of incidents with similar characteristics. Our findings could help organizations and governments enhance their cybersecurity strategies and protect against future cyber threats.&lt;br&gt;
&lt;strong&gt;&amp;lt;/&amp;gt; DATA DESCRIPTION&lt;/strong&gt;&lt;br&gt;
The data was collected from &lt;a href="https://www.kaggle.com/code/justin2028/state-sponsored-cyber-operations-code-starter" rel="noopener noreferrer"&gt;Kaggle&lt;/a&gt;. Some exploratory data cleaning was done in Microsoft Excel to make the data better suited for analysis and insight generation. The updated data is hosted on my &lt;a href="https://github.com/GeoSegun/Data-Analysis-of-Sponsored-Cyber-Operation-from-2005---Present-2022-" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Ultimately, the primary data source was the Council on Foreign Relations, an independent and nonpartisan American think tank specializing in U.S. foreign policy and international relations.&lt;br&gt;
&lt;strong&gt;&amp;lt;/&amp;gt; METHODOLOGY AND RESULTS&lt;/strong&gt;&lt;br&gt;
The programming language used in this project is Python, with the following libraries: &lt;code&gt;pandas&lt;/code&gt;, &lt;code&gt;numpy&lt;/code&gt;, &lt;code&gt;plotly&lt;/code&gt;, &lt;code&gt;seaborn&lt;/code&gt;, &lt;code&gt;matplotlib&lt;/code&gt;, &lt;code&gt;geopy&lt;/code&gt;, &lt;code&gt;folium&lt;/code&gt; and &lt;code&gt;scikit-learn&lt;/code&gt;. The code for this project is &lt;a href="https://github.com/GeoSegun/Data-Analysis-of-Sponsored-Cyber-Operation-from-2005---Present-2022-" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
Before analysis, the data was cleaned and preprocessed: irrelevant columns were removed, missing values handled, and the data transformed into a format suitable for analysis. We also applied a machine learning technique called Latent Dirichlet Allocation (LDA) to identify topics from the descriptions of cyber operation incidents.&lt;br&gt;
First, we visualized cyber operation incidents over time with a line graph, shown below (Figure 1).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6828kmf60udg372rjjmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6828kmf60udg372rjjmq.png" alt="_Figure 1: Figure showing the trend of cyber operation incident over the years._" width="800" height="334"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1: Figure showing the trend of cyber operation incident over the years.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The graph shows a clear increase in the number of incidents over time. The count rose steadily from 2005 to around 2014, with a few spikes in the intervening years. Starting around 2016, however, incidents began to increase much more rapidly, peaking in 2018 with over 60 incidents recorded in that year. This suggests that cyber operations are becoming more prevalent and sophisticated over time.&lt;br&gt;
Figure 2 gives more insight into the categories of targets that have been prevalent from 2005 to 2022.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62ke09vtky4sjxnrzlkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62ke09vtky4sjxnrzlkl.png" alt="Figure 2: chart showing the target of the attack." width="800" height="284"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 2: chart showing the target of the attack.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Based on the resulting plot, the government is still the most common target of cyber operations incidents, followed by the private sector and the military, which accounts for a smaller number of incidents in the dataset. This underscores the importance of prioritizing cybersecurity measures and training for the government and private sectors, as well as civil society.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbsty2j78d2ucob0ef7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbsty2j78d2ucob0ef7f.png" alt="Figure 3: Various attack type carried out by Threat actors." width="800" height="440"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 3: Various attack type carried out by Threat actors.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Figure 3. reveals that the most frequently observed attack type across the years is espionage, accounting for more than half of all recorded incidents perpetrated by threat actors. Notably, this trend aligns with the strategic objectives of many state-sponsored attackers who use covert means to access sensitive intelligence. Moreover, the prevalence of other attack types, such as denial of service, data destruction, financial theft, and sabotage underscores the diverse range of motives and objectives behind sponsored cyber-operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qxyqdzals9pswsc88y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qxyqdzals9pswsc88y9.png" alt="Figure 4: Types of cyber operations incidents over Time" width="800" height="408"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 4: Types of cyber operations incidents over Time&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The plot (Figure 4) shows the trends in the number of cyber operations incidents over time by type. It reveals that the most common types of incidents are espionage and theft, followed by disruption and defacement. The number of incidents of espionage and theft has been consistently higher than other types, indicating that these types are more prevalent and perhaps easier to carry out. The number of incidents of disruption and defacement has also increased over time, possibly due to the growing importance of technology and dependence on digital infrastructure in various sectors. The plot also shows a significant increase in cyber operations incidents starting around 2014, which corresponds to the surge in internet usage and widespread adoption of digital technologies in various sectors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbipvwkjpvg2e9if48ac5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbipvwkjpvg2e9if48ac5.png" alt="Figure 5: Map showing the state sponsored threat actors" width="800" height="373"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 5: Map showing the state sponsored threat actors&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The generated map shows the distribution of cyber operations incidents sponsored by various countries. Most of the operations originate from Europe and East Asia. A chart was also created to show the regions from which state-sponsored cyber-attacks originate (Figure 6).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7fmk22bqa3yg6nyzued.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7fmk22bqa3yg6nyzued.png" alt="Figure 6: Cyber-attack sponsors by region" width="800" height="494"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 6: Cyber-attack sponsors by region&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The analysis revealed that North America and Europe were the most active regions in sponsoring cyber-attacks, with North America having the highest number of incidents. East Asia and the Middle East were also prominent sponsors of cyber-attacks, while South Asia, Southeast Asia, and Africa had a relatively smaller number of incidents. The result shows that the threat of cyber-attacks is not limited to specific regions or countries, as attackers can operate from anywhere in the world. The findings suggest that cybersecurity measures should be implemented globally to ensure the protection of critical infrastructure, businesses, and individuals.&lt;br&gt;
Since 2005, thirty-four countries have been suspected of sponsoring cyber operations. China, Russia, Iran, and North Korea sponsored 77 percent of all suspected operations. Figure 7 shows the top five sponsoring countries of cyber operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzizzpyf4pmgoozowm4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzizzpyf4pmgoozowm4m.png" alt="Figure 7: Top sponsors by percentage of Incidents" width="800" height="802"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 7: Top sponsors by percentage of Incidents&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Because of the nature of the data, limited insight could be derived from most machine learning models. Nevertheless, topic modeling was carried out using Latent Dirichlet Allocation (LDA) on the descriptions in the cyber operations incidents dataset. The goal is to identify the main topics that emerge from the data.&lt;br&gt;
First, we clean the text data by removing non-alphabetic characters, converting it to lowercase, and splitting the text into individual words. We then create a document-term matrix using &lt;code&gt;CountVectorizer&lt;/code&gt;, which counts the frequency of each word in each document. Next, we fit the LDA model to the document-term matrix with 5 topics. We print the top 10 words for each topic to gain an understanding of the main themes in the dataset. Finally, we assign each incident to a topic based on the highest probability for that topic and visualize the distribution of incidents across topics using a &lt;code&gt;countplot&lt;/code&gt;.&lt;/p&gt;
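&lt;p&gt;The cleaning step described above can be sketched in plain Python. This is a simplified stand-in for the notebook's preprocessing, and the sample incident description is invented for illustration:&lt;/p&gt;

```python
import re

def clean_text(description: str) -> list:
    """Keep only alphabetic characters, lowercase everything, and split
    into individual words - the preprocessing applied before building
    the document-term matrix."""
    letters_only = re.sub(r"[^a-zA-Z]", " ", description)
    return letters_only.lower().split()

# Hypothetical incident description, for illustration only.
sample = "Threat actors deployed malware against 3 government networks."
print(clean_text(sample))
# ['threat', 'actors', 'deployed', 'malware', 'against', 'government', 'networks']
```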

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzmk60b8376yxvtkqw7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzmk60b8376yxvtkqw7h.png" alt="Figure 8: Distribution of Cyber Operations Incident Topics" width="800" height="513"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 8: Distribution of Cyber Operations Incident Topics&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The results show that the main topics of cyber operations incidents are:&lt;br&gt;
Topic 0: Malware and hacking attacks.&lt;br&gt;
Topic 1: Data theft and breaches&lt;br&gt;
Topic 2: Espionage and state-sponsored attacks&lt;br&gt;
Topic 3: Financial fraud and theft&lt;br&gt;
Topic 4: Denial of Service attacks and infrastructure disruption&lt;br&gt;
By understanding the main topics of cyber operations incidents, organizations and governments can prioritize their cybersecurity efforts and take appropriate measures to protect against these threats.&lt;br&gt;
&lt;strong&gt;&amp;lt;/&amp;gt; CONCLUSION&lt;/strong&gt;&lt;br&gt;
Overall, the analysis showed that the USA, China, Russia, Iran, and North Korea were the top sponsors of cyber operations between 2005 and 2022, accounting for more than 50% of all incidents in the dataset. These sponsors were also found to be the top sponsors in most of the categories, indicating that they were involved in a wide range of cyber operations. However, it is important to note that the data only includes incidents that have been attributed to a specific sponsor, and many incidents may have gone unattributed or misattributed, making the analysis only a representation of the known incidents.&lt;/p&gt;

&lt;p&gt;Click &lt;a href="https://nbviewer.org/github/GeoSegun/Data-Analysis-of-Sponsored-Cyber-Operation-from-2005---Present-2022-/blob/main/Threat%20Analysis.ipynb" rel="noopener noreferrer"&gt;here&lt;/a&gt; for the Jupyter notebook.&lt;/p&gt;

&lt;p&gt;Click &lt;a href="https://github.com/GeoSegun/Data-Analysis-of-Sponsored-Cyber-Operation-from-2005---Present-2022-" rel="noopener noreferrer"&gt;here&lt;/a&gt; for the GitHub repository.&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read my article; your attention and support are greatly appreciated. It is my desire to share my ideas and thoughts with you, and I hope you find my content interesting and informative.&lt;/p&gt;

&lt;p&gt;If you enjoyed reading my article, I encourage you to subscribe to receive future updates. By subscribing, you’ll never miss a post and you’ll be the first to know about my latest content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/GeoSegun" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/durojaye-olusegun-7a5023190/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>threatintelligence</category>
    </item>
  </channel>
</rss>
