<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Steve Rastall</title>
    <description>The latest articles on Forem by Steve Rastall (@steve_rastall_303bdea7abe).</description>
    <link>https://forem.com/steve_rastall_303bdea7abe</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3818543%2F7ea08a1c-a9ec-47f5-877e-f035f947ba7a.jpg</url>
      <title>Forem: Steve Rastall</title>
      <link>https://forem.com/steve_rastall_303bdea7abe</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/steve_rastall_303bdea7abe"/>
    <language>en</language>
    <item>
      <title>Building a “Server in a Box” with Raspberry Pi</title>
      <dc:creator>Steve Rastall</dc:creator>
      <pubDate>Wed, 11 Mar 2026 13:39:41 +0000</pubDate>
      <link>https://forem.com/steve_rastall_303bdea7abe/building-a-server-in-a-box-with-raspberry-pi-56h4</link>
      <guid>https://forem.com/steve_rastall_303bdea7abe/building-a-server-in-a-box-with-raspberry-pi-56h4</guid>
      <description>&lt;p&gt;Over the last year I’ve been experimenting with the idea of a small server-in-a-box built entirely around Raspberry Pi devices. The goal wasn’t to replace large cloud infrastructure, but to create something simple that could run containers locally, survive network outages, and be deployed almost anywhere.&lt;/p&gt;

&lt;p&gt;A lot of organisations now rely heavily on cloud platforms like Amazon Web Services or Microsoft Azure, but there are still many situations where a small local compute cluster makes sense: remote sites, factories, retail environments, labs, or simply development environments that need to run independently of the internet.&lt;/p&gt;

&lt;p&gt;The idea behind the build was straightforward: take a few Raspberry Pi boards, put them into a small enclosure with networking and power protection, and create a portable cluster capable of running container workloads.&lt;/p&gt;

&lt;p&gt;The hardware itself is surprisingly simple. A typical setup might use four or five Raspberry Pi boards, ideally the Pi 5 or Pi 4 with at least 8GB of RAM. These connect into a small gigabit network switch inside the case. A compact SSD can be used for shared storage, and a small UPS keeps everything running through short power interruptions. Once assembled, the whole unit behaves like a tiny data centre that you can place on a desk or mount in a cupboard.&lt;/p&gt;

&lt;p&gt;Once the hardware is built, the real value comes from how the system is configured. Instead of treating each Pi as a separate machine, the devices can be joined into a small container cluster using tools like Kubernetes or lightweight alternatives designed for edge environments. This allows applications to run across multiple nodes, giving a surprising amount of resilience for such a small platform.&lt;/p&gt;
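&lt;p&gt;As a concrete sketch of joining the boards into a cluster, the commands below use k3s, a lightweight Kubernetes distribution that runs well on Raspberry Pi. The hostname pi-server is an assumed example, and $TOKEN stands for the value read from the node-token file; your network details will differ.&lt;/p&gt;

```sh
# On the first Pi (control plane) - installs k3s as a systemd service
curl -sfL https://get.k3s.io | sh -

# Read the join token the server generated (k3s default path)
sudo cat /var/lib/rancher/k3s/server/node-token

# On each remaining Pi, join the cluster as an agent
curl -sfL https://get.k3s.io | K3S_URL=https://pi-server:6443 K3S_TOKEN=$TOKEN sh -

# Back on the server, confirm every node has registered
sudo k3s kubectl get nodes
```

&lt;p&gt;Once all nodes show as Ready, workloads scheduled on the cluster are spread across the boards automatically.&lt;/p&gt;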

&lt;p&gt;Running containers on the cluster means applications can be deployed in exactly the same way they would be in the cloud. A web service, an API, a database, or a monitoring stack can all run locally. For development teams this can be incredibly useful because it creates a small environment that behaves much more like a real production system than a single laptop.&lt;/p&gt;
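&lt;p&gt;As a sketch of what "deployed the same way as in the cloud" looks like in practice, assuming the cluster runs Kubernetes or a compatible lightweight distribution (the nginx image is just an illustrative workload):&lt;/p&gt;

```sh
# Run three replicas of a web service across the cluster
kubectl create deployment web --image=nginx --replicas=3

# Expose it on a port reachable from the local network
kubectl expose deployment web --port=80 --type=NodePort

# Check that the pods were spread across the Pi nodes
kubectl get pods -o wide
```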

&lt;p&gt;Networking is another interesting part of the build. Because the cluster sits behind a normal router, it can operate completely independently from the outside world if needed. At the same time, the system can maintain an outbound connection to a management platform so it can be monitored and updated remotely.&lt;/p&gt;

&lt;p&gt;One thing that becomes important very quickly is automation. Even though the cluster is small, treating it like infrastructure rather than a collection of hobby devices makes a big difference. Automated configuration, container orchestration, and health monitoring allow the system to behave like a tiny managed platform rather than a group of individual boards.&lt;/p&gt;

&lt;p&gt;Power resilience also turned out to be more useful than expected. A small UPS can keep the cluster running through short power outages, and because the devices consume very little energy the runtime can be surprisingly long. In some cases the entire cluster uses less power than a single traditional server.&lt;/p&gt;

&lt;p&gt;What makes this approach interesting is not the performance but the flexibility. A Raspberry Pi cluster can be shipped to a location, plugged into power and networking, and immediately provide compute capacity for local workloads. It becomes a simple way to run services at the edge without needing a full rack of servers.&lt;/p&gt;

&lt;p&gt;The concept has started to appear in a few different environments. Some people use these clusters for development and testing. Others use them for edge analytics, IoT processing, or local caching. Because the hardware is inexpensive and easy to replace, it becomes a very forgiving platform to experiment with.&lt;/p&gt;

&lt;p&gt;In many ways it feels like building a miniature version of the cloud, but one that fits inside a small box and can run almost anywhere. For teams that need portable infrastructure, or simply want to experiment with distributed systems on a small scale, it’s a surprisingly powerful setup.&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>sovereign</category>
      <category>cloudcomputing</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Managing 500+ Raspberry Pi Devices in the Real World</title>
      <dc:creator>Steve Rastall</dc:creator>
      <pubDate>Wed, 11 Mar 2026 13:33:55 +0000</pubDate>
      <link>https://forem.com/steve_rastall_303bdea7abe/managing-500-raspberry-pi-devices-in-the-real-world-1dji</link>
      <guid>https://forem.com/steve_rastall_303bdea7abe/managing-500-raspberry-pi-devices-in-the-real-world-1dji</guid>
      <description>&lt;p&gt;Raspberry Pi devices are fantastic for building things quickly. They’re cheap, flexible, and there’s a huge ecosystem around them. But once you move past a few test devices and start running hundreds of them in the real world, things change quite quickly.&lt;/p&gt;

&lt;p&gt;Managing a fleet of 500 or more Pi devices becomes less about the hardware and more about how you operate them.&lt;/p&gt;

&lt;p&gt;The first challenge is simply knowing what you have. When devices are spread across offices, retail locations, factories or remote sites, it’s surprisingly easy to lose track of them. One device gets reimaged, another loses network connectivity, another one is still running software from six months ago. Without some kind of central view, it quickly becomes difficult to understand the health of the fleet.&lt;/p&gt;

&lt;p&gt;Networking is another thing that becomes complicated at scale. A handful of devices connecting back to a server is easy. Hundreds of them connecting from different networks, sometimes behind firewalls or carrier-grade NAT, is much harder. In many deployments the devices cannot accept inbound connections at all, which means the management approach has to be built around outbound connections initiated by the device itself.&lt;/p&gt;

&lt;p&gt;Updates are probably the most important operational concern. When you only have ten devices it’s tempting to SSH into them individually and update them by hand. With hundreds of devices that approach becomes impossible. You need a reliable way to roll out software updates remotely, ideally in stages, and with the ability to roll back if something goes wrong. One broken update can take hundreds of devices offline at the same time if you’re not careful.&lt;/p&gt;
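&lt;p&gt;One way to reconcile staged rollouts with devices that only make outbound connections is a small device-side updater: each Pi periodically polls for the image tag it should be running and upgrades itself. Everything below is a hypothetical sketch. The endpoint, registry, and container name are invented, and the staging logic lives server-side, where different device groups can be handed different tags.&lt;/p&gt;

```sh
#!/bin/sh
# Hypothetical pull-based updater, run from cron on each device.
set -e

# Ask the management endpoint (assumed URL) which tag this device should run
WANT=$(curl -fsS https://updates.example.com/app/desired-tag)
IMAGE="registry.example.com/app:$WANT"

# What is currently running, if anything
CURRENT=$(docker inspect --format '{{.Config.Image}}' app 2>/dev/null || true)

if [ "$IMAGE" != "$CURRENT" ]; then
  docker pull "$IMAGE"
  docker rm -f app 2>/dev/null || true
  docker run -d --name app --restart unless-stopped "$IMAGE"
fi
```

&lt;p&gt;Rolling back then amounts to the server publishing the previous tag, and canary stages are just device groups that see the new tag first.&lt;/p&gt;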

&lt;p&gt;Monitoring is another piece people underestimate. You want to know things like CPU load, disk space, temperature and whether the main application is actually running. If a device stops working you want to know about it quickly rather than discovering the problem weeks later. Lightweight monitoring agents or container health checks can make this much easier.&lt;/p&gt;
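&lt;p&gt;A health probe does not need an agent framework to start with. The sketch below reads standard Linux interfaces; the thermal path is the usual one on a Pi, but it varies by board, hence the fallback.&lt;/p&gt;

```shell
#!/bin/sh
# Minimal node health probe: load average, root disk usage, CPU temperature.
load=$(cut -d ' ' -f1 /proc/loadavg)
disk=$(df -P / | awk 'NR==2 {print $5}')

# CPU temperature is reported in millidegrees on most Pis;
# not every system exposes this file
if [ -r /sys/class/thermal/thermal_zone0/temp ]; then
  temp=$(( $(cat /sys/class/thermal/thermal_zone0/temp) / 1000 ))
else
  temp="n/a"
fi

echo "load=${load} disk_used=${disk} temp_c=${temp}"
```

&lt;p&gt;Shipping that one line to a central collector on a schedule is often enough to catch a failing device long before anyone visits the site.&lt;/p&gt;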

&lt;p&gt;Containers have become a really useful way of managing workloads on Raspberry Pi devices. Running applications inside containers means you can keep the underlying system relatively simple and focus on deploying and updating container images instead. It also makes it easier to standardise environments across the fleet.&lt;/p&gt;

&lt;p&gt;Power and storage issues also show up more often than people expect. Many deployments rely on SD cards, which eventually wear out, especially if the device is writing logs continuously. Using good-quality cards, reducing unnecessary writes, and having a simple way to rebuild a device quickly can save a lot of operational pain.&lt;/p&gt;
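&lt;p&gt;One common write-reduction trick is mounting the heaviest write targets on tmpfs, so logs live in RAM at the cost of losing them on reboot. A sketch of the /etc/fstab entries, with sizes you would tune per device:&lt;/p&gt;

```
tmpfs  /var/log  tmpfs  defaults,noatime,size=64m  0  0
tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0
```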

&lt;p&gt;Another lesson learned from larger deployments is to assume devices will disappear from time to time. A device might lose power, lose network connectivity, or simply fail. Designing the system so that devices can reconnect automatically and resume normal operation makes the whole platform much more resilient.&lt;/p&gt;
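&lt;p&gt;Automatic reconnection mostly falls out of running the device agent under a supervisor. A minimal systemd unit sketch, where the service name and binary path are hypothetical:&lt;/p&gt;

```
[Unit]
Description=Fleet agent (hypothetical example)
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/fleet-agent
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

&lt;p&gt;With Restart=always the agent comes back after crashes and power cycles without any manual intervention.&lt;/p&gt;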

&lt;p&gt;At scale, managing Raspberry Pi devices becomes less about tinkering and more about building a small operations platform around them. Central visibility, automated updates, remote monitoring and reliable networking all become essential pieces of the puzzle.&lt;/p&gt;

&lt;p&gt;The hardware itself is still the easy part. The real work is building the operational layer that keeps hundreds of devices running smoothly in the background.&lt;/p&gt;

</description>
      <category>raspberrypi</category>
    </item>
    <item>
      <title>How to Cut Your AWS Bill Yourself</title>
      <dc:creator>Steve Rastall</dc:creator>
      <pubDate>Wed, 11 Mar 2026 13:32:33 +0000</pubDate>
      <link>https://forem.com/steve_rastall_303bdea7abe/how-to-cut-your-aws-bill-yourself-o71</link>
      <guid>https://forem.com/steve_rastall_303bdea7abe/how-to-cut-your-aws-bill-yourself-o71</guid>
      <description>&lt;p&gt;One thing I’ve noticed working with teams running infrastructure on AWS is that most high cloud bills are not caused by complicated architecture. They usually come from a few small things that have been left running quietly in the background for months.&lt;/p&gt;

&lt;p&gt;When someone says their AWS bill is too high, the first thing I suggest is not changing the application or rewriting anything. Instead, spend an hour just looking around the account. You will often find things that simply don’t need to be there.&lt;/p&gt;

&lt;p&gt;A common one is oversized EC2 instances. During development people tend to choose a bigger instance just to make sure everything works smoothly. The problem is those instances often stay that way long after the system goes live. When you check CloudWatch you sometimes see machines running at ten or fifteen percent CPU. That means you are paying for far more capacity than you actually use. In many cases you can resize the instance and the application will behave exactly the same.&lt;/p&gt;
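&lt;p&gt;Checking that CPU figure does not require the console. A CloudWatch query like the one below returns daily average CPU for an instance; the instance ID and dates are placeholders:&lt;/p&gt;

```sh
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistics Average \
  --period 86400 \
  --start-time 2026-02-01T00:00:00Z \
  --end-time 2026-03-01T00:00:00Z
```

&lt;p&gt;Averages that sit consistently in the ten to fifteen percent range mark the instance as a resize candidate.&lt;/p&gt;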

&lt;p&gt;Another thing that builds up surprisingly quickly is unused EBS volumes. These appear when instances are deleted but their storage remains behind. It happens a lot in test environments or when people are experimenting with infrastructure. The volumes sit there doing nothing but still generating charges every month.&lt;/p&gt;
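&lt;p&gt;Unattached volumes are easy to list, because their status is "available" rather than "in-use":&lt;/p&gt;

```sh
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].[VolumeId,Size,CreateTime]' \
  --output table
```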

&lt;p&gt;NAT gateways are another one worth checking. They look harmless when you set them up but they can quietly become expensive, especially if multiple environments each have their own. The base cost plus the data processing fees can add up faster than people expect.&lt;/p&gt;

&lt;p&gt;Old load balancers also tend to accumulate. A team might create one for a temporary test environment or an old deployment and then forget about it. Months later it is still running and quietly charging a few pounds a day.&lt;/p&gt;

&lt;p&gt;Data transfer can sometimes be the biggest surprise. If services are talking across regions, or if a lot of traffic is flowing through NAT gateways, the costs can creep up without anyone noticing. Looking through Cost Explorer for data transfer usage types often reveals traffic flows that were never intended.&lt;/p&gt;

&lt;p&gt;Snapshots are another area that slowly grows over time. Backup policies create them automatically and unless someone cleans them up they just continue to accumulate. Many teams have hundreds of snapshots sitting there from instances that no longer even exist.&lt;/p&gt;
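&lt;p&gt;Listing your snapshots with their creation times is a reasonable first pass before any cleanup:&lt;/p&gt;

```sh
aws ec2 describe-snapshots \
  --owner-ids self \
  --query 'Snapshots[].[SnapshotId,VolumeSize,StartTime]' \
  --output table
```

&lt;p&gt;Sorting that output by StartTime, and checking whether the source volume still exists, usually surfaces a long tail of snapshots nobody needs.&lt;/p&gt;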

&lt;p&gt;Something that works well for development environments is simply turning them off overnight. If a dev environment runs twenty-four hours a day but people only use it during working hours, you are paying for a lot of idle time. Scheduling instances to stop in the evening and start again in the morning can cut development infrastructure costs dramatically.&lt;/p&gt;
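&lt;p&gt;The simplest version of overnight scheduling is a pair of cron jobs on a small admin host. The instance IDs and times below are illustrative, and AWS-native schedulers can achieve the same thing:&lt;/p&gt;

```
# Stop dev instances at 19:00 and start them at 07:00, weekdays only
0 19 * * 1-5  aws ec2 stop-instances  --instance-ids i-0aaa1111 i-0bbb2222
0 7  * * 1-5  aws ec2 start-instances --instance-ids i-0aaa1111 i-0bbb2222
```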

&lt;p&gt;In most cases reducing an AWS bill doesn’t require a redesign of the system. It usually comes down to operational housekeeping. Removing unused resources, resizing things that are too large, and periodically reviewing what is actually running can make a big difference surprisingly quickly.&lt;/p&gt;

&lt;p&gt;Cloud infrastructure is powerful, but it’s also very easy to forget what you created last month. Taking the time to look through your account every now and then is often the simplest way to keep costs under control.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscosts</category>
    </item>
  </channel>
</rss>
