<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bram Verhagen</title>
    <description>The latest articles on Forem by Bram Verhagen (@bramverhagen).</description>
    <link>https://forem.com/bramverhagen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F941664%2Fc5ab314b-4d83-4a3d-bdee-91bb858e3e1c.png</url>
      <title>Forem: Bram Verhagen</title>
      <link>https://forem.com/bramverhagen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bramverhagen"/>
    <language>en</language>
    <item>
      <title>How I accidentally built a production tool with Amazon Q and stubbornness</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Mon, 23 Mar 2026 13:33:50 +0000</pubDate>
      <link>https://forem.com/bramverhagen/how-i-accidentally-built-a-production-tool-with-amazon-q-and-stubbornness-4m9g</link>
      <guid>https://forem.com/bramverhagen/how-i-accidentally-built-a-production-tool-with-amazon-q-and-stubbornness-4m9g</guid>
      <description>&lt;p&gt;I am not a developer. I want to be upfront about that.&lt;/p&gt;

&lt;p&gt;During my bachelor's degree I took a few courses in C, C++ and Java. I understood enough to pass. Later in my career I did some self-study in Python and Terraform, enough to read code, follow what it does and occasionally hack something together. But writing software has never been my job, and I have never pretended otherwise.&lt;/p&gt;

&lt;p&gt;I am an infrastructure architect and cloud consultant. I think in systems, not in syntax. I know what needs to happen; I just rarely write the thing that makes it happen.&lt;/p&gt;

&lt;p&gt;Which made what I did next slightly out of character.&lt;/p&gt;

&lt;h3&gt;
  The problem: syncing Docker images from Artifactory to AWS ECR
&lt;/h3&gt;

&lt;p&gt;The situation was straightforward enough, at least on paper.&lt;/p&gt;

&lt;p&gt;A supplier was delivering a third-party application we needed to host in AWS. They provided their Docker images through a JFrog Artifactory instance. Our workloads ran in AWS, which meant we needed those images in Amazon ECR, Amazon's own container registry.&lt;/p&gt;

&lt;p&gt;There were two concrete reasons for this. First, good CI/CD practice: pulling images directly from a supplier's Artifactory during a deployment pipeline introduces an external dependency you do not control. If their registry is unavailable, your pipeline fails. Second, we needed the images in ECR to run Amazon Inspector and Shield Advanced scanning with our own company configuration. You cannot point those tools at an external registry.&lt;/p&gt;

&lt;p&gt;So the requirement was clear: when a new image appears in Artifactory, it should automatically show up in ECR. Sounds simple.&lt;/p&gt;

&lt;p&gt;It is not. Behind that single sentence sits a surprisingly awkward set of problems. You need to authenticate to two different systems with entirely different credential models: Artifactory tokens on one side, AWS IAM and Secrets Manager on the other. You need to compare what exists in each registry without pulling every image to find out. You need to handle image tags correctly, filter out what you do not need, avoid re-uploading layers that already exist in ECR and do all of this reliably on a schedule, without managing any infrastructure.&lt;/p&gt;

&lt;p&gt;I looked for an existing tool that handled this combination. Nothing quite fitted. So I decided to build one.&lt;/p&gt;

&lt;h3&gt;
  Enter vibe coding
&lt;/h3&gt;

&lt;p&gt;So I opened Amazon Q Developer and started describing my problem.&lt;/p&gt;

&lt;p&gt;Vibe coding, if you have not come across the term, is roughly what it sounds like. Instead of writing code yourself, you describe what you want to an AI coding assistant. It writes the code, explains what it has done and you iterate from there. Back and forth, refining and correcting, until you have something that works. You are the architect; the AI is the developer.&lt;/p&gt;

&lt;p&gt;In practice, it felt surprisingly natural for someone with my background. I could describe the problem in infrastructure terms: registries, authentication, layers, manifests, schedules. Amazon Q translated that into working Terraform and Python. When I did not understand something it had written, I asked. It explained. When the output was not quite right, I pushed back and we tried again.&lt;/p&gt;

&lt;p&gt;I want to be honest, though: it is not magic. Amazon Q is a capable assistant, but it has opinions, blind spots and a habit of steering you towards certain solutions whether or not they are the right ones for your situation. It also, occasionally, makes things up. More on both of those shortly.&lt;/p&gt;

&lt;h3&gt;
  Working with Amazon Q: the good and the frustrating
&lt;/h3&gt;

&lt;p&gt;Working with Amazon Q felt productive from the start. It understood the problem quickly, asked sensible clarifying questions and got me to a first working prototype faster than I had any right to expect. For someone who does not write code for a living, that early momentum matters.&lt;/p&gt;

&lt;p&gt;But it did not take long to run into the limitations.&lt;/p&gt;

&lt;p&gt;The most instructive example was the question of how to actually move images between registries. There are two fundamentally different approaches to this.&lt;/p&gt;

&lt;p&gt;The first is to use the Docker CLI. You authenticate to both registries, run &lt;code&gt;docker pull&lt;/code&gt; to download the image, &lt;code&gt;docker tag&lt;/code&gt; to relabel it and &lt;code&gt;docker push&lt;/code&gt; to upload it to ECR. It works. Most developers know it. It is the obvious answer if you think about the problem in familiar terms.&lt;/p&gt;

&lt;p&gt;The second approach is to bypass the Docker CLI entirely and use the Docker Registry V2 API together with the AWS SDK. Instead of pulling the full image to disk, you fetch the image manifest, transfer the individual layers directly between registries via HTTP and push the manifest to ECR using boto3. No Docker installation required. No large temporary files on disk. Faster, leaner and far better suited to running inside a Lambda function.&lt;/p&gt;
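&lt;p&gt;To make the second approach concrete, here is a minimal sketch of the Registry V2 mechanics, not the tool's actual code. The endpoint shapes follow the Docker Registry HTTP API V2; the helper names and example registry hosts are illustrative assumptions.&lt;/p&gt;

```python
# Sketch of the Registry V2 approach: helper names and hosts are
# illustrative assumptions, not lifted from the actual tool.

MANIFEST_TYPE = "application/vnd.docker.distribution.manifest.v2+json"

def manifest_url(registry, repository, reference):
    """V2 endpoint that returns an image manifest for a tag or digest."""
    return f"https://{registry}/v2/{repository}/manifests/{reference}"

def blob_url(registry, repository, digest):
    """V2 endpoint for a single blob (layer or config) by digest."""
    return f"https://{registry}/v2/{repository}/blobs/{digest}"

def blobs_to_copy(manifest):
    """Everything a cross-registry transfer must move: the config blob
    plus each layer digest. No docker pull, no image on disk."""
    digests = [manifest["config"]["digest"]]
    digests.extend(layer["digest"] for layer in manifest["layers"])
    return digests
```

&lt;p&gt;In the real function these URLs would be fetched over HTTPS with the Accept header set to the manifest media type, each blob streamed across, and the manifest pushed to ECR once every blob exists there.&lt;/p&gt;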

&lt;p&gt;The second approach is clearly the right one for this use case. A Lambda function has no Docker daemon. Installing and running the Docker CLI inside Lambda is possible, but it is the wrong tool for the environment.&lt;/p&gt;

&lt;p&gt;Amazon Q understood this in principle. When I explained the constraints (serverless, no Docker daemon, Lambda environment), it would agree and produce code using the API approach. Good. But then, a few exchanges later, it would quietly drift back. A new function would appear that shelled out to &lt;code&gt;docker pull&lt;/code&gt;. A suggestion would sneak in to add Docker as a dependency. Without any fanfare, the Docker CLI approach would be back on the table.&lt;/p&gt;

&lt;p&gt;I lost count of how many times I had to steer it back. Not because Amazon Q was wrong about Docker (the CLI approach does work), but because it kept defaulting to the familiar pattern rather than holding the constraint in mind.&lt;/p&gt;

&lt;p&gt;This taught me something useful: vibe coding requires genuine engagement. You cannot simply accept what the AI produces. You need enough understanding of the problem to recognise when the output is technically correct but contextually wrong. The AI brings the syntax; you still have to bring the judgement.&lt;/p&gt;

&lt;h3&gt;
  The solution: what the tool actually does
&lt;/h3&gt;

&lt;p&gt;So what did I actually build?&lt;/p&gt;

&lt;p&gt;The tool is a Python Lambda function, deployed with Terraform, that runs on a schedule via Amazon EventBridge. It authenticates to Artifactory using credentials stored in AWS Secrets Manager and accesses ECR through an IAM role, with no static AWS credentials anywhere near the code.&lt;/p&gt;
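&lt;p&gt;The credential flow can be sketched in a few lines. This is an assumption-laden illustration, not the tool's real code: the secret name and the JSON field names are invented for the example, and ECR access itself comes from the Lambda execution role rather than any stored keys.&lt;/p&gt;

```python
# Hedged sketch of fetching Artifactory credentials from Secrets Manager.
# SECRET_NAME and the payload fields are hypothetical, not the tool's
# actual configuration.
import json

SECRET_NAME = "artifactory-sync/credentials"  # hypothetical secret name

def parse_secret(secret_string):
    """Pull the Artifactory endpoint and token out of the secret's JSON payload."""
    payload = json.loads(secret_string)
    return payload["artifactory_url"], payload["artifactory_token"]

def load_credentials():
    """Read the secret at runtime. ECR is accessed via the Lambda
    execution role, so no static AWS credentials appear in code."""
    import boto3  # imported lazily; only available inside the Lambda environment
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=SECRET_NAME)
    return parse_secret(response["SecretString"])
```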

&lt;p&gt;On each run it does something conceptually simple but practically fiddly: it queries Artifactory for the images and tags that are available, then checks ECR to see what is already there. The function transfers only the delta. If a layer already exists in ECR, it skips it. If a tag has not changed, it skips that too. The goal was to make it cheap to run frequently, not just correct when it runs.&lt;/p&gt;
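&lt;p&gt;The heart of that delta logic is small enough to show. A minimal sketch, with illustrative function names rather than the tool's actual ones:&lt;/p&gt;

```python
# Minimal sketch of the delta computation described above: sync only the
# tags present in the source registry but absent from the destination.
# Function and parameter names are illustrative.
def compute_delta(source_tags, destination_tags):
    """Return tags that exist in Artifactory but not yet in ECR,
    preserving source order so transfers happen oldest-first."""
    existing = set(destination_tags)
    return [tag for tag in source_tags if tag not in existing]
```

&lt;p&gt;The same idea applies one level down: before uploading a layer, the function can ask ECR whether that digest already exists and skip the transfer if it does.&lt;/p&gt;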

&lt;p&gt;Filtering is available but opt-in. If you want to narrow down which images and tags get synced, you can provide substring filters. Anything containing that string gets included; everything else is skipped. If you do not configure any filters, everything in the repository is synced. It is simple and it works, though it is worth knowing that it uses substring matching rather than anything more sophisticated.&lt;/p&gt;
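&lt;p&gt;That filter behaviour fits in a few lines. A sketch of the logic as described, with an invented function name:&lt;/p&gt;

```python
# The opt-in substring filter as described in the post: no filters
# configured means everything syncs; otherwise a name is kept if it
# contains any of the filter strings. Plain substring matching only,
# no globs and no regex. The function name is illustrative.
def matches_filters(name, filters):
    """Return True when the image or tag name should be synced."""
    if not filters:
        return True  # no filters configured: sync everything
    return any(f in name for f in filters)
```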

&lt;p&gt;The whole thing is open-sourced and available on GitHub:&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/BramVerhagenCapgemini/artifactory_ecr_sync" rel="noopener noreferrer"&gt;artifactory_ecr_sync&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I want to be clear about what it is and what it is not.&lt;/p&gt;

&lt;p&gt;It is not a polished, enterprise-grade product. The code is functional, but a non-developer wrote it with AI assistance and the code quality and conventions reflect that. Do not expect clean abstractions, rigorous error handling throughout or code that would sail through a professional review.&lt;/p&gt;

&lt;p&gt;What it is: a tool that solves a specific problem, offered as-is to anyone who faces the same one. If it needs adapting for your environment, the code is there to read and modify. I provide no support and make no promises.&lt;/p&gt;

&lt;h3&gt;
  From experiment to production
&lt;/h3&gt;

&lt;p&gt;It works. Probably.&lt;/p&gt;

&lt;p&gt;The tool has been running in a real production environment since I finished building it. As far as I can tell, it has been reliable. When new images show up in Artifactory, they appear in ECR. The Lambda runs on schedule without complaint. I have not hit any show-stopping issues since the initial round of testing and fixing.&lt;/p&gt;

&lt;p&gt;That said, I want to be direct about what this actually is: a personal experiment to learn more about vibe coding. Not a supported product, not a hardened solution and not something I am actively developing or maintaining.&lt;/p&gt;

&lt;p&gt;I provide no guarantees, warranties or assurances of any kind. If you use it, you do so entirely at your own risk. Please read the code before deploying it anywhere. It is not long and it is not complicated. It works for me, in my environment. Your mileage may vary.&lt;/p&gt;

&lt;h3&gt;
  What I learned – and what this means for (non-)developers
&lt;/h3&gt;

&lt;p&gt;So, should you try it?&lt;/p&gt;

&lt;p&gt;Vibe coding genuinely lowers the barrier to building software. That is not a small thing. Problems that would previously have required finding a developer, writing a brief, waiting for a sprint slot and iterating over weeks can now be explored in an afternoon. For someone like me, technical enough to understand the problem but not a developer, that is a meaningful change.&lt;/p&gt;

&lt;p&gt;But it does not eliminate the need for judgement. If anything, it makes judgement more important. The Docker CLI example is instructive: Amazon Q never produced code that was outright broken, but it repeatedly produced code that was wrong for the context. Catching that required understanding the problem deeply, not just accepting output that looked reasonable.&lt;/p&gt;

&lt;p&gt;The hallucinations matter too. Amazon Q occasionally produced code with confident references to SDK methods that did not behave as described, or subtle logic errors that only surfaced during testing. The fix is straightforward: test everything and read the actual documentation when something does not work. But it does mean you cannot be passive.&lt;/p&gt;

&lt;p&gt;This has implications beyond non-developers like me. For professional developers, AI increasingly handles the heavy lifting. Boilerplate, scaffolding, routine implementation: Amazon Q is genuinely good at all of it. What that leaves is the work that always required a skilled developer: understanding the real constraints, making architectural decisions, recognising when a technically correct solution is contextually wrong and knowing which questions to ask in the first place.&lt;/p&gt;

&lt;p&gt;The creativity, the critical thinking and the judgement are still entirely human. What changes is where developers spend their time, and that means the skills worth developing are shifting too. Less syntax, more systems thinking. Less implementation, more direction.&lt;/p&gt;

&lt;p&gt;If you have a specific, well-defined problem and no developer available, vibe coding is worth trying. The tools are capable. The main constraint is your own clarity about what you actually want.&lt;/p&gt;

&lt;p&gt;Go build something.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>The uncomfortable truth about European cloud sovereignty</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Mon, 09 Mar 2026 13:26:58 +0000</pubDate>
      <link>https://forem.com/bramverhagen/the-uncomfortable-truth-about-european-cloud-sovereignty-5136</link>
      <guid>https://forem.com/bramverhagen/the-uncomfortable-truth-about-european-cloud-sovereignty-5136</guid>
      <description>&lt;p&gt;My recent blog post about AWS European Sovereign Cloud generated more backlash than I anticipated. The core criticism was simple and sharp: AWS is still a US company. No matter how many legal structures you build, no matter how much you isolate the infrastructure, that fundamental fact doesn't change.&lt;/p&gt;

&lt;p&gt;The critics are right. And they're also missing the point.&lt;/p&gt;

&lt;p&gt;This isn't a defence of AWS. It's a reckoning with how we got here and what our actual options are. Because the uncomfortable truth is that we're having this conversation decades too late. We're trying to solve a sovereignty problem we created by failing to invest in European alternatives when it mattered.&lt;/p&gt;

&lt;h2&gt;
  Europe's missed opportunity
&lt;/h2&gt;

&lt;p&gt;Let's talk about the European cloud providers we do have. StackIt, built by Schwarz Group (the people behind Lidl), represents a serious attempt at European cloud infrastructure. OVHCloud, a French company, has been building data centres and cloud services for years. These aren't trivial efforts. They're substantial businesses run by capable people.&lt;/p&gt;

&lt;p&gt;But let's also be honest about where they are. Neither offers the breadth of services you get from AWS. Neither has the global infrastructure. Neither has the innovation velocity. If you want managed machine learning services, serverless computing at scale, or cutting-edge database technologies, you're not going to find AWS-equivalent offerings there.&lt;/p&gt;

&lt;p&gt;This isn't their fault.&lt;/p&gt;

&lt;p&gt;StackIt and OVHCloud are working with a fraction of the resources that built AWS, Azure and Google Cloud. They're competing against companies that had decades of investment, massive customer bases funding continuous innovation and entire ecosystems of partners and developers.&lt;/p&gt;

&lt;p&gt;AWS didn't become AWS overnight. It had Amazon's cash flow behind it. It had early enterprise customers willing to take risks. It had a US market that was culturally more comfortable with cloud adoption. Most importantly, it had sustained, massive investment year after year.&lt;/p&gt;

&lt;p&gt;European cloud providers never had that. European governments didn't pour billions into cloud infrastructure development. European venture capital didn't fund cloud startups at the same scale. European enterprises didn't commit to home-grown providers the way they needed to for those providers to achieve the scale required to compete.&lt;/p&gt;

&lt;p&gt;We chose convenience over sovereignty. We chose proven solutions over supporting nascent European alternatives. We chose AWS, Azure and Google Cloud because they were better, faster and more complete. And every time we made that choice, we widened the gap.&lt;/p&gt;

&lt;p&gt;Even when we tried to address this, we couldn't get it right. Gaia-X launched in 2020 as a Franco-German initiative to build European cloud infrastructure. The timing was perfect. The ambition was right. It started with the explicit goal of creating a genuine alternative to US cloud providers.&lt;/p&gt;

&lt;p&gt;What happened? It became mired in governance discussions, competing national interests and endless committee meetings. The vision diluted from "build a European cloud" to "create standards for federated cloud services." That's not nothing, but it's not what we needed. Gaia-X transformed from a potential competitor to US clouds into another standards body. The initial momentum evaporated in European bureaucracy and lack of genuine collaboration.&lt;/p&gt;

&lt;p&gt;This is the pattern. Good intentions, fragmented execution. Every major European country wants digital sovereignty, but nobody wants to subordinate their national champion to a truly European effort. Germany protects German interests. France protects French interests. We talk about European solidarity whilst ensuring our own providers get preferential treatment.&lt;/p&gt;

&lt;p&gt;I'm not saying we should have chosen inferior technology out of principle. But we should have invested enough in European alternatives so that choice wasn't necessary. We should have been willing to collaborate across borders on the scale needed to compete. We didn't. And now we're living with the consequences.&lt;/p&gt;

&lt;p&gt;The result is that when organisations need genuine sovereignty today, their options are limited. They can choose European providers that can't match the capability they need. Or they can choose US providers with sovereignty bolt-ons. Neither is ideal. Both are consequences of decisions we made (or didn't make) over the past 15 years.&lt;/p&gt;

&lt;h2&gt;
  AWS changed their story
&lt;/h2&gt;

&lt;p&gt;When AWS first started talking about European Sovereign Cloud, their messaging was confident. The infrastructure would be completely isolated. European customers would have total protection. The implication was clear: this solves the sovereignty problem.&lt;/p&gt;

&lt;p&gt;I had questions. Lots of them.&lt;/p&gt;

&lt;p&gt;The infrastructure might be isolated, but what about the legal connection to the American parent company? How independent can it really be when the parent company is subject to US law? What happens if US authorities invoke the CLOUD Act? How do you guarantee that future geopolitical pressures won't create access requirements? Who actually controls the kill switch?&lt;/p&gt;

&lt;p&gt;To AWS's credit, they engaged with these questions. But their answers evolved. The messaging shifted from "completely isolated" to something more nuanced: "we've done our utmost best to make it as hard as possible for the US to gain control."&lt;/p&gt;

&lt;p&gt;That shift matters. It's the difference between claiming you've solved a problem and acknowledging you've mitigated it as much as practically possible.&lt;/p&gt;

&lt;p&gt;The new framing is more honest. Yes, there's a German legal entity. Yes, European staff only. Yes, all the hardware designs and software code are available for the red button scenario. But no, we can't guarantee that geopolitical pressure could never create a situation where US authorities demand access.&lt;/p&gt;

&lt;p&gt;I actually respect this evolution. The initial messaging was too confident. The revised messaging acknowledges constraints whilst explaining the mitigations. That's more useful than false certainty.&lt;/p&gt;

&lt;p&gt;But it also reveals something important: there are limits to what any US company can promise about sovereignty, no matter how much they isolate the infrastructure. The parent company's jurisdiction creates inherent constraints. Legal structures and operational controls can make those constraints harder to exploit, but they can't eliminate them entirely.&lt;/p&gt;

&lt;p&gt;This isn't unique to AWS. Microsoft and Google would face the same issues. Any US cloud provider trying to offer a truly sovereign European service would hit the same walls. The question isn't whether these constraints exist. It's whether the mitigations are sufficient for your specific risk tolerance.&lt;/p&gt;

&lt;h2&gt;
  No silver bullets, just trade-offs
&lt;/h2&gt;

&lt;p&gt;Here's what I wish more people understood about sovereignty: there is no perfect solution. Every option involves trade-offs. The question isn't "which solution is flawless?" but rather "which trade-offs can I live with?"&lt;/p&gt;

&lt;p&gt;Pure European cloud providers give you stronger sovereignty guarantees. But you sacrifice innovation velocity and service breadth. You might wait years for capabilities that AWS ships next quarter. That delay has real business cost.&lt;/p&gt;

&lt;p&gt;AWS ESC gives you innovation and capability. But you accept that the parent company is American and subject to US jurisdiction. You're trusting legal structures and operational controls to create meaningful barriers, whilst knowing those barriers aren't absolute.&lt;/p&gt;

&lt;p&gt;For some organisations, ESC makes complete sense. Regulated industries that need specific compliance guarantees. Government agencies handling sensitive data. Critical infrastructure operators. Companies in sectors where geopolitical risk is material. If you're in these categories and you need cloud capabilities that European providers can't yet deliver, ESC is a pragmatic choice.&lt;/p&gt;

&lt;p&gt;For others, it might not be. If you're a startup optimising for speed and cost, the additional sovereignty layer might be unnecessary overhead. If you're not handling particularly sensitive data, standard AWS regions might be sufficient. If you can wait for European providers to mature, supporting them might align with your values.&lt;/p&gt;

&lt;p&gt;The key is being honest about what you're getting and what you're giving up. ESC isn't "AWS but completely safe from US influence." It's "AWS with significant additional barriers to US influence, which might be sufficient depending on your threat model."&lt;/p&gt;

&lt;p&gt;That's less satisfying than a simple answer. But it's accurate.&lt;/p&gt;

&lt;h2&gt;
  What happens next
&lt;/h2&gt;

&lt;p&gt;We're having the wrong conversation when we argue about whether ESC is "good enough." The real conversation should be about why we're in this position in the first place.&lt;/p&gt;

&lt;p&gt;European organisations need cloud capabilities. That's not optional anymore. Digital transformation, data analytics, machine learning – these aren't nice-to-haves. They're foundational to remaining competitive.&lt;/p&gt;

&lt;p&gt;We also need sovereignty over critical digital infrastructure. Recent years have made that abundantly clear. Geopolitical stability we took for granted has proven fragile. Supply chains we thought were permanent have become negotiating chips. Digital infrastructure that seemed neutral has become caught up in power politics.&lt;/p&gt;

&lt;p&gt;The fact that we have to choose between capability and sovereignty is a policy failure. We should have invested in European cloud providers a decade ago. We should have committed government workloads to them to help them achieve scale. We should have funded the research and development needed to compete with US innovation.&lt;/p&gt;

&lt;p&gt;We didn't. And now we're retrofitting sovereignty onto US infrastructure because that's the best available option for many use cases.&lt;/p&gt;

&lt;p&gt;ESC is a response to a problem we created. It's not a perfect response. But given where we are, it's a reasonable one for organisations that need both capability and sovereignty.&lt;/p&gt;

&lt;p&gt;What would be better? Actually investing in European alternatives now, before the gap becomes completely unbridgeable. Government cloud programmes that commit to European providers. Investment in open-source cloud infrastructure. Support for European companies building cloud-native services.&lt;/p&gt;

&lt;p&gt;Will that happen? I don't know. Europe tends to regulate American technology rather than build alternatives to it. GDPR was easier than creating European competitors to Google and Facebook. Arguing about ESC's sovereignty guarantees is easier than funding European cloud providers properly.&lt;/p&gt;

&lt;p&gt;But if we want genuine digital sovereignty in 10 years, we need to start building it now. ESC can be part of the bridge to that future. But it can't be the destination.&lt;/p&gt;

&lt;h2&gt;
  Living with imperfection
&lt;/h2&gt;

&lt;p&gt;I still think ESC is valuable for organisations that need it. The supply chain transparency, legal independence and operational controls are meaningful improvements over standard AWS regions. For entities handling sensitive data or operating critical infrastructure, those improvements matter.&lt;/p&gt;

&lt;p&gt;But I'm not going to pretend it's a complete solution to European digital sovereignty. It's a US company's answer to European sovereignty concerns. That comes with inherent limitations.&lt;/p&gt;

&lt;p&gt;The critics who say "AWS is still American" are correct. My response is: yes, and what's your alternative? If you need cloud capabilities that European providers can't deliver yet, what do you actually do? Wait and hope? Build everything yourself? Accept higher risk in standard regions?&lt;/p&gt;

&lt;p&gt;ESC exists because European organisations need a pragmatic answer to that question. It's not the answer I wish we had. I wish we had European cloud providers with AWS-equivalent capabilities. But we don't, because we didn't invest in building them.&lt;/p&gt;

&lt;p&gt;So we work with what we have. We push AWS to be as transparent and accountable as possible. We use ESC where it makes sense. We support European alternatives where they can meet our needs. And hopefully, we start investing seriously in building the digital infrastructure we should have built 15 years ago.&lt;/p&gt;

&lt;p&gt;That's not a satisfying conclusion. But it's an honest one. And right now, honesty about our constraints seems more valuable than pretending we have perfect solutions.&lt;/p&gt;

</description>
      <category>sovereignty</category>
      <category>cloud</category>
      <category>aws</category>
      <category>esc</category>
    </item>
    <item>
      <title>AWS European Sovereign Cloud: Beyond data sovereignty</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Thu, 05 Feb 2026 09:35:04 +0000</pubDate>
      <link>https://forem.com/bramverhagen/aws-european-sovereign-cloud-beyond-data-sovereignty-51j5</link>
      <guid>https://forem.com/bramverhagen/aws-european-sovereign-cloud-beyond-data-sovereignty-51j5</guid>
      <description>&lt;p&gt;When I wrote about AWS' Digital Sovereignty Pledge earlier, I approached it primarily from a data perspective. I focused on where your data lives, who can access it and how you can control it. That made sense to me at the time. Data sovereignty felt like the whole story.&lt;/p&gt;

&lt;p&gt;Recent global developments have taught me otherwise. The world has become a more unpredictable place. We've seen how quickly geopolitical tensions can escalate and how supply chains can become leverage points. I've come to understand that sovereignty isn't just about data. It has supply chain, legal and operational dimensions that are equally important.&lt;/p&gt;

&lt;p&gt;This realisation is what makes AWS European Sovereign Cloud (ESC) so relevant today. It addresses all these angles in ways I hadn't fully appreciated before.&lt;/p&gt;

&lt;h2&gt;
  Historical perspective: not a reaction, but a roadmap
&lt;/h2&gt;

&lt;p&gt;Before diving into what ESC offers, it's worth understanding that this isn't a knee-jerk response to recent political changes. AWS began developing ESC well before the current US administration took office. This matters because it shows deliberate, long-term planning rather than reactive scrambling.&lt;/p&gt;

&lt;p&gt;AWS has built sovereign cloud offerings before. They created GovCloud for US federal agencies that need to meet strict compliance requirements. They established a separate China region, completely decoupled from the rest of AWS infrastructure. These weren't experiments. They were proof points that AWS could deliver full cloud capabilities within specific sovereignty boundaries.&lt;/p&gt;

&lt;p&gt;ESC follows this same pattern, but it's designed for European organisations with European requirements. The planning started years ago. The recent geopolitical shifts have simply made the need more urgent and the value more obvious.&lt;/p&gt;

&lt;h2&gt;
  The supply chain challenge: facing reality
&lt;/h2&gt;

&lt;p&gt;Let's be honest about something uncomfortable. All server hardware has components produced in China. The most advanced chips come from the US. There's no escaping this reality. You can't build a modern data centre without touching these supply chains.&lt;/p&gt;

&lt;p&gt;So when AWS talks about sovereignty, they're not pretending they've solved the unsolvable. They're being pragmatic about what's actually achievable.&lt;/p&gt;

&lt;p&gt;Here's what they've done instead. AWS has made all their hardware designs and software code for ESC available. This is the 'red button' scenario. In the extremely unlikely event that access to AWS infrastructure or supply chains is cut off, European operators would have everything they need to continue running the service.&lt;/p&gt;

&lt;p&gt;Is this perfect? No. But it's honest. It acknowledges the constraints whilst providing the strongest possible mitigation. That matters more than impossible promises.&lt;/p&gt;

&lt;h2&gt;
  The legal dimension: where jurisdiction actually means something
&lt;/h2&gt;

&lt;p&gt;This is where ESC gets interesting from a governance perspective. The entire ESC operation is captured within a German legal entity. Not a subsidiary that ultimately answers to Seattle. A German entity operating under German law.&lt;/p&gt;

&lt;p&gt;Yes, the US can invoke the CLOUD Act. That's a fact. But here's the crucial difference: they would need to go through German courts to enforce it. They would need German judges to grant those requests. And to date, US authorities have never successfully compelled data access this way, even through US courts.&lt;/p&gt;

&lt;p&gt;This isn't theoretical protection. It's a genuine legal barrier. European data protection authorities understand this. It's why they can be more comfortable with ESC than with standard AWS regions.&lt;/p&gt;

&lt;p&gt;The legal structure creates real friction for any attempt at extraterritorial data access. That friction is the point.&lt;/p&gt;

&lt;h2&gt;
  The operational reality: European staff only
&lt;/h2&gt;

&lt;p&gt;The operational sovereignty piece is refreshingly straightforward. ESC is staffed entirely by European citizens. No outsourcing to India for cost savings. No escalations to US headquarters for certain types of issues. Everything is handled within Europe by Europeans.&lt;/p&gt;

&lt;p&gt;This might sound simple, but the implications are significant. It means conversations about your infrastructure, your data and your compliance needs happen with people who understand European regulatory frameworks firsthand. They're not reading from a script developed elsewhere.&lt;/p&gt;

&lt;p&gt;It also means that all support and maintenance work to keep the cloud available stays within European jurisdiction. There's no scenario where someone in Seattle needs to be involved in maintaining the infrastructure that underpins your workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  What stays the same: the good bits
&lt;/h2&gt;

&lt;p&gt;Here's what ESC doesn't change: the quality of the infrastructure, the breadth of services and the pace of innovation.&lt;/p&gt;

&lt;p&gt;You still get the same AWS services you'd get in Frankfurt or Ireland. The same security capabilities I discussed in my earlier blog post still apply. You can still encrypt everything everywhere. You still have control over where your data lives and who can access it.&lt;/p&gt;

&lt;p&gt;ESC isn't a stripped-down version of AWS. It's AWS with an additional layer of sovereignty protection. The cloud is just as resilient. The services are just as innovative. The performance is just as strong.&lt;/p&gt;

&lt;p&gt;What you're adding is supply chain transparency, legal independence and operational control. You're not trading away capability to get it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters now
&lt;/h2&gt;

&lt;p&gt;The world has become more volatile in recent years. That's not political commentary. It's just observation. We've seen how quickly stable relationships can become contentious. We've watched supply chains that seemed unshakeable prove fragile.&lt;/p&gt;

&lt;p&gt;European organisations need to plan for a world where digital infrastructure might become caught up in geopolitical disputes. ESC provides a credible answer to that risk without requiring you to abandon the cloud or compromise on capability.&lt;/p&gt;

&lt;p&gt;It's not paranoia to consider these scenarios anymore. It's prudence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;If sovereignty beyond just data protection matters to your organisation, ESC deserves serious consideration. The supply chain transparency, legal structure and operational independence it provides are genuine differentiators.&lt;/p&gt;

&lt;p&gt;The time to think about these questions is before you need the answers. If you want to explore how ESC could work for your specific requirements, I'm happy to discuss it.&lt;/p&gt;

&lt;p&gt;The conversation isn't about whether sovereignty matters. Recent years have settled that question. The conversation is about what sovereignty actually means in practice and how you achieve it without sacrificing innovation.&lt;/p&gt;

&lt;p&gt;ESC is AWS' answer to that question for European organisations. It's not perfect because nothing is. But it's thoughtful, comprehensive and genuine. And right now, that's what matters.&lt;/p&gt;

</description>
      <category>sovereignty</category>
      <category>aws</category>
      <category>cloud</category>
      <category>europe</category>
    </item>
    <item>
      <title>Exploring AWS Sovereign Cloud: A Guide for Enterprises</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Thu, 28 Mar 2024 15:05:40 +0000</pubDate>
      <link>https://forem.com/bramverhagen/exploring-aws-sovereign-cloud-a-guide-for-enterprises-4ojf</link>
      <guid>https://forem.com/bramverhagen/exploring-aws-sovereign-cloud-a-guide-for-enterprises-4ojf</guid>
      <description>&lt;p&gt;In the digital age, organisations are increasingly recognising the importance of data sovereignty. Basically, the ability to control the location, access, and security of their data. As a leading cloud provider, AWS is committed to helping you achieve digital sovereignty through its Digital Sovereignty Pledge. In this blog post, I will explore the concept of digital sovereignty and how AWS' Digital Sovereignty Pledge can help you maintain control over your data. I will also provide a roadmap for you to get started on your journey to sovereignty in the cloud. If you have any questions or would like to learn more about how AWS can help you achieve digital sovereignty, contact me today.&lt;/p&gt;

&lt;h1&gt;
  
  
  Digital sovereignty: understanding the concept
&lt;/h1&gt;

&lt;p&gt;Digital sovereignty has emerged as a pivotal concept in the modern digital landscape, gaining prominence as concerns about data privacy, security, and compliance escalate. It encompasses two fundamental aspects: data sovereignty and operational sovereignty. Data sovereignty refers to an organisation's ability to control the location and processing of its data, while operational sovereignty entails the authority to make independent decisions regarding data management practices.&lt;br&gt;
Digital sovereignty empowers you to take charge of your data's destiny. This includes determining where it resides, who has access to it, and the purposes for which it is used. By exercising control over data, organisations can safeguard sensitive information, comply with regulatory requirements, and maintain their reputation.&lt;br&gt;
Digital sovereignty matters because it keeps your data out of the hands of people who shouldn't have it, both inside and outside the organisation. It mitigates the risk of data breaches, unauthorised data transfers, and data loss while preserving the integrity and confidentiality of sensitive information. This is particularly crucial in industries that handle vast amounts of personal or confidential data, such as healthcare, finance, and government. It also matters in high technology, where the protection of intellectual property makes sovereignty a significant factor to consider.&lt;/p&gt;

&lt;h1&gt;
  
  
  Digital Sovereignty Pledge
&lt;/h1&gt;

&lt;p&gt;AWS' Digital Sovereignty Pledge is a set of commitments designed to help you retain control over your data, protect it from unauthorised access, and ensure its integrity and confidentiality. The pledge is based on four key principles:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control over the location of your data:&lt;/strong&gt;&lt;br&gt;
You have full control over the location of your data, choosing the regions where it is stored and processed. This ensures that data remains within the desired geographic boundaries and complies with specific regulatory requirements.&lt;br&gt;
&lt;strong&gt;Verifiable control over data access:&lt;/strong&gt;&lt;br&gt;
You maintain granular control over access to your data, determining who can access it and the specific permissions granted. This enables you to implement robust security measures and audit trails. Guaranteeing that only authorised individuals have access to sensitive information.&lt;br&gt;
&lt;strong&gt;The ability to encrypt everything everywhere:&lt;/strong&gt;&lt;br&gt;
AWS provides comprehensive encryption capabilities, allowing you to encrypt your data at rest, in transit, and during processing. This multi-layered approach further enhances data protection and mitigates the risk of unauthorised access.&lt;br&gt;
&lt;strong&gt;Resilience of the cloud:&lt;/strong&gt;&lt;br&gt;
AWS' cloud infrastructure is designed to be highly resilient, with multiple layers of redundancy and disaster recovery mechanisms. This ensures that organisations' data remains accessible and protected even in the event of hardware failures or natural disasters.&lt;/p&gt;

&lt;p&gt;By adhering to these principles, AWS empowers you to achieve digital sovereignty and safeguard your data in the cloud. The pledge provides a framework for organisations to maintain control over their data, ensuring compliance with regulatory requirements and preserving the integrity and confidentiality of sensitive information.&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/security/aws-digital-sovereignty-pledge-control-without-compromise/" rel="noopener noreferrer"&gt;AWS Digital Sovereignty Pledge: Control without compromise&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Control over the location of your data
&lt;/h1&gt;

&lt;p&gt;AWS gives you control over the geographic location of your data. This includes the ability to choose the specific data centres and regions where your data is stored, processed, and replicated. You can retain data in the region of your choice and avoid transferring data across borders unless you explicitly choose to do so.&lt;br&gt;
This level of control is essential for organisations that operate in multiple countries or regions, or that are subject to strict data privacy regulations. By choosing the location of your data, you can ensure that it remains within the jurisdiction of your country or region, and that it is protected by the appropriate laws and regulations.&lt;br&gt;
In addition, AWS provides you with granular control over the location of your data within each region. You can choose to store your data in a single availability zone, or you can distribute it across multiple availability zones for redundancy and availability. You can also choose to replicate your data to other regions for disaster recovery purposes.&lt;br&gt;
By giving you control over the location of your data, AWS helps you meet your data sovereignty requirements and protect your data from unauthorised access.&lt;/p&gt;
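
&lt;p&gt;As a sketch of how this control is typically enforced in practice, the snippet below builds a Service Control Policy style document that denies API calls outside a chosen set of European regions. The region list is an example, and real policies also exempt global services such as IAM; both are simplifications here for illustration.&lt;/p&gt;

```python
import json

# Illustrative SCP-style guardrail: deny any action requested outside
# the chosen regions. The region list is an example; real policies also
# exempt global services (IAM, CloudFront, etc.), omitted here for brevity.
ALLOWED_REGIONS = ["eu-central-1", "eu-west-1"]

def region_guardrail_policy(allowed_regions):
    """Build a policy document that keeps API calls inside allowed_regions."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideAllowedRegions",
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
                },
            }
        ],
    }

policy = region_guardrail_policy(ALLOWED_REGIONS)
print(json.dumps(policy, indent=2))
```

&lt;p&gt;Attached at the organisation level, a guardrail like this makes the residency decision structural rather than a matter of per-team discipline.&lt;/p&gt;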

&lt;h1&gt;
  
  
  Verifiable control over data access
&lt;/h1&gt;

&lt;p&gt;With AWS, you have verifiable control over data access. This means you can easily see who can access your data and when. This commitment is supported by several key features:&lt;br&gt;
&lt;strong&gt;The ability to verify who can access your data and when&lt;/strong&gt;:&lt;br&gt;
AWS provides detailed logs and reports that allow you to track and monitor all access to your data. This can include information about the user, the time of access, the IP address, and the type of access (read, write, delete, etc.). This level of transparency enables you to quickly identify any suspicious or unauthorised access attempts.&lt;br&gt;
&lt;strong&gt;The Nitro approach: Protection from cloud operators&lt;/strong&gt;:&lt;br&gt;
AWS uses the Nitro System to protect data from potential unauthorised access by cloud operators. Nitro combines purpose-built hardware with a minimal hypervisor, offloading virtualisation functions from the host and establishing a hardware-based root of trust. By design, there is no mechanism for AWS operators to access customer data on Nitro-based instances, even with physical access to the servers.&lt;br&gt;
&lt;strong&gt;The ability to revoke access to your data at any time&lt;/strong&gt;:&lt;br&gt;
AWS allows you to revoke access to your data at any time, for any reason. This can be done through the AWS console, the AWS CLI, or the AWS SDK. Revoking access immediately terminates all active sessions and prevents the user from accessing the data again.&lt;br&gt;
&lt;strong&gt;The ability to audit access to your data&lt;/strong&gt;:&lt;br&gt;
AWS provides comprehensive auditing capabilities that allow you to track and review all access to your data. This includes the ability to generate reports, set up alerts, and perform forensic analysis. The audit logs can be used to identify trends, detect anomalies, and investigate security incidents.&lt;br&gt;
&lt;strong&gt;A shared responsibility model&lt;/strong&gt;:&lt;br&gt;
AWS operates on a shared responsibility model, where AWS is responsible for the security of the cloud infrastructure, and the customer is responsible for the security of their data and applications.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3bvlskiovepax997vg2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3bvlskiovepax997vg2.png" alt=" " width="800" height="443"&gt;&lt;/a&gt;This model provides you with flexibility and control over your data, while ensuring that AWS maintains the highest levels of physical and cybersecurity.&lt;br&gt;
With these measures, AWS lets you control who can access your data. This protects sensitive information and helps you meet regulatory requirements.&lt;/p&gt;
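
&lt;p&gt;To make the logging side concrete, here is a small sketch of how such audit records can be consumed. The event below is a fabricated CloudTrail-style record, trimmed to the fields discussed above; real events carry many more.&lt;/p&gt;

```python
# Minimal sketch: pull the "who, what, from where, when" fields out of a
# CloudTrail-style event record. The sample event is fabricated for
# illustration; real records contain many more fields.
sample_event = {
    "eventTime": "2024-03-01T09:15:00Z",
    "eventName": "GetObject",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"type": "IAMUser", "userName": "analyst-1"},
}

def summarise_access(event):
    """Return a one-line audit summary for a CloudTrail-style event."""
    who = event.get("userIdentity", {}).get("userName", "unknown")
    return "{} performed {} from {} at {}".format(
        who,
        event.get("eventName", "?"),
        event.get("sourceIPAddress", "?"),
        event.get("eventTime", "?"),
    )

print(summarise_access(sample_event))
```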

&lt;h1&gt;
  
  
  The ability to encrypt everything everywhere
&lt;/h1&gt;

&lt;p&gt;AWS enables you to encrypt your data at rest, in transit, and in use. This powerful capability ensures that your data remains protected throughout its lifecycle, even in the event of a security breach.&lt;br&gt;
You can choose to manage your own encryption keys or use AWS Key Management Service (AWS KMS). AWS KMS is a highly secure and scalable cloud-based key management service that allows you to create, manage, and control the use of encryption keys. With AWS KMS, you can easily encrypt your data and control access to it, ensuring that only authorised users can decrypt it.&lt;br&gt;
You can also control who has access to your encrypted data. AWS allows you to define fine-grained access policies that specify who can access your data and what they can do with it. This level of control helps you protect your data from unauthorised access and use.&lt;br&gt;
AWS also provides robust auditing and logging capabilities that allow you to track and monitor encryption-related activities. This information can be used to detect and investigate security incidents, and to ensure compliance with regulatory requirements.&lt;br&gt;
AWS KMS External Key Store (XKS) is a feature that allows you to store your encryption keys in a hardware security module (HSM) that you own and manage. This provides an additional layer of security for your encryption keys, as they are never stored in the cloud. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73appdqh5ftp7j83uu6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73appdqh5ftp7j83uu6t.png" alt=" " width="800" height="298"&gt;&lt;/a&gt;With AWS XKS, you can be confident that your encryption keys are protected from unauthorised access, even if AWS itself were to be compromised.&lt;/p&gt;

&lt;h1&gt;
  
  
  Resilience of the cloud
&lt;/h1&gt;

&lt;p&gt;The resilience of the cloud is paramount for organisations of all sizes, as it ensures the uninterrupted availability and accessibility of critical applications and data. AWS stands out in this regard, offering a highly resilient platform that empowers you to operate with confidence.&lt;br&gt;
AWS' global infrastructure offers diverse options for you to deploy applications and data. It consists of multiple regions and availability zones. This geographic distribution significantly reduces the risk of disruptions caused by natural disasters, power outages, or regional network failures. Even if one region experiences an issue, applications and data can be seamlessly rerouted to other regions, ensuring continuous operation.&lt;br&gt;
In addition to its global infrastructure, AWS offers Local Zones and Outposts, which bring cloud services closer to you. Local Zones are strategically located in major metropolitan areas, providing ultra-low latency access to cloud services for applications that require real-time processing or proximity to end-users. Outposts, on the other hand, are on-premises infrastructure that extends AWS services to your own data centres or remote locations. This hybrid approach allows you to leverage the benefits of the cloud while maintaining control over sensitive data or following specific regulatory requirements.&lt;br&gt;
Furthermore, AWS' Snow Family provides a solution for customers with remote or limited connectivity locations. Snow devices are portable data transfer appliances. They let you securely transfer data to and from AWS, even without a reliable internet connection. This capability is particularly valuable for organisations operating in remote areas, such as mining sites, oil rigs, or disaster-stricken regions.&lt;br&gt;
With AWS' reliable cloud infrastructure, you can: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improve your operational efficiency.&lt;/li&gt;
&lt;li&gt;Reduce risks.&lt;/li&gt;
&lt;li&gt;Ensure your critical applications and data are always available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether operating globally, locally, or in remote environments, AWS provides the flexibility and reliability that organisations need to thrive in today's competitive landscape.&lt;/p&gt;

&lt;h1&gt;
  
  
  The old and existing plus the new
&lt;/h1&gt;

&lt;p&gt;AWS' public cloud has always been built on a foundation of security and compliance. The Digital Sovereignty Pledge builds on this foundation by providing you with even more control over your data and applications. This includes the ability to choose the geographic location of your data, control who has access to it, and encrypt it using your own keys.&lt;br&gt;
One of the new features introduced with the Digital Sovereignty Pledge is the ability to fully isolate encryption keys from the cloud. This means that you can now generate, manage, and store your encryption keys completely on-premises. This provides an additional layer of security and control for organisations who are concerned about the security of their data in the cloud.&lt;br&gt;
The Digital Sovereignty Pledge also includes a number of other features that give you more control over your data, including:&lt;br&gt;
&lt;strong&gt;The ability to choose the geographic location of your data.&lt;/strong&gt;&lt;br&gt;
You can choose to store your data in any of AWS' regions around the world. This gives you the flexibility to choose the location that best meets your needs for data privacy and compliance.&lt;br&gt;
&lt;strong&gt;Control over who has access to your data.&lt;/strong&gt; You can use IAM to control who has access to your AWS resources and what they can do with them. This allows you to restrict access to your data to only those who need it.&lt;br&gt;
&lt;strong&gt;The ability to encrypt your data using your own keys.&lt;/strong&gt; You can use AWS KMS to encrypt your data using your own keys. This gives you complete control over the encryption and decryption of your data.&lt;br&gt;
AWS' Digital Sovereignty Pledge is a powerful tool that gives you more control over your data and applications in the cloud. This pledge provides you with the flexibility, security, and compliance that you need to meet your business requirements.&lt;/p&gt;

&lt;h1&gt;
  
  
  Your journey to the sovereign cloud
&lt;/h1&gt;

&lt;p&gt;Migrating to a sovereign cloud environment is a strategic decision that requires careful planning and execution. Here's a roadmap to help you get started:&lt;br&gt;
&lt;strong&gt;Assess your current data sovereignty posture:&lt;/strong&gt;&lt;br&gt;
The first step is to assess your organisation's current data sovereignty posture. This includes identifying the location of your data, who has access to it, and what security measures are in place to protect it.&lt;br&gt;
&lt;strong&gt;Identify your data sovereignty requirements:&lt;/strong&gt;&lt;br&gt;
Once you know your current data sovereignty posture, you can identify your data sovereignty requirements. This includes determining which data must be stored in a sovereign cloud environment and what level of control you need over that data.&lt;br&gt;
&lt;strong&gt;Implement the necessary AWS services and features:&lt;/strong&gt;&lt;br&gt;
AWS offers a variety of services and features that can help you meet your data sovereignty requirements. These include the ability to choose the geographic location of your data, control who has access to it, and encrypt it using your own keys.&lt;br&gt;
&lt;strong&gt;Monitor and audit your sovereign cloud environment:&lt;/strong&gt;&lt;br&gt;
Once you've set up the AWS services and features you need, it's important to keep an eye on your sovereign cloud environment. This will help you make sure it's running safely and following your data sovereignty rules.&lt;br&gt;
&lt;strong&gt;Get started today:&lt;/strong&gt;&lt;br&gt;
AWS is committed to helping you achieve your data sovereignty goals. Contact me today to learn more about our Digital Sovereignty Pledge and how I can help you get started on your journey to the sovereign cloud.&lt;/p&gt;
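
&lt;p&gt;The assessment and monitoring steps lend themselves to automation. As a minimal sketch, assuming you have already exported an inventory of resources and their regions (for example from AWS Config or the AWS CLI; the inventory format here is invented), a check for data outside your chosen regions could look like this:&lt;/p&gt;

```python
# Sketch of the "assess your posture" step over a hand-built inventory.
# The inventory format is invented for illustration; in practice you
# would generate it from AWS Config or the AWS CLI.
def find_violations(inventory, allowed_regions):
    """Return resources stored outside the allowed regions."""
    allowed = set(allowed_regions)
    return [r for r in inventory if r["region"] not in allowed]

inventory = [
    {"resource": "s3://customer-data", "region": "eu-central-1"},
    {"resource": "s3://legacy-backups", "region": "us-east-1"},
    {"resource": "rds:orders-db", "region": "eu-west-1"},
]

violations = find_violations(inventory, ["eu-central-1", "eu-west-1"])
for v in violations:
    print("outside allowed regions:", v["resource"], "in", v["region"])
```

&lt;p&gt;Run on a schedule, the same check doubles as the ongoing monitoring step.&lt;/p&gt;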

</description>
      <category>aws</category>
      <category>sovereignty</category>
      <category>awscommunity</category>
      <category>cloud</category>
    </item>
    <item>
      <title>From Datacenter Communication to Web Communication: The Evolution of Networking for Distributed Applications</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Tue, 12 Dec 2023 15:17:56 +0000</pubDate>
      <link>https://forem.com/bramverhagen/from-datacenter-communication-to-web-communication-the-evolution-of-networking-for-distributed-applications-10nj</link>
      <guid>https://forem.com/bramverhagen/from-datacenter-communication-to-web-communication-the-evolution-of-networking-for-distributed-applications-10nj</guid>
      <description>&lt;p&gt;In the early days of enterprise computing, applications were hosted on servers located in on-premises datacenters. Since these servers were in close physical proximity, the applications could communicate with low latency and high trust. Datacenters were walled gardens with limited connectivity to the outside world. &lt;/p&gt;

&lt;p&gt;This model worked well when applications were monolithic and self-contained within a datacenter. However, as usage grew and applications needed to span multiple datacenters, and later the cloud, new networking and trust approaches were required.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Emergence of the Internet
&lt;/h1&gt;

&lt;p&gt;As the popularity of the internet grew, the need to connect datacenters and consolidate resources became evident. The first step was to connect datacenters over private wide-area networks (WANs) using technologies like virtual private networks (VPNs) and dedicated leased lines. While these connections provided a means to connect distributed environments, they came with inherent latency and were expensive for dedicated bandwidth.&lt;/p&gt;

&lt;h1&gt;
  
  
  Latency Over Long Distances
&lt;/h1&gt;

&lt;p&gt;Stretching applications across datacenters increased latency for communications between application components. Technologies like VPNs and dedicated lines could provide security and quality of service, but they couldn't overcome the laws of physics. The physical distance imposed delays. &lt;br&gt;
To cope, applications had to be re-architected using protocols better suited for high-latency environments. This drove adoption of technologies like HTTP, REST, and SOAP. These protocols exchanged structured data payloads and could tolerate occasional delays or failures. Monolithic applications were broken down into services communicating through web APIs. &lt;/p&gt;

&lt;h1&gt;
  
  
  Cost of Dedicated Connectivity
&lt;/h1&gt;

&lt;p&gt;Maintaining dedicated leased lines between datacenters was expensive. Most enterprises could only afford a limited mesh with one or two links between locations. This constrained options for disaster recovery and load balancing across sites. &lt;br&gt;
The proliferation of the internet provided a lower-cost alternative for connectivity. Direct connections were replaced with encrypted internet VPN tunnels. Rather than just linking datacenters, enterprises could now also connect branch offices and support remote workers. &lt;/p&gt;

&lt;h1&gt;
  
  
  The Web Protocol Takes Over
&lt;/h1&gt;

&lt;p&gt;The HTTP protocol underlying the web was designed to work well over variable latency networks like the public internet. HTTP is stateless, and its request-response model tolerates variable latency and transient failures, overcoming many of the issues caused by latency over the WAN. &lt;br&gt;
This meant web technologies could now be used to build enterprise applications that were resilient to network latency. For example, a REST API over HTTP is more tolerant of high latency than a custom RPC protocol. &lt;br&gt;
As a result, enterprises started adopting web technologies internally. This allowed them to use the public internet for connectivity, reducing reliance on expensive private networks. &lt;/p&gt;
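
&lt;p&gt;In client code, that latency tolerance usually shows up as timeouts and retries with exponential backoff. The sketch below uses a stand-in for a real HTTP call so it runs without a network:&lt;/p&gt;

```python
import time

# Sketch: exponential backoff retry, the usual client-side pattern for
# calling REST APIs over variable-latency links. flaky_call stands in
# for a real HTTP request so the example needs no network access.
def with_backoff(call, attempts=4, base_delay=0.01):
    """Retry `call` with exponentially growing delays between attempts."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))

failures = {"left": 2}  # simulate two transient failures, then success

def flaky_call():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("simulated timeout")
    return "200 OK"

print(with_backoff(flaky_call))  # succeeds on the third attempt
```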

&lt;h1&gt;
  
  
  From VPNs to Open Internet
&lt;/h1&gt;

&lt;p&gt;The shift to web protocols like HTTP enabled enterprises to connect datacenters over the open internet instead of private WANs. This significantly reduced connectivity costs while providing similar resilience to latency. &lt;br&gt;
While VPNs were still useful for their security properties, they were no longer required just to interconnect datacenters. Compared with leased lines, internet connectivity was cheap, and the savings far outweighed the cost of running an encrypted overlay. &lt;/p&gt;

&lt;h1&gt;
  
  
  Rethinking Trust Boundaries
&lt;/h1&gt;

&lt;p&gt;In the early datacenter model, trust was implicit. Applications could freely interact because they were secured within the same four walls. Authentication centred around usernames and passwords for human users. &lt;br&gt;
With workloads distributed across locations, a new approach was needed. Just because the same enterprise owned two application components didn't mean they could blindly trust each other. The perimeter was fuzzier.&lt;br&gt;
Despite the advancements in web communication, corporate authentication and authorisation mechanisms continued to assume the implicit trust of the datacenter. As applications became distributed and web-based, that assumption broke down, necessitating new standards. &lt;br&gt;
Standards like SAML and OAuth were developed to address these issues. Mechanisms like single sign-on, access tokens, and certificates enabled finer-grained authentication and authorisation between services. Security became more granular and context-aware.&lt;/p&gt;
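
&lt;p&gt;The core idea behind these token mechanisms is that a service verifies a credential cryptographically instead of trusting the network it arrived on. The toy example below issues and verifies an HMAC-signed token; it is not OAuth, just the underlying principle, and real systems should use vetted libraries and standard token formats:&lt;/p&gt;

```python
import hmac
import hashlib

# Toy sketch of the idea behind bearer tokens: verify the credential
# cryptographically rather than trusting the network path. This is NOT
# OAuth; real systems use vetted libraries and standard token formats.
SECRET = b"shared-signing-key"  # placeholder; never hard-code real secrets

def issue_token(subject):
    """Sign the subject so any holder of SECRET can later verify it."""
    sig = hmac.new(SECRET, subject.encode(), hashlib.sha256).hexdigest()
    return subject + "." + sig

def verify_token(token):
    """Recompute the signature and compare in constant time."""
    subject, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, subject.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("service-a")
print(verify_token(token))        # a valid token verifies
print(verify_token(token + "x"))  # a tampered token does not
```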

&lt;h1&gt;
  
  
  The Path Forward
&lt;/h1&gt;

&lt;p&gt;The journey from monolithic applications in isolated datacenters to distributed cloud-native architectures required rethinking network connectivity, application architecture, and security models. As enterprises adopt cloud and SaaS technologies, the transformation will continue. &lt;br&gt;
Latency remains a challenge, but modern protocols, caching, asynchronous designs, and geographic distribution provide tools to minimise its impact. Trust has moved from the network layer to the application layer, with standards like OAuth replacing VPNs.&lt;br&gt;
Each shift along the way required changing how networks were designed, applications were built, and security was implemented. While challenging at the time, each step ultimately enabled enterprises to build more scalable, resilient, and cost-effective systems. The next phase of the journey will likely bring its own set of transformations, but the trajectory is clear: distributed, internet-scale architectures running in the cloud. &lt;/p&gt;

</description>
      <category>datacentretransformation</category>
      <category>cloudnative</category>
      <category>distributedapplications</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>Why SaaS is the Best Option for COTS Software</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Tue, 28 Nov 2023 14:06:18 +0000</pubDate>
      <link>https://forem.com/bramverhagen/why-saas-is-the-best-option-for-cots-software-3aee</link>
      <guid>https://forem.com/bramverhagen/why-saas-is-the-best-option-for-cots-software-3aee</guid>
      <description>&lt;p&gt;As companies move to the cloud, there's often a temptation to simply "lift and shift" existing workloads, including Commercial Off-The-Shelf (COTS) software, to Infrastructure as a Service (IaaS) providers like AWS. However, this approach doesn't fully utilise the benefits of the cloud. For COTS applications in particular, Software as a Service (SaaS) is often the better choice. Here's why:&lt;/p&gt;

&lt;h1&gt;
  
  
  Cloud is Not Just Another Datacenter
&lt;/h1&gt;

&lt;p&gt;When moving COTS applications to the public cloud, it's easy to treat it like just another datacenter. But the cloud offers unique advantages in scalability, automation, and opex vs. capex spending. Simply shifting COTS to IaaS doesn't unlock these benefits - the application still needs to be maintained and managed much like traditional on-prem software.&lt;br&gt;
Read more in my other &lt;a href="https://dev.to/bramverhagen/why-cloud-computing-is-not-just-another-datacenter-solution-understanding-the-difference-4mkf"&gt;post&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  SaaS Enables a "Shift Left" Culture
&lt;/h1&gt;

&lt;p&gt;With a SaaS model, the COTS provider handles all hardware management, software maintenance, and upkeep. This shifts the focus left for your organisation, away from managing infrastructure and towards innovating on top of it. SaaS enables your technical teams to spend time optimising the business value of software rather than just "keeping the lights on." It can also provide a more stable and reliable platform, as the COTS provider is responsible for the underlying infrastructure, and it can reduce costs by shrinking the need for in-house IT.&lt;/p&gt;

&lt;h1&gt;
  
  
  Integrate via Modern APIs
&lt;/h1&gt;

&lt;p&gt;COTS applications moved to the public cloud often still require custom integrations to interface with other systems. But SaaS offerings expose modern APIs using REST, gRPC, or GraphQL, enabling flexible integrations without touching the core software. This abstraction keeps integrations separate from the SaaS application itself. Moreover, this abstraction also helps to make it easier to move between different SaaS providers, should the need arise. In this way, SaaS can provide a more agile approach, allowing organisations to quickly adjust to changes in their technology stack.&lt;/p&gt;
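
&lt;p&gt;One common way to keep that separation is a thin adapter layer: business logic depends on a small interface, and only the adapter knows the provider's API details. The sketch below uses invented names and an in-memory fake in place of real REST calls:&lt;/p&gt;

```python
# Sketch of the adapter idea: core logic talks to a small interface, and
# only the adapter knows the SaaS provider's wire protocol, so providers
# can be swapped without touching the core. All names here are invented.
class InvoiceGateway:
    """The interface the business logic depends on."""
    def fetch_invoice(self, invoice_id):
        raise NotImplementedError

class FakeSaaSAdapter(InvoiceGateway):
    """Stands in for a real REST/gRPC adapter for this runnable example."""
    def __init__(self, records):
        self._records = records

    def fetch_invoice(self, invoice_id):
        return self._records[invoice_id]

def total_due(gateway, invoice_ids):
    """Core logic: knows nothing about REST, gRPC, or GraphQL."""
    return sum(gateway.fetch_invoice(i)["amount"] for i in invoice_ids)

gateway = FakeSaaSAdapter({"inv-1": {"amount": 120}, "inv-2": {"amount": 80}})
print(total_due(gateway, ["inv-1", "inv-2"]))  # 200
```

&lt;p&gt;Swapping providers then means writing a new adapter, not rewriting the business logic.&lt;/p&gt;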

&lt;h1&gt;
  
  
  The Perfect Combination
&lt;/h1&gt;

&lt;p&gt;For most COTS solutions, SaaS offers the best of both worlds - combining the configurability of COTS with the benefits of cloud-hosted software. Companies can focus on using the software to solve business problems rather than maintaining infrastructure and OSes. The SaaS model offers a clear path forward for legacy COTS applications. SaaS also provides cost savings, as companies no longer need to maintain their own datacenter or IT infrastructure, which can be a major expense. Cloud-hosted software also allows for continuous delivery of software updates, ensuring that organisations are always using the latest version of the product. Furthermore, SaaS products are typically more secure than self-hosted solutions, as they are hosted in highly secure cloud environments. Finally, SaaS solutions often have the benefit of greater scalability.&lt;/p&gt;

&lt;h1&gt;
  
  
  When Self-Hosting Still Makes Sense
&lt;/h1&gt;

&lt;p&gt;There are cases where hosting COTS on AWS may still be preferable - primarily when the application requires deep customisation at the code level. If your company has invested heavily in customising around your COTS software, migrating those customisations to a SaaS version may not be feasible. But for most applications, SaaS delivers significant advantages over both on-premises and IaaS-hosted COTS. For companies that have invested in customisations, a hybrid approach of hosting COTS on AWS while migrating to SaaS for other workloads may be the most practical solution. This approach allows you to benefit from the scalability and cost savings offered by SaaS, while still leveraging the customisations made to your COTS applications. In the end, the best approach for your company will depend on its unique needs and requirements.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Roadmap to SaaS
&lt;/h1&gt;

&lt;p&gt;For companies relying on COTS, SaaS should be the target destination whenever possible. Start by auditing your current COTS applications and their custom integrations. Identify which can be migrated to SaaS easily and which require deeper planning. Create a roadmap that moves suitable applications first, while refactoring the custom integrations around the remaining COTS to prepare them for future SaaS transitions. With the right roadmap, most companies can modernise their COTS software stack by moving it from on-premises and IaaS to SaaS.&lt;/p&gt;

</description>
      <category>cots</category>
      <category>saas</category>
      <category>integration</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>Cloud Native requires DevOps</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Tue, 14 Nov 2023 13:43:59 +0000</pubDate>
      <link>https://forem.com/bramverhagen/cloud-native-requires-devops-1334</link>
      <guid>https://forem.com/bramverhagen/cloud-native-requires-devops-1334</guid>
      <description>&lt;p&gt;The world of software development is changing, and cloud native architecture is becoming increasingly popular. But to succeed with this approach, DevOps must be implemented. In this blog post, I'll explore the differences between traditional software development and DevOps. The cultural shift needed to adopt DevOps. And all of the new opportunities for software development that arise when DevOps is embraced. By understanding how cloud native architecture requires DevOps for success, businesses can gain a competitive advantage and unlock the potential of cloud native technology.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview of cloud native architecture
&lt;/h1&gt;

&lt;p&gt;Cloud native architecture is a type of software architecture that enables companies to quickly build, deploy, and scale applications and services by leveraging cloud computing technologies to create a flexible and agile development environment. Cloud native architectures are designed with distributed systems in mind and are based on microservices.&lt;br&gt;
Unlike traditional software development models, cloud native architectures are not bound by the same constraints. Instead, they let organisations take advantage of the scalability and flexibility this architecture provides, which means businesses can move faster to develop more powerful applications and services.&lt;br&gt;
However, it is important to note that adopting a cloud native architecture is not just about using cloud native services. To succeed, it requires an organisational culture shift towards DevOps. DevOps is a culture that brings together people, processes, and technology to speed up the development process while improving quality control.&lt;br&gt;
The benefits of transitioning to a cloud native architecture include improved scalability thanks to its distributed system design, faster time-to-market as teams collaborate using DevOps practices, increased agility when responding to customer needs or unexpected issues, and cost savings as resources are used more efficiently across teams. Businesses can also benefit from security features like automatic security checks during deployments or regular vulnerability scans.&lt;br&gt;
Despite all these advantages, there are still challenges associated with transitioning to a cloud native architecture: making sure teams have the skillset required for efficient adoption of DevOps culture, understanding how existing departments need restructuring during the transition, navigating cultural changes within the organisation, and managing the complexity of distributed systems. But with proper planning and support from DevOps experts, these obstacles can be overcome.&lt;/p&gt;

&lt;h1&gt;
  
  
  The classic approach vs. DevOps
&lt;/h1&gt;

&lt;p&gt;Organisations are increasingly transitioning away from traditional software development models and embracing cloud native architecture with DevOps. This iterative approach to software development is more efficient than the classic one, resulting in faster feedback loops and continuous improvement. It requires a complete cultural change, but it is worth it: it provides competitive advantages such as responding quickly to changing markets and customer demands.&lt;br&gt;
DevOps allows teams to build, test, deploy, and maintain applications quickly and efficiently, while ensuring their security through automated processes that are continuously tested and monitored. Continuous Integration &amp;amp; Continuous Deployment (CI/CD) pipelines catch potential threats while releasing high-quality code quickly, without compromising safety regulations or data privacy laws. By encouraging organisational change, companies can unlock the full potential of DevOps with cloud native architecture, allowing them to innovate quickly at scale.&lt;/p&gt;

&lt;h1&gt;
  
  
  The culture shift to DevOps
&lt;/h1&gt;

&lt;p&gt;As organisations move to cloud native, they need to use DevOps to get the most out of it. In traditional software development approaches, the development and operations teams usually operate separately, with little collaboration between them. On the other hand, DevOps facilitates collaboration between the teams, improving agility and reducing cycle times and handovers.&lt;br&gt;
To make the most of this cultural shift, organisations should focus on process improvements and automation. Automation is essential for speeding up deployments while minimising errors and preserving consistency across environments. It also helps reduce costs by cutting out manual processes that require time-consuming, error-prone intervention from developers.&lt;br&gt;
Continuous learning and improvement are also key components of a successful DevOps culture. This requires creating an environment where team members are encouraged to experiment with new technologies and approaches to find better solutions to existing problems. Through continuous learning, teams can keep up with trends in their industry and stay ahead of their competition.&lt;br&gt;
In conclusion, adopting a DevOps culture is essential for organisations transitioning to cloud native architecture. By focusing on process improvements, automation, continuous learning and improvement, organisations can unlock the full potential of cloud native architecture.&lt;/p&gt;

&lt;h1&gt;
  
  
  New software development enabled by DevOps
&lt;/h1&gt;

&lt;p&gt;The introduction of DevOps has enabled businesses to revolutionise their software development and deployment processes. By taking advantage of automation, shorter development cycles, scalability, and continuous integration and delivery, businesses can quickly create applications that fulfil customer requirements while also reducing development costs. Organisations stay agile in a rapidly changing market by responding rapidly to customer needs and innovating faster than their competition. Cloud native architecture, with DevOps at its core, gives companies the boost they need to create applications that can easily expand or contract, enabling them to stay ahead of the curve while keeping costs low.&lt;/p&gt;

</description>
      <category>awscommunity</category>
      <category>devops</category>
      <category>cloudnative</category>
      <category>development</category>
    </item>
    <item>
      <title>Why Cloud Computing is Not Just Another Datacenter Solution - Understanding the Difference</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Thu, 26 Oct 2023 12:16:21 +0000</pubDate>
      <link>https://forem.com/bramverhagen/why-cloud-computing-is-not-just-another-datacenter-solution-understanding-the-difference-4mkf</link>
      <guid>https://forem.com/bramverhagen/why-cloud-computing-is-not-just-another-datacenter-solution-understanding-the-difference-4mkf</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;After encountering yet another business that saw the cloud as their next datacenter, I decided to explain a bit about the differences.&lt;br&gt;
Datacenters have been a part of business for many decades, providing a reliable and secure place for critical data and applications. Recently, however, cloud computing has emerged as an alternative to traditional datacenter solutions. Cloud computing provides businesses with a more agile and cost-effective approach to running their operations, enabling them to respond quickly to customer needs. In this article, I will explore the evolution of datacenters and examine the various benefits of cloud computing. I will also look at how cloud computing can be used for software development agility and business culture transformation. Finally, we will discuss how organisations can unlock the value of cloud computing in the enterprise.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Evolution of Datacenter Solutions
&lt;/h1&gt;

&lt;p&gt;Datacenters have been a pillar of business operations for many years. In the past, these solutions were typically designed with physical infrastructures requiring investments in equipment and personnel to manage them. Over time, there has been an evolution towards more efficient means of data storage and application management. &lt;br&gt;
Virtualization is one such development, allowing multiple applications to be hosted on one server. This lets organisations scale up or down as needed without significant additional hardware investment. Virtualization also provides greater flexibility for transferring resources between departments and projects.&lt;br&gt;
Recently, cloud computing has emerged as an alternative to traditional datacenter solutions. This technology offers scalability and cost efficiency that can't always be achieved through physical infrastructure-dependent methods. Additionally, cloud computing provides enhanced security features like integrated disaster recovery, which makes it more reliable than other options. Lastly, users can access a broad range of services from storage to analytics tools at far lower costs than traditional solutions require.&lt;br&gt;
The popularity of cloud computing is increasing rapidly in enterprises across the globe due to its cost savings and improved flexibility over conventional datacenters. Therefore, understanding the differences between these two models will be fundamental for seizing all potential benefits from this agile and economical way of doing business.&lt;/p&gt;

&lt;h1&gt;
  
  
  Exploring the Business Benefits of Cloud
&lt;/h1&gt;

&lt;p&gt;The potential benefits of cloud computing have become increasingly clear in today's business world. With its scalability, cost-effectiveness, and ability to facilitate quick deployment, it is no wonder that companies of all sizes are taking advantage of the technology. Cloud solutions enable businesses to respond quickly to their customers' needs, and they change IT cost models by shifting from capital expenditures (CAPEX) to operational expenditures (OPEX). Developers can also deploy applications with speed and agility thanks to the absence of lengthy deployment processes. Taken together, these characteristics demonstrate why cloud computing is a powerful tool for businesses looking to stay ahead in a competitive landscape.&lt;br&gt;
Moreover, the cultural changes enabled by cloud solutions should not be overlooked either. Teams are empowered with greater efficiency and flexibility: they have access to new tools when needed, instead of waiting for budgets and long implementation processes, which enables them to develop innovative products and services fast. Finally, DevOps engineers automate testing procedures, deploying code faster without sacrificing quality assurance standards, ultimately leading to improved customer satisfaction.&lt;br&gt;
In summary, cloud computing has firmly established itself as an invaluable asset for businesses seeking agility and cost savings, allowing them to remain competitive in an ever-evolving digital landscape. From scalability benefits and the CAPEX-to-OPEX shift to empowering teams with quicker deployments, the power of cloud computing presents numerous opportunities for enterprises large or small.&lt;/p&gt;

&lt;h1&gt;
  
  
  Maximising Cloud for Business Enablement
&lt;/h1&gt;

&lt;p&gt;Cloud computing is a game-changer for businesses that want to maximise their capabilities and stay competitive in an ever-evolving market. It allows companies to scale up or down quickly and cost-effectively, without having to commit to long-term contracts or invest in expensive hardware infrastructure. Cloud computing lets organisations access powerful technologies like AI and ML, which can be used for predictive analytics. Providers also offer tools like containers and continuous integration/deployment (CI/CD) pipelines to improve application deployment. Security measures such as encryption at rest and data protection come as default standards with cloud providers, allowing companies to store customer data safely while following applicable laws and regulations.&lt;br&gt;
In short, cloud computing provides the agility businesses of all sizes need to remain competitive while minimising the overhead costs associated with traditional datacenter solutions, and companies can access the latest technologies without worrying about security risks.&lt;/p&gt;

&lt;h1&gt;
  
  
  Harnessing the Power of Cloud for Agile Software Development
&lt;/h1&gt;

&lt;p&gt;Cloud computing has revolutionised software development and deployment. Cloud services for agile software development let developers quickly obtain new resources without spending money on hardware. Open source development models and platforms have also enabled rapid development cycles, allowing developers to quickly iterate on their codebase and expand their capabilities.&lt;br&gt;
The cost savings associated with using cloud services are another major advantage for businesses looking to enable agile software development. Cloud providers offer pay-as-you-go pricing plans that can help businesses save money on hardware investments. They only pay for the resources they need when they need them. Cloud services cut the need for in-house IT staff, further cutting costs associated with development projects.&lt;br&gt;
Best practices can also help businesses get the most out of their cloud computing investments for agile software development. Setting up a proper DevOps process is essential for enabling effective collaboration between teams and streamlining the delivery of new features. Automation tools such as CI/CD pipelines can remove manual processes and reduce the errors associated with manual deployments, while also speeding up time to market. Setting up observability systems can provide valuable insights into application performance and usage patterns that can be used to optimise system performance over time.&lt;br&gt;
Finally, using cloud services for DevOps processes like CI/CD is a way to get all the benefits with a safe and secure deployment. CI/CD pipelines allow developers to continuously push small changes into production without risking errors or downtime. This level of automation enables rapid iteration cycles while making sure that applications remain secure throughout deployment cycles.&lt;br&gt;
By understanding how these technologies work together and following best practices for using them effectively, businesses can unlock the full potential of cloud computing while maximising organisational agility.&lt;/p&gt;

&lt;h1&gt;
  
  
  Understanding how Cloud Impacts Business Culture
&lt;/h1&gt;

&lt;p&gt;The datacenter IT model often relied heavily on a select group of IT specialists to handle all aspects of technology, from infrastructure management to software development. In contrast, cloud computing encourages a do-it-yourself ethos, where various teams and individuals across the organisation become more self-sufficient in managing their IT needs.&lt;br&gt;
This shift involves empowering employees beyond the IT department to take ownership of their technology requirements. Whether it's provisioning resources in the cloud, customising applications, or analysing data. Cloud providers make tasks that used to require special knowledge more accessible to non-technical staff.&lt;br&gt;
However, for this transformation to be truly effective, it must extend to every facet of the organisation. It's no longer solely the responsibility of the IT department to handle software development. Instead, departments like marketing, sales, and finance should be encouraged to engage with the cloud. This change in culture makes the company more agile and responsive to changes in the market.&lt;br&gt;
Businesses can invest in training and development to make employees skilled in cloud technology, which will help them use it better. It is also important to have open communication, collaboration, and knowledge sharing. Organisations that do so can use the cloud's benefits more efficiently, allowing for a more agile, efficient, and innovative approach to technology use. This transformation is about more than the cloud technology itself: it makes the organisation more competitive and successful in the digital age.&lt;/p&gt;

&lt;h1&gt;
  
  
  Concluding: Unleashing the Value of Cloud in the Enterprise
&lt;/h1&gt;

&lt;p&gt;In short, cloud computing provides the tools necessary to remain agile and responsive in today's digital age.&lt;br&gt;
The possibilities offered by cloud computing in the enterprise are vast and far-reaching. Companies can save money, gain agility, access new technologies, improve collaboration, and meet data protection and privacy standards. Cloud solutions are like the turbo boost that will help businesses reach the finish line faster and with more style. In today's digital landscape, cloud computing isn't just an IT revolution - it's a mindset that must be adopted if you want to unlock the true potential of innovation and competitiveness. Cloud computing reduces operational and infrastructure costs while providing easy access to resources, applications, and services. In short, cloud computing can help you stay agile and responsive in today's digital age - so get ready to take the leap!&lt;/p&gt;

</description>
      <category>awscommunity</category>
      <category>cloudcomputing</category>
      <category>businesschange</category>
      <category>devops</category>
    </item>
    <item>
      <title>History of AWS Well-Architected</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Mon, 25 Sep 2023 13:14:22 +0000</pubDate>
      <link>https://forem.com/bramverhagen/history-of-aws-well-architected-3k2k</link>
      <guid>https://forem.com/bramverhagen/history-of-aws-well-architected-3k2k</guid>
      <description>&lt;h1&gt;
  
  
  Well-Architected Initiative 2012
&lt;/h1&gt;

&lt;p&gt;In 2012 Philip "Fitz" Fitzsimons started the Well-Architected initiative as he explains in &lt;a href="https://aws.amazon.com/blogs/architecture/on-architecture-and-the-state-of-the-art/"&gt;his blog&lt;/a&gt;. In this blog Fitz references Roman Architect Vitruvius Pollio and Henry Harrison Suplee, who in his view formed the basis for architecture reuse.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Both authors wrote books that captured the current knowledge on design principles and best practices (in architecture and engineering) to improve awareness and adoption.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Design principles and best practices form the basis for Well-Architected. The initiative was started to share best practices for architecting in the cloud, with AWS architects writing and sharing cloud native architectures.&lt;/p&gt;

&lt;h1&gt;
  
  
  Well-Architected Framework 2015
&lt;/h1&gt;

&lt;p&gt;During Re:Invent 2015 Amazon CTO Dr. Werner Vogels announced the white paper AWS Well-Architected Framework.&lt;br&gt;
Where frameworks like TOGAF and Zachman concentrate on a centralized enterprise architecture capability, AWS prefers to distribute capabilities into teams rather than having a centralized team with that capability. The Well-Architected Framework therefore talks about &lt;em&gt;practices&lt;/em&gt; and &lt;em&gt;mechanisms&lt;/em&gt; which should be adopted by all teams.&lt;br&gt;
The white paper describes four pillars:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Reliability&lt;/li&gt;
&lt;li&gt;Performance efficiency&lt;/li&gt;
&lt;li&gt;Cost Optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Well-Architected Review 2016
&lt;/h1&gt;

&lt;p&gt;In September 2016 Enterprise Support customers got access to the Well-Architected Review.&lt;br&gt;
Delivered by an AWS Solutions Architect, the review provides guidance and best practices on the four pillars of the Framework.&lt;/p&gt;

&lt;h1&gt;
  
  
  5th Pillar 2016
&lt;/h1&gt;

&lt;p&gt;In 2016 the Framework introduced Operational Excellence as the fifth pillar. This pillar looks at the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.&lt;/p&gt;

&lt;h1&gt;
  
  
  Well-Architected Partner Program 2017
&lt;/h1&gt;

&lt;p&gt;During Re:Invent 2017 the Well-Architected Partner Program was announced. With this program, delivery of the Well-Architected Review moved to the AWS Partner Network: AWS partners became responsible for carrying out the review instead of AWS solutions architects.&lt;br&gt;
At launch, 31 partners were selected as &lt;em&gt;Launch Partners&lt;/em&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  6th Pillar 2021
&lt;/h1&gt;

&lt;p&gt;Following the global movement towards decarbonization, AWS announced the 6th pillar of the Well-Architected Framework during Re:Invent 2021. The Sustainability Pillar focuses on the environmental footprint of your workload and introduces the Shared Responsibility Model of Cloud Sustainability.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>wellarchitected</category>
      <category>awscommunity</category>
      <category>cloud</category>
    </item>
    <item>
      <title>The need for a cloud landing platform</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Thu, 16 Feb 2023 15:03:59 +0000</pubDate>
      <link>https://forem.com/bramverhagen/the-need-for-a-cloud-landing-platform-31eg</link>
      <guid>https://forem.com/bramverhagen/the-need-for-a-cloud-landing-platform-31eg</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Cloud-based services have become increasingly popular, but they're not always easy to understand. That's why you should consider using a cloud landing platform that can help you navigate the world of cloud computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Need for a Cloud Landing Platform
&lt;/h2&gt;

&lt;p&gt;The need for a cloud landing platform is critical because enterprises are subject to all kinds of rules and regulations regarding data. For example, many countries have strict rules about how enterprises should handle personal information. The problem is that it takes developers time to build applications that work with different regulatory regimes around the world, which means they often end up not using the cloud to its full capacity.&lt;/p&gt;

&lt;p&gt;Cloud landing platforms allow organizations to manage their workloads in one place - and make sure those resources comply with any (local) laws or regulations you need to follow. With this kind of control at your fingertips, you can do more than just develop apps: you'll also be able to get started on building new technologies faster than ever before!&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is a Cloud Landing Platform?
&lt;/h2&gt;

&lt;p&gt;Cloud landing platforms are a place where you can bring your cloud workloads, providing them with governance and guardrails, as well as a common set of tools and services. These platforms provide a single entry point for all your business applications, so that developers can easily find their way into the cloud.&lt;/p&gt;

&lt;p&gt;The goal of a landing platform is to create a standard for how your business applications are built and deployed. The idea behind this is that you want to be able to take any application from any source, whether it’s from an internal team or an external partner, and easily get it deployed in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use A Cloud Landing Platform?
&lt;/h2&gt;

&lt;p&gt;You want to easily deploy your applications to the public cloud. Your developers can use the cloud landing platform to easily manage multiple accounts, users and teams.&lt;/p&gt;

&lt;p&gt;You need to scale up your application team by adding new members or outsourcing some of their duties (e.g., data migration). The cloud landing platform allows you to manage those changes in one place with ease.&lt;/p&gt;

&lt;p&gt;You want to easily migrate your applications from on-premises to the cloud, and you need a single tool for managing all of your infrastructure. The cloud landing platform makes it easy to manage all of your cloud infrastructure, accounts and projects from a single place.&lt;/p&gt;

&lt;h2&gt;
  
  
  The two principles of a cloud landing platform
&lt;/h2&gt;

&lt;p&gt;Cloud computing has the potential to revolutionize how companies operate. However, there are still many challenges associated with cloud computing that must be addressed before it can be fully realized. One of these challenges is ensuring that users have a secure way of accessing their data in the cloud and that they have access controls on what can be done with it once accessed.&lt;/p&gt;

&lt;p&gt;Cloud landing platforms solve this problem by providing governance and guardrails, so that users know which actions are permitted and which are prohibited for any given service or application offered through the platform.&lt;/p&gt;
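&lt;p&gt;On AWS, for example, one common guardrail of this kind is a Service Control Policy that denies any action outside a set of approved regions. The sketch below is simplified (real policies usually exempt global services, and the regions are just examples):&lt;/p&gt;

```python
import json

# Simplified example guardrail: an AWS Service Control Policy (SCP) that
# denies any action performed outside two approved European regions.
# Attached at the organization level, every account on the landing
# platform inherits the restriction automatically.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
                }
            },
        }
    ],
}

# Render the policy document as the JSON you would attach in AWS Organizations.
print(json.dumps(scp, indent=2))
```

&lt;p&gt;Because the guardrail lives in the platform rather than in each application, developers cannot accidentally place regulated data in a non-approved region, no matter what they deploy.&lt;/p&gt;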

&lt;h2&gt;
  
  
  A cloud landing platform is needed for enterprises.
&lt;/h2&gt;

&lt;p&gt;Cloud-based platforms are an ideal solution for enterprises because they provide a single point of entry and single billable entity. With this in mind, it's important that you have a cloud landing platform that can help you govern your cloud workloads effectively so that they don't become an unmanageable problem for your company.&lt;/p&gt;

&lt;p&gt;A good governance system will help ensure that all aspects of the business are handled efficiently and effectively using one set of processes across all departments or teams within an organization. It also ensures that projects aren't duplicated across different teams or departments so as not to create confusion about who should be responsible for what part of the process or task at hand (i.e., "it's my job!").&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We are sure that you have a lot of questions about landing platforms and how they can help your business. We hope this post has answered some of those questions and given you a better understanding of what it takes to make your move to the cloud successful.&lt;/p&gt;

&lt;p&gt;If there’s anything else we can answer for you, please don’t hesitate to reach out! We love hearing from our customers and always appreciate feedback from industry professionals.&lt;/p&gt;

&lt;p&gt;We hope this article has been helpful to those looking for more information about Cloud Landing Platforms and how they can be used effectively in their businesses!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>enterprise</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
    <item>
      <title>AWS Well Architected Framework &amp; I</title>
      <dc:creator>Bram Verhagen</dc:creator>
      <pubDate>Mon, 24 Oct 2022 14:53:41 +0000</pubDate>
      <link>https://forem.com/bramverhagen/aws-well-architected-framework-i-4mmi</link>
      <guid>https://forem.com/bramverhagen/aws-well-architected-framework-i-4mmi</guid>
      <description>&lt;h1&gt;
  
  
  My journey to the cloud
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Where it all started
&lt;/h2&gt;

&lt;p&gt;In the late 90s I got my first PC and turned it into an Ubuntu server with Apache. I forwarded port 80 in my ISP NAT router and tried to reach the default website from within my own home network. It worked; I could reach it! 😄 😄&lt;br&gt;
Little did I know that it wasn't available from the outside world...&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting into IT
&lt;/h2&gt;

&lt;p&gt;A jump forward in time. I had my fair share of failed studies and jobs and was at a turning point in life. I needed to do something, I couldn't go on like this...&lt;br&gt;
As you do, as you need to overthink your life... I went to the pub.&lt;br&gt;
There I bumped into an old colleague and talked about where to go next. And there he spoke the legendary words; "You like computers, don't you? Why don't you join us in IT?"&lt;br&gt;
Well, if 'IT' was working with computers... Why not...&lt;br&gt;
I got myself a job and worked my way up from service desk to system administrator and later solutions architect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn about cloud
&lt;/h2&gt;

&lt;p&gt;Of course, I heard about cloud during those years. But I didn't pay that much attention; there was enough fun on-prem. Till more and more customers were 'going to the cloud'. Halfway through the 2010s I took the leap and googled 'how to architect in the cloud'.&lt;br&gt;
I learned a lot and stumbled upon the Well Architected Framework. Something that back then was a pure AWS affair.&lt;br&gt;
What stood out immediately was the principle "stop guessing capacity". Being a pre-sales architect at the time, that could be considered my fulltime job... The Well Architected Framework had me. I read it cover to cover.&lt;/p&gt;

&lt;h1&gt;
  
  
  Brief History of the Well Architected Framework
&lt;/h1&gt;

&lt;h2&gt;
  
  
  The start
&lt;/h2&gt;

&lt;p&gt;Like many architects before them, at AWS the architects believed that capturing and sharing best practices leads to better outcomes. In 2012 Philip "Fitz" Fitzsimons and his team started an initiative called Well-Architected. A push to share the best practices for architecting in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Well Architected Framework
&lt;/h2&gt;

&lt;p&gt;In October 2015 Jeff Barr announced the Well-Architected Framework in his blog &lt;a href="https://aws.amazon.com/blogs/aws/are-you-well-architected/"&gt;"Are you Well-Architected?"&lt;/a&gt;.&lt;br&gt;
The framework at that moment was based around four pillars:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Reliability&lt;/li&gt;
&lt;li&gt;Performance Efficiency&lt;/li&gt;
&lt;li&gt;Cost Optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each pillar represents a set of design principles.&lt;br&gt;
The effort came as an answer to customers' requests for AWS to be more prescriptive in their advice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Well-Architected Review
&lt;/h2&gt;

&lt;p&gt;Soon after the introduction, AWS Solutions Architects started assessing workloads against the framework. In September 2016 the Well-Architected Review became an official part of Enterprise Support.&lt;/p&gt;

&lt;h2&gt;
  
  
  5th pillar
&lt;/h2&gt;

&lt;p&gt;Customers asked for more guidance on operating their workloads. That's why Fitz announced a 5th pillar in October 2016: the pillar of Operational Excellence, with design principles around monitoring and continual improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  6th pillar
&lt;/h2&gt;

&lt;p&gt;Following the global trend of taking better care of the planet we live on, at Re:Invent 2021 Werner Vogels announced the 6th pillar: the Sustainability pillar, containing principles about reducing carbon footprint.&lt;/p&gt;

&lt;h1&gt;
  
  
  Where I am today
&lt;/h1&gt;

&lt;p&gt;The Well-Architected Framework didn't let me get away. Today I'm the technical lead of a stream of AWS professionals at an AWS partner. I'm AWS Solutions Architect Professional certified and entitled to run Well-Architected Reviews for our customers.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>cloud</category>
      <category>awsarchitecture</category>
    </item>
  </channel>
</rss>
