<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Matthew Thomas</title>
    <description>The latest articles on Forem by Matthew Thomas (@mthomas564).</description>
    <link>https://forem.com/mthomas564</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3277849%2F580c37aa-d242-4d65-a031-031938286ec6.jpeg</url>
      <title>Forem: Matthew Thomas</title>
      <link>https://forem.com/mthomas564</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mthomas564"/>
    <language>en</language>
    <item>
      <title>Azure Config as Code</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Wed, 29 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/azure-config-as-code-2o67</link>
      <guid>https://forem.com/mthomas564/azure-config-as-code-2o67</guid>
      <description>&lt;p&gt;If you've read any of my other posts, you'll know that I am a fan of Azure App Configuration. I've specifically spoken about its use for Feature Flag management in my &lt;a href="https://dev.to/posts/feature-flags-what-and-why"&gt;feature flag what and why&lt;/a&gt; and &lt;a href="https://dev.to/posts/tbd-feature-flagging"&gt;TBD feature flagging&lt;/a&gt; posts.&lt;/p&gt;

&lt;p&gt;It is a great tool for managing both configuration and feature flags, and one that I have recently been helping development teams implement. Lately we've been discussing using it as the main store for an application, including tenant information to be used with FinBuckle, and this raised some questions about how to manage it properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problems
&lt;/h2&gt;

&lt;p&gt;Firstly, to manage values you have to go into the Azure Portal, navigate to the resource, and edit them directly. This is okay for a small number of values, or for simple values. As mentioned, we want to use it as the main store for an application, so we need to be able to store complex values such as JSON objects. On top of this, there is no real way to version anything within it; yes, you can take snapshots and add tags, but that's not really version control.&lt;/p&gt;

&lt;p&gt;Secondly, there is no fine-grained way to control network access; it's either open or closed, which is not ideal when you want to keep it private but still be able to manage it.&lt;/p&gt;

&lt;p&gt;These points make it feel less than enterprise-ready, despite it being a brilliant tool. We also use API Management, and therefore &lt;a href="https://learn.microsoft.com/en-us/azure/architecture/example-scenario/devops/automated-api-deployments-apiops" rel="noopener noreferrer"&gt;APIOps&lt;/a&gt;, and this is where the inspiration came from...&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;I want to make managing App Configuration as easy as possible whilst supporting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex Values&lt;/li&gt;
&lt;li&gt;Version Control and Rollback&lt;/li&gt;
&lt;li&gt;Network Access Control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and making it enterprise-ready.&lt;/p&gt;

&lt;p&gt;To do this, I created &lt;a href="https://github.com/MThomas564/azure-config-as-code" rel="noopener noreferrer"&gt;Azure Config as Code&lt;/a&gt;. This is a template that can be cloned into your own environment to help manage App Config in a code style.&lt;/p&gt;

&lt;h3&gt;
  
  
  What does it do?
&lt;/h3&gt;

&lt;p&gt;It covers a few elements to enable you to properly manage your App Config:&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuration as Code
&lt;/h4&gt;

&lt;p&gt;Store your application configuration in version-controlled JSON files, which can contain complex values. A file is created for each environment, allowing each to be managed separately, as their values will differ.&lt;/p&gt;

&lt;p&gt;The files are stored within a schema I have created which supports the following types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;String&lt;/li&gt;
&lt;li&gt;JSON&lt;/li&gt;
&lt;li&gt;JSON Array&lt;/li&gt;
&lt;li&gt;Feature Flag&lt;/li&gt;
&lt;li&gt;Key Vault&lt;/li&gt;
&lt;li&gt;Default plain text&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Feature Flag and Key Vault types exist specifically to support importing feature flags and Key Vault references, meaning every aspect of the configuration can be stored in Git.&lt;/p&gt;
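&lt;p&gt;As a rough illustration of the shape, based on the key/value/type fields shown in the example further down (the exact field names, particularly for the Feature Flag and Key Vault types, are defined by the repo's schema, so treat these entries as hypothetical):&lt;/p&gt;

```json
[
  {
    "key": "App:WelcomeMessage",
    "value": "Hello from config as code",
    "type": "string"
  },
  {
    "key": "App:Secrets:DbConnection",
    "value": "https://my-vault.vault.azure.net/secrets/DbConnection",
    "type": "keyvault"
  }
]
```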

&lt;h4&gt;
  
  
  Flattening Complex Structures
&lt;/h4&gt;

&lt;p&gt;The configuration as code is then flattened into the KVSet format, which can be imported into App Config. As part of this, complex structures such as JSON objects and arrays are automatically flattened so they can be stored safely in App Config, while remaining proper, editable objects in Git. This takes something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Finbuckle:MultiTenant:Stores:ConfigurationStore:Tenants"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"910eb0bf-6a57-454b-8458-f3e299591684"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Identifier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"910eb0bf-6a57-454b-8458-f3e299591684"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Test1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Keys"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                        &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TestKey1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                        &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"53d9262b-a83a-4f31-b4bc-213187a59eb2"&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b987448b-5418-4ef8-8a71-f55ea16cf156"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Identifier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b987448b-5418-4ef8-8a71-f55ea16cf156"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Test2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Keys"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                        &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TestKey2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                        &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"8327d419-f013-490e-8466-8fb6f0a5a15e"&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"json"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and converts it to the following KVSet object with serialised JSON Values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Finbuckle:MultiTenant:Stores:ConfigurationStore:Tenants"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"[{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Id&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;910eb0bf-6a57-454b-8458-f3e299591684&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Identifier&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;910eb0bf-6a57-454b-8458-f3e299591684&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Name&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Test1&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Keys&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:[{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Name&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;TestKey1&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Value&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;53d9262b-a83a-4f31-b4bc-213187a59eb2&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}]},{&lt;/span&gt;&lt;span 
class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Id&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;b987448b-5418-4ef8-8a71-f55ea16cf156&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Identifier&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;b987448b-5418-4ef8-8a71-f55ea16cf156&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Name&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Test2&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Keys&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:[{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Name&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;TestKey2&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Value&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;8327d419-f013-490e-8466-8fb6f0a5a15e&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}]}]"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"content_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"application/json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
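&lt;p&gt;The flattening step is essentially "serialise complex values to a string and attach a JSON content type". Here is a minimal Python sketch of that idea; the function name and schema handling are my own illustration, not the template's actual code:&lt;/p&gt;

```python
import json

def flatten_entry(entry):
    """Turn a config-as-code entry into an App Config KVSet item.

    Complex 'json' values are serialised to a string so that App Config
    can store them, while the Git copy stays a proper, editable object.
    """
    value = entry["value"]
    if entry.get("type") == "json":
        # Compact separators keep the stored value small.
        value = json.dumps(value, separators=(",", ":"))
        content_type = "application/json"
    else:
        content_type = ""
    return {
        "key": entry["key"],
        "value": value,
        "content_type": content_type,
        "tags": {},
    }

tenants = {
    "key": "Finbuckle:MultiTenant:Stores:ConfigurationStore:Tenants",
    "value": [{"Id": "910eb0bf", "Name": "Test1"}],
    "type": "json",
}
print(flatten_entry(tenants)["value"])  # a single serialised JSON string
```

&lt;p&gt;The reverse direction is just deserialisation, which is why the Git copy can stay pretty-printed while App Config holds the compact string.&lt;/p&gt;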



&lt;h4&gt;
  
  
  Version Control and Rollback
&lt;/h4&gt;

&lt;p&gt;The files are all stored in Git, so naturally you can use Git to version control them and track the changes, as well as support pull requests for proposed changes.&lt;/p&gt;

&lt;p&gt;On top of this, the pipeline templates are set up to create an immutable rollback point by publishing the flattened KVSets as build artifacts. These can be retained and used to roll back to previous versions as needed.&lt;/p&gt;

&lt;h4&gt;
  
  
  Network Access Control
&lt;/h4&gt;

&lt;p&gt;Now, to say that this template fixes this would be a stretch. Instead, it sets up the framework for you to implement it using your own DevOps agents. Realistically, if you are working in a private environment, you will have, or want to have, private DevOps agents within your network, which can reach the App Config instances over private networking. The templates are also structured so you can use a different agent pool for each stage/environment, allowing different access levels for each.&lt;/p&gt;

&lt;h4&gt;
  
  
  Dry Run and Approval
&lt;/h4&gt;

&lt;p&gt;The pipelines also include a step to carry out a dry-run import of the configuration. This returns a list of the changes that are going to be made, allowing a manual check before pushing. To further enhance this, a manual approval step is included, so the appropriate approvers are notified and must sign off the changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;If this is something that you are interested in using, you can head over to the &lt;a href="https://github.com/MThomas564/azure-config-as-code" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt; and clone the repository.&lt;/p&gt;

&lt;p&gt;From here you will need to update the pipeline variables to suit your environment. Use the dev.json file in the config folder as a guide for creating your own config files.&lt;/p&gt;

&lt;p&gt;Import the pipelines into your DevOps environment, and you should be good to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is a template that I have created to help manage App Configuration in a code style. It doesn't replace the Azure Portal, but it does make App Configuration a more enterprise-ready solution that can be managed using CI/CD.&lt;/p&gt;

&lt;p&gt;I have some additional features that I plan to add to this over the coming weeks, so stay tuned! If there is something specific that you'd like to see it do, feel free to raise an issue on the GitHub repo.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>appconfiguration</category>
      <category>devops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Feature Flags in Trunk-Based Development</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Tue, 14 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/feature-flags-in-trunk-based-development-81p</link>
      <guid>https://forem.com/mthomas564/feature-flags-in-trunk-based-development-81p</guid>
      <description>&lt;p&gt;Firstly, what is Trunk-Based Development (TBD)? The idea behind TBD is to have one main branch that multiple developers commit lots of smaller changes to. This main branch is the 'trunk' of the codebase. Developers should be committing frequently, and the main trunk should always be deployable.&lt;/p&gt;

&lt;p&gt;Feature flags, as discussed &lt;a href="https://dev.to/posts/feature-flags-what-and-why"&gt;here&lt;/a&gt;, are flags that allow functionality and features to be turned on and off without deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need them specifically in TBD?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Decoupled deployments
&lt;/h3&gt;

&lt;p&gt;Adding flags for a feature means that we can deploy the code decoupled from a release. This means we could deploy bug fixes to production, but without enabling that new feature that might still be undergoing testing.&lt;/p&gt;

&lt;p&gt;Why do we need this? Back to that trunk, we have one main branch that should always be deployable. This is especially important when a system goes into an operations stage; we need to be able to address functionality that isn't working as intended, but also be able to release new features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Progressive delivery
&lt;/h3&gt;

&lt;p&gt;Progressive delivery allows a feature to be rolled out, but not to everyone. Think A/B testing or canary releases: we let a subset of users see a new feature or change behind a flag. A challenge of releases is that you test as much as you can, but real end users always find something different, whether they use it slightly differently, on a different device, or in some other unexpected way.&lt;/p&gt;

&lt;p&gt;Using canary or A/B lets the change be tested with real users in a real environment, but without it being available to all users at once. This way, it gets tested with a subset, and you are able to get targeted feedback. Plus, if there is an issue, you don't affect the whole user base.&lt;/p&gt;
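&lt;p&gt;The bucketing behind a percentage rollout can be sketched in a few lines. This is an illustrative example rather than any particular service's implementation; the trick is that hashing the flag and user together gives each user a stable bucket:&lt;/p&gt;

```python
import hashlib

def is_enabled(flag_name, user_id, rollout_percent):
    """Enable the flag for roughly rollout_percent of users.

    Hashing flag and user together puts each user in a stable bucket
    from 0 to 99, so the same user always gets the same answer.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return rollout_percent > bucket

# A 25% canary: a subset of users sees the new feature, and nobody
# flips back and forth between the two experiences.
print(is_enabled("NewCheckoutFlow", "user-42", 25))
```

&lt;p&gt;Feature flag services implement targeting like this for you, but the principle is the same: deterministic, per-user assignment that can be widened gradually.&lt;/p&gt;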

&lt;h3&gt;
  
  
  Improved rollback
&lt;/h3&gt;

&lt;p&gt;When a new feature is rolled out, there is always a possibility that something goes wrong. It could be a small bug, or it could flat out not work. Without flags, the only way to roll back would be to redeploy the previous version, and depending on the deployment and hosting strategy, this could be a long process causing even more downtime.&lt;/p&gt;

&lt;p&gt;Now, with flags, we can 'roll back' by just turning off the flag. The flag goes off, the feature is disabled, and the system is back operational. This is an incredibly fast way to mitigate issues, which is great for the operations piece.&lt;/p&gt;

&lt;h2&gt;
  
  
  The challenges
&lt;/h2&gt;

&lt;p&gt;With any technique there are always challenges, and feature flags are no different.&lt;/p&gt;

&lt;h3&gt;
  
  
  Too many or stale flags
&lt;/h3&gt;

&lt;p&gt;Creating too many flags can lead to a messy codebase and a constant accumulation of technical debt. The more flags that get added, the higher the chance of stale flags: ones that are created and never used or tidied up.&lt;/p&gt;

&lt;p&gt;To avoid this, clear naming conventions and lifecycle policies are important. Ensure that flags have an owner who is responsible for their lifecycle. As part of this, once the feature is completed and fully rolled out, the flag should be removed. Sometimes, though, removal isn't the right choice; depending on the feature, you might want to keep the flag so it can be turned off in case of issues, but this should be a conscious decision.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code complexity
&lt;/h3&gt;

&lt;p&gt;Implementing feature flags is a relatively simple piece of work, especially if you use a prebuilt library or service. However, the addition of flags creates additional complexity regardless, as you have to manage the different branches of the code.&lt;/p&gt;

&lt;p&gt;Where possible, you should use flags to encapsulate a complete feature, and avoid using them throughout the business logic. This way, the code remains as clean as possible but can still be toggled.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing complexity
&lt;/h3&gt;

&lt;p&gt;As we said above, adding more flags creates more branches through the code—more routes that the application flow can take. This means that testing needs to become more complex to support this. We still need to be able to test the application as a whole, including the flags.&lt;/p&gt;

&lt;p&gt;This can be mitigated by limiting the scope of flags, ideally one flag per feature. Doing so helps to limit the complexity, as there are only two states for the feature: on or off. If there are multiple flags that interact for a feature, the testing must cover each of the combinations, which can quickly grow out of hand.&lt;/p&gt;
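&lt;p&gt;The growth is easy to see by enumerating the states: n independent flags give 2^n combinations. A quick Python illustration:&lt;/p&gt;

```python
from itertools import product

flags = ["NewCheckoutFlow", "NewSearch", "BetaDashboard"]

# Each on/off combination is a distinct path the tests may need to cover.
combinations = list(product([False, True], repeat=len(flags)))

print(len(combinations))  # 8 paths for just three interacting flags
```

&lt;p&gt;Five interacting flags would already mean 32 paths, which is why keeping flags independent and scoped to one feature matters so much.&lt;/p&gt;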

&lt;p&gt;Automated testing is also key to help with this; tests can be run frequently and help to check each of the possible paths without a developer needing to manually test each one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Governance and ownership
&lt;/h3&gt;

&lt;p&gt;As mentioned above, having clear ownership of flags is important. This includes the lifecycle of the flag, but also its usage.&lt;/p&gt;

&lt;p&gt;A flag should have a clear owner to ensure that someone is responsible for it. This helps to avoid the situation where a flag gets created, used, and then forgotten about. The owner is also responsible for checking that the flag is actually used, and if not, for removing it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Visibility and tooling
&lt;/h3&gt;

&lt;p&gt;As I've discussed in a previous blog, there are a number of different ways to implement feature flags. Depending on what you use, the visibility of your flags will differ.&lt;/p&gt;

&lt;p&gt;Using a feature flag service like Azure App Configuration will give you a single place to see your flags and their status. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn55uaouhxxc3l8ge5pem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn55uaouhxxc3l8ge5pem.png" alt="Azure App Configuration Feature Flags"&gt;&lt;/a&gt;Here you can see the flags and whether they are enabled or not, giving a quick, easy way to see what features are enabled in your application.&lt;/p&gt;

&lt;p&gt;As someone working in a Microsoft house, I'm always going to be biased towards App Configuration, but other services like LaunchDarkly will offer the same visibility in their own way.&lt;/p&gt;

&lt;p&gt;However, as I have discussed before, you can implement flags using just app settings or environment variables. This is okay for quick development or small projects, but you will not have a simple way to get this visibility; you would need to build your own tooling or dashboards to monitor it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommended practices
&lt;/h2&gt;

&lt;p&gt;There are a few things that should be considered when implementing flags, especially in a TBD environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Treat flags as short-lived; they should not become a permanent piece of configuration or logic.&lt;/li&gt;
&lt;li&gt;Add flag management to the definition of done for a feature. Once fully complete, the flags should be cleaned up if possible.&lt;/li&gt;
&lt;li&gt;Implement flagging 'properly' using an appropriate tool, not by rolling your own in-app settings or similar.&lt;/li&gt;
&lt;li&gt;Automate where possible.

&lt;ul&gt;
&lt;li&gt;Create a custom table to track flag usage, when they were created, by whom, and their current status.&lt;/li&gt;
&lt;li&gt;Use Application Insights or similar to track flag usage in the application; this can help identify stale flags or ones that are not being used.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
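&lt;p&gt;The custom tracking table mentioned above doesn't need to be complicated. A hypothetical in-memory sketch (in practice this would be a database table or a telemetry query, and every name here is made up):&lt;/p&gt;

```python
from datetime import date

# Hypothetical registry rows: who owns each flag and when it was last hit.
flag_registry = [
    {"name": "NewCheckoutFlow", "owner": "checkout-team",
     "created": date(2025, 3, 1), "last_evaluated": date(2025, 10, 1)},
    {"name": "OldBannerTest", "owner": "web-team",
     "created": date(2024, 11, 5), "last_evaluated": date(2025, 1, 2)},
]

def stale_flags(registry, today, max_idle_days=90):
    """Flags that haven't been evaluated recently are clean-up candidates."""
    return [f["name"] for f in registry
            if (today - f["last_evaluated"]).days > max_idle_days]

print(stale_flags(flag_registry, date(2025, 10, 29)))  # ['OldBannerTest']
```

&lt;p&gt;Feeding "last evaluated" from Application Insights (or whatever telemetry you use) turns this into an automatic stale-flag report for the owners.&lt;/p&gt;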

&lt;h2&gt;
  
  
  Example implementation
&lt;/h2&gt;

&lt;p&gt;Here’s an example of how a new feature can be implemented using trunk-based development and feature flags.&lt;/p&gt;

&lt;p&gt;The code has been committed directly to the main branch, as expected in TBD. However, because the feature isn’t yet finished or fully tested, it’s hidden behind a flag. This allows the code to be continuously deployed without affecting end users, while ensuring the rest of the team can keep working on other features or bug fixes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IActionResult&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;Checkout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;OrderModel&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_featureManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;IsEnabledAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"NewCheckoutFlow"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;NewCheckoutService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;LegacyCheckoutService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the legacy checkout flow continues to operate until the team is ready to switch over to the new one. The feature can be tested safely by toggling the flag on and verifying the new path. Keeping the legacy code available means the system can quickly fall back if an issue is discovered. Once testing is complete and the business is confident in the new flow, the flag can be removed and the new implementation becomes the default.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Trunk-based development is a great, simple way to manage a codebase and help developers work together effectively. It does, however, introduce challenges around deployments and ensuring that other fixes or features are not blocked by one another. Properly implementing feature flags can help to mitigate this challenge and keep the code clean, safe, and decoupled from deployments.&lt;/p&gt;

&lt;p&gt;Using both in combination should be a key part of any software development lifecycle to ensure that a team can deliver value effectively and efficiently.&lt;/p&gt;

&lt;p&gt;I’ve seen feature flags thrown in at the last minute to get something released while hiding another problem. Did it work? Yes, but it also created a mess that was hard to clean up later. Designing your SDLC with feature flags in mind from the start will leave you in a far better place, without needing sticky plasters over deeper issues.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>trunkbaseddevelopmen</category>
      <category>featureflags</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Automating Astro Builds and Deployments</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Fri, 03 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/automating-astro-builds-and-deployments-5alj</link>
      <guid>https://forem.com/mthomas564/automating-astro-builds-and-deployments-5alj</guid>
      <description>&lt;p&gt;I decided on &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fastro.build%2F" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fastro.build%2F" alt="Astro" width="" height=""&gt;&lt;/a&gt; as the best fit for my personal blog; it's simple to use and author with, as it is written in MDX. Primarily, it creates a static site that can be published, making it lightweight to host and fast.&lt;/p&gt;

&lt;p&gt;Recently, I moved all of my DNS management to Cloudflare and wanted to make further use of it, having invested time in setting it up. This led me to discover Cloudflare Workers, which provide a hosting option included in the free plan.&lt;/p&gt;

&lt;p&gt;Now, what problem was I trying to solve?&lt;/p&gt;

&lt;p&gt;I want to be able to write my blogs whenever I wish, which sometimes means more than one in a day, one a week, or even one a month. However, I want to automate their publishing, as I could have done with something like WordPress.&lt;/p&gt;

&lt;h2&gt;
  
  
  What solution did I come up with?
&lt;/h2&gt;

&lt;p&gt;A key element was that the solution had to be free, or at least as inexpensive as possible. I also wanted to avoid hosting it myself in my homelab, as that would then become a 'production' service. I came up with the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write blog posts in MDX and push to GitHub with the publish datetime set in the future in the metadata.&lt;/li&gt;
&lt;li&gt;Run a daily GitHub Action to commit to the site's repository.&lt;/li&gt;
&lt;li&gt;Cloudflare Worker watches for new commits on GitHub and updates the site.&lt;/li&gt;
&lt;li&gt;Zapier monitors the site's RSS feed and pushes new posts into &lt;a href="https://buffer.com/" rel="noopener noreferrer"&gt;Buffer&lt;/a&gt; to publish to LinkedIn.&lt;/li&gt;
&lt;li&gt;Check and update posts in Buffer and allow them to auto-publish to LinkedIn.&lt;/li&gt;
&lt;li&gt;Share on LinkedIn.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltdgwyx1llp30kmqaiyn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltdgwyx1llp30kmqaiyn.png" alt="Automation Diagram" width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1
&lt;/h3&gt;

&lt;p&gt;The first step is to write the blog posts in MDX and push them to GitHub. There are required fields in the metadata for the blog files, including 'pubDatetime'.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Automating Astro Builds and Deployments&lt;/span&gt;
&lt;span class="na"&gt;slug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;automating-astro&lt;/span&gt;
&lt;span class="na"&gt;pubDatetime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2025-10-03T00:00:00&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A guide on how I have automated the build and deployment of my Astro site using GitHub Actions, Zapier and Cloudflare.&lt;/span&gt; 
&lt;span class="na"&gt;draft&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;development&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;astro&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;automation&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;github&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;cloudflare&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;heroImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;../../assets/images/AstroAutomation.png&lt;/span&gt;
&lt;span class="na"&gt;ogImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;../../assets/images/AstroAutomation.png&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2
&lt;/h3&gt;

&lt;p&gt;Next, there is the GitHub Action that runs daily to add a commit to the site's repository, which Cloudflare watches for. This could potentially be done more efficiently, but for now it works as I need.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Daily Auto Rebuild&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;7&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt; &lt;span class="c1"&gt;# 8 AM UK time (7 AM UTC)&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;trigger-rebuild&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout repo&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create empty commit&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;git config user.name "GitHub Actions"&lt;/span&gt;
          &lt;span class="s"&gt;git config user.email "actions@github.com"&lt;/span&gt;
          &lt;span class="s"&gt;git commit --allow-empty -m "chore: trigger rebuild"&lt;/span&gt;
          &lt;span class="s"&gt;git push&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Wait for site to build&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep &lt;/span&gt;&lt;span class="m"&gt;300&lt;/span&gt; &lt;span class="c1"&gt;# Wait 5 minutes (300 seconds)&lt;/span&gt;


      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Extract latest post URL from RSS&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;get_post_url&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;RSS_URL="https://matthewthomas.cloud/rss.xml"&lt;/span&gt;
          &lt;span class="s"&gt;POST_URL=$(curl -s "$RSS_URL" | awk 'BEGIN{RS="&amp;lt;item&amp;gt;"} NR==2 {match($0, /&amp;lt;link&amp;gt;([^&amp;lt;]+)&amp;lt;\/link&amp;gt;/, a); print a[1]}' )&lt;/span&gt;
          &lt;span class="s"&gt;echo "Latest post URL: $POST_URL"&lt;/span&gt;
          &lt;span class="s"&gt;echo "post_url=$POST_URL" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ping LinkedIn Post Inspector&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;curl -s "https://www.linkedin.com/post-inspector/inspect?url=${{ steps.get_post_url.outputs.post_url }}"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, this adds a daily chore commit. Additionally, I have added a step to put the blog post URL through the LinkedIn Post Inspector. This step might be redundant, but I found that sometimes LinkedIn would not pick up the OG image.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3
&lt;/h3&gt;

&lt;p&gt;The Cloudflare Worker connected to the repository watches for new commits to trigger a new build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs017cl522imrnizvf2bn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs017cl522imrnizvf2bn.png" alt="Cloudflare Worker" width="772" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When it sees a new commit, it will pull the latest code and then build the site using the following commands:&lt;/p&gt;

&lt;p&gt;Build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx astro build

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx wrangler deploy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
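
&lt;p&gt;For reference, the Worker needs to know where the static output lives. A minimal sketch of what the wrangler config might look like, assuming Astro's default &lt;code&gt;dist&lt;/code&gt; output folder (the worker name here is illustrative, not my actual config):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;# wrangler.toml (sketch; names are illustrative)
name = "personal-blog"
compatibility_date = "2025-10-01"

[assets]
directory = "./dist"  # Astro's default build output folder

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;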



&lt;h3&gt;
  
  
  Step 4
&lt;/h3&gt;

&lt;p&gt;Next, I have a simple Zapier workflow that monitors my RSS feed every 15 minutes to look for new posts. When found, it automatically adds them to Buffer. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ni7aa74139bfm9y6a2l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ni7aa74139bfm9y6a2l.png" alt="Zapier Workflow" width="362" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5
&lt;/h3&gt;

&lt;p&gt;The final automation step is in Buffer, a useful social media management tool. Now, do I really need one? Probably not; I was somewhat forced into it by wanting to automate the LinkedIn posting. I tried to do it via Zapier directly, but the quality of the post created was very poor unless I paid for a subscription.&lt;/p&gt;

&lt;p&gt;I found Buffer to be a good compromise. I can automate posts through this workflow, then manage the posts in Buffer to make any changes or updates before they are published. I can also use it to schedule any other posts I may want to share outside of blog posts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6
&lt;/h3&gt;

&lt;p&gt;Lastly, the post gets shared on LinkedIn. This is something I have only started doing with this blog, but I want to share my work and thoughts more widely.&lt;/p&gt;

&lt;p&gt;This is taking some learning, especially with how the LinkedIn algorithm works, but I am starting to get a feel for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Overall, I am really happy with how this setup works for me. I couldn't find any existing solutions online that worked for me, so creating my own was the best option.&lt;/p&gt;

&lt;p&gt;It could probably be simplified in some places, but it hits the key points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free or very low cost&lt;/li&gt;
&lt;li&gt;No hosting required&lt;/li&gt;
&lt;li&gt;Fully automated&lt;/li&gt;
&lt;li&gt;Easy to write and publish blogs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'd be interested to hear if this works for anyone else, or if you have any suggestions as to how to improve it.&lt;/p&gt;

</description>
      <category>development</category>
      <category>astro</category>
      <category>automation</category>
      <category>github</category>
    </item>
    <item>
      <title>Sharing Code Across .NET Projects with csproj Includes: An Azure Functions Example</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Thu, 28 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/sharing-code-across-net-projects-with-csproj-includes-an-azure-functions-example-3ln6</link>
      <guid>https://forem.com/mthomas564/sharing-code-across-net-projects-with-csproj-includes-an-azure-functions-example-3ln6</guid>
      <description>&lt;p&gt;The ability to reuse code without duplication is always the aim of any developer following the DRY principle, &lt;em&gt;Don't Repeat Yourself&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Normally, as developers, we turn shared code into a NuGet package and install the library that way, allowing us to reuse it. This works well in many circumstances, but not all.&lt;/p&gt;

&lt;p&gt;An alternative is to use compile links within the csproj file, configuring a project to compile files from a different project, or more precisely, from a different path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;ItemGroup&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;Compile&lt;/span&gt; &lt;span class="na"&gt;Include=&lt;/span&gt;&lt;span class="s"&gt;"../Shared/Functions/Function1.cs"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/ItemGroup&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above snippet shows how to include a file from another location so that it is compiled within our application.&lt;/p&gt;
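
&lt;p&gt;If you need to pull in a whole folder rather than individual files, a wildcard include works too. A sketch, assuming a hypothetical &lt;code&gt;Shared&lt;/code&gt; folder (the paths are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&amp;lt;ItemGroup&amp;gt;
    &amp;lt;!-- Recursive glob: compile every .cs file under ../Shared --&amp;gt;
    &amp;lt;Compile Include="../Shared/**/*.cs" Link="Shared\%(RecursiveDir)%(Filename)%(Extension)" /&amp;gt;
&amp;lt;/ItemGroup&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;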

&lt;h2&gt;
  
  
  My use case
&lt;/h2&gt;

&lt;p&gt;Recently I have been working with a development team building an API for a service that will have both a Web UI and a Mobile App calling it. Due to the two different client types, we have implemented two different authentication options for the API, enforced using policies in APIM. One is app-to-app, and one is a B2C route for the end users of the mobile app. This worked great until we got a requirement for both the web and the mobile app to use the same endpoint of the API. Due to the way the policies work, this could not be implemented without duplicating the endpoint within the same API.&lt;/p&gt;

&lt;p&gt;To solve this, we chose to split the API into two projects, each with its own APIM policy enforcing a different authentication scheme.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;//File: Functions.B2C.csproj
&lt;span class="nt"&gt;&amp;lt;ItemGroup&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;Compile&lt;/span&gt; &lt;span class="na"&gt;Include=&lt;/span&gt;&lt;span class="s"&gt;"../Functions/SendEmailFunction.cs"&lt;/span&gt; &lt;span class="na"&gt;Link=&lt;/span&gt;&lt;span class="s"&gt;"Functions\SendEmailFunction.cs&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;/ItemGroup

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, you can see that within our new B2C project we are including a function from the original Function App.&lt;/p&gt;

&lt;h2&gt;
  
  
  Link Property
&lt;/h2&gt;

&lt;p&gt;The additional 'Link' property allows us to set the symbolic link location, controlling how the file displays within Visual Studio. It is not required, but it is preferred for cleanliness when you are looking at the project within an IDE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe9ybnk6yxd22ihemyk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe9ybnk6yxd22ihemyk1.png" alt="alt text" width="331" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see above, the project within Visual Studio now shows our function, even though it lives in a different project, and it is clearly identifiable as a link from the icon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final implementation
&lt;/h2&gt;

&lt;p&gt;The final implementation that we settled on split this out even further, ending up with the following projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Functions.common

&lt;ul&gt;
&lt;li&gt;Contains all of the shared code for the functions including the actual function definitions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Functions.API

&lt;ul&gt;
&lt;li&gt;Has no direct code, using compile links to include a subset of functions&lt;/li&gt;
&lt;li&gt;Function a&lt;/li&gt;
&lt;li&gt;Function b&lt;/li&gt;
&lt;li&gt;Function c&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Functions.B2C

&lt;ul&gt;
&lt;li&gt;Has no direct code, using compile links to include a subset of functions&lt;/li&gt;
&lt;li&gt;Function c&lt;/li&gt;
&lt;li&gt;Function d&lt;/li&gt;
&lt;li&gt;Function e&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;As you can see we are now able to share functions between the two hosting applications without the need to duplicate actual code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Developers should always aim to create DRY solutions wherever possible. This functionality helps with that, especially in more specialised or specific use cases that are not suited to creating an installable package. Packages are great when the code will be shared with other solutions, but when the code is only used within the same solution, they often create more complication than is really required.&lt;/p&gt;

&lt;p&gt;Above, we created a shared set of functions and then deployed two different Function Apps, each using its own specified subset of those functions, creating two individual APIs without having to repeat ourselves.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>development</category>
      <category>codesharing</category>
      <category>bestpractices</category>
    </item>
    <item>
      <title>Feature Flags - What and Why</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Tue, 19 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/feature-flags-what-and-why-314e</link>
      <guid>https://forem.com/mthomas564/feature-flags-what-and-why-314e</guid>
      <description>&lt;h2&gt;
  
  
  What are feature flags? Why should you use them?
&lt;/h2&gt;

&lt;p&gt;Firstly, what are they? Feature flags are a method of enabling and disabling features in an application without having to redeploy it. They can be used for simple features, like whether or not to display some content, or for more complex ones, such as enabling or disabling an AI help chat.&lt;/p&gt;

&lt;p&gt;Secondly, why should you use them? The main reason is being able to disable a feature without having to redeploy your application, which could potentially cause downtime. You might need to disable it because it doesn't work, or because you weren't ready to release it. That last part is incredibly beneficial in my opinion. If you are working in an agile way, you want to complete PRs as soon as they are done, which means code lands in the main branch. But what if that feature isn't fully ready or tested? Without feature flags, you'd be stuck, unable to release from that branch. With flags, you can simply disable that feature and keep up a steady release cycle.&lt;/p&gt;

&lt;p&gt;For example, I recently implemented feature flagging in a Blazor project I was helping with. We needed to be able to publish to all of our environments, critically production, but in production we could only show a holding page for the time being. This was previously implemented as a separate branch, with the deployment pipeline run from that branch. Instead, we now have a simple feature flag that changes which pages are rendered to enable this control.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to implement feature flags
&lt;/h2&gt;

&lt;p&gt;There are several ways to implement feature flags, from simple boolean checks against config values to using a third-party service. How you implement them depends on a number of things, including the team, the infrastructure, and the budget.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple Boolean Checks
&lt;/h2&gt;

&lt;p&gt;The simplest way to implement feature flags is to use a boolean check in your code. This can be done by using a configuration file or environment variable to store the flag state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Configuration&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"FeatureFlags:NewFeature"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Execute new feature code&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Execute old feature code&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty simple, right? And this can be repeated anywhere, especially if you use dependency injection to make the configuration available across the codebase. It does have its drawbacks, though. Firstly, you need access to the hosting environment to change the configuration value, for example access to the Azure App Service hosting the application. Secondly, if you are using Azure App Service, these values are held as environment variables, and changing them restarts the application, so we lose the key benefit of avoiding downtime.&lt;/p&gt;
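
&lt;p&gt;For completeness, the configuration shape that check assumes would look something like this in appsettings.json (the flag name matches the snippet above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "FeatureFlags": {
    "NewFeature": "true"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;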

&lt;h2&gt;
  
  
  Azure App Configuration
&lt;/h2&gt;

&lt;p&gt;Okay, as an Azure Architect, I obviously have a bias toward Azure options, and therefore have to mention App Configuration. This is a really well-priced Azure service which can be used with a number of different SDKs and application types. One of the key benefits is that it can watch for config changes and update without having to restart the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddAzureAppConfiguration&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Host&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ConfigureAppConfiguration&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;hostingContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddAzureAppConfiguration&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"ConnectionStrings:AppConfig"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
               &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;UseFeatureFlags&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
               &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ConfigureRefresh&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;refreshOptions&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;{&lt;/span&gt;
                   &lt;span class="n"&gt;refreshOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"FeatureFlags:Sentinel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;refreshAll&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                                 &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SetCacheExpiration&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TimeSpan&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromSeconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;30&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
               &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Add feature management&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddFeatureManagement&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is a really powerful tool with a built-in feature flag system that is simple to enable. You can then add a feature gate to the code, for example in a controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Microsoft.AspNetCore.Mvc&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Microsoft.FeatureManagement&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;MyApi.Controllers&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;ApiController&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;Route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"api/[controller]"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BetaController&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ControllerBase&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="n"&gt;IFeatureManager&lt;/span&gt; &lt;span class="n"&gt;_featureManager&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;BetaController&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IFeatureManager&lt;/span&gt; &lt;span class="n"&gt;featureManager&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_featureManager&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;featureManager&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;HttpGet&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IActionResult&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_featureManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;IsEnabledAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"CoolNewFeature"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Beta feature is ENABLED"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;NotFound&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Beta feature is DISABLED"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see in the example, the GET method will only respond with an OK if the feature is enabled in App Configuration.&lt;/p&gt;
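
&lt;p&gt;Under the hood, App Configuration stores each flag as a key-value pair with the key prefix &lt;code&gt;.appconfig.featureflag/&lt;/code&gt; and a JSON value. The flag used above, with no filters, is stored roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "id": "CoolNewFeature",
  "description": "",
  "enabled": true,
  "conditions": {
    "client_filters": []
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;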

&lt;p&gt;Using Azure App Configuration is perfect if you are already using Azure services. It allows you to make these configuration changes without having to restart the application. It also allows your platform team to limit the access that people need to be able to change these. Instead of a user having to have contributor rights on the whole App Service, they can just be given rights on the App Configuration. Yes, there is still a risk of malicious activity this way, but the blast radius is smaller.&lt;/p&gt;

&lt;h2&gt;
  
  
  Third Party Services
&lt;/h2&gt;

&lt;p&gt;There are several third-party services that can be used to manage and implement feature flags. Using a more platform-agnostic service can be beneficial, especially if you are multi-cloud, or still hosting on-premises. These services would allow you to use the same management pane and implementation regardless of the hosting environment. They do come with additional costs which would need to be factored into any pricing for a project.&lt;/p&gt;

&lt;p&gt;One popular option is &lt;a href="https://launchdarkly.com/" rel="noopener noreferrer"&gt;LaunchDarkly&lt;/a&gt;. I won't go into detail about how to implement this as it's well documented on their site, for example for .NET &lt;a href="https://docs.launchdarkly.com/sdk/server-side/dotnet" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Something I do like about LaunchDarkly is the really clean and simple UI that is provided for managing the flags. I can see this as being great for being able to delegate the control of the flags to a wider team, such as product owners.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Whilst I have not been in a position personally to use a service such as LaunchDarkly, I can definitely see the benefits of using an independent service like this. You get all of the flexibility but with a wider range of supported environments. Now, that isn't to say you couldn't, for example, use Azure App Configuration with an application hosted on-premises, but would you be paying for Azure if you are hosting on-premises?&lt;/p&gt;

&lt;p&gt;Implementing feature flags is a must-have in my opinion for any project, especially one working in an agile way. No matter how you choose to implement them, they will be beneficial to the application. Yes, there is some additional work upfront to set it up, and you have to make sure they are implemented for each feature needing one, but the benefits far outweigh the costs.&lt;/p&gt;

</description>
      <category>featureflags</category>
      <category>softwaredevelopment</category>
      <category>devops</category>
      <category>deployment</category>
    </item>
    <item>
      <title>Troubleshooting Entra External Identities B2C OTP Emails</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Tue, 12 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/troubleshooting-entra-external-identities-b2c-otp-emails-1mca</link>
      <guid>https://forem.com/mthomas564/troubleshooting-entra-external-identities-b2c-otp-emails-1mca</guid>
      <description>&lt;p&gt;Microsoft have released a new feature to the Identity Platform to allow you to configure &lt;a href="https://learn.microsoft.com/en-us/entra/identity-platform/custom-extension-email-otp-get-started?tabs=azure-communication-services%2Cazure-portal" rel="noopener noreferrer"&gt;Configure a custom email provider for one-time passcode events&lt;/a&gt; within a custom authentication extension for an external tenant.&lt;/p&gt;

&lt;p&gt;At the time when we started to implement this feature, it was still in preview, meaning we had no blogs like this to help. We ended up having to raise a support ticket with Microsoft to assist in getting this resolved.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is not a full guide on setting this up. If you would like one, the Microsoft documentation above does explain all of the steps.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Our setup for this is a workforce tenant with Front Door, APIM and the API, and then an external tenant which is set up with the custom authentication extension. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fken5fnm3vk4q07syv2f5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fken5fnm3vk4q07syv2f5.png" alt="Setup Diagram" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Was The Problem?
&lt;/h2&gt;

&lt;p&gt;The error message that we received when trying to test the user sign-in/sign-up flow was the following: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhc2oyevumo6smktzjs6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhc2oyevumo6smktzjs6g.png" alt="Error message" width="462" height="374"&gt;&lt;/a&gt; &lt;code&gt;There was an issue looking up your account. Tap Next to try again.&lt;/code&gt; Honestly, this is one of the most misleading error messages I have come across, and it left us chasing our tails for quite some time. It occurs whether or not the account already exists, so it has nothing to do with failing to look up the account.&lt;/p&gt;

&lt;p&gt;Several hours of checking all of the configurations in B2C later, I decided to move on and check the other side, the Workforce Tenant.&lt;/p&gt;

&lt;p&gt;From the diagram above, the first thing to check is Azure Front Door (AFD) and the Web Application Firewall (WAF) that is configured alongside that. Let's have a look at the logs: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38m44wlqlix9ve4an5hx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38m44wlqlix9ve4an5hx.png" alt="WAF Logs" width="498" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our setup, the WAF has an IP Restriction rule on it to only allow traffic from certain IP addresses, mainly our corporate VPN. This is to help control the access to the development and test environments. The WAF logs are showing that the request is being blocked by the IP filtering rule. Entra has a lot of IPs that the traffic could originate from, so to get around this I updated the rule to not block if the request matched our OTP endpoint.&lt;/p&gt;

&lt;p&gt;From here, another test showed that the request was now being allowed through, but we were still getting the same error message. The next step in our chain was to check API Management (APIM). Looking at the logs, I could see that the API was not returning any errors, but it was not returning the expected response either.&lt;/p&gt;

&lt;p&gt;Digging into this further, I started looking at the code itself, and found the culprit. The code, whilst a completely valid function, was not returning the expected response that Entra needed. The code was just returning a standard 200 OK for success.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;    &lt;span class="n"&gt;ResponseObject&lt;/span&gt; &lt;span class="n"&gt;responseData&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ResponseObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"microsoft.graph.OnOtpSendResponseData"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;responseData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Actions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ResponseAction&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ResponseAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="s"&gt;"microsoft.graph.OtpSend.continueWithDefaultBehavior"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;OkObjectResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;responseData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the key part from the documentation above that had been missed. It has to return the exact object and actions that the B2C flow expects.&lt;/p&gt;
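&lt;p&gt;For reference, the serialised body from the code above should come out along these lines (a sketch based on the type names in the code; verify the exact property names and casing against the Microsoft documentation linked at the top):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "data": {
    "@odata.type": "microsoft.graph.OnOtpSendResponseData",
    "actions": [
      {
        "@odata.type": "microsoft.graph.OtpSend.continueWithDefaultBehavior"
      }
    ]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A plain 200 OK without this body is treated as a failure by the flow, which is exactly what we were hitting.&lt;/p&gt;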

&lt;p&gt;Fixing that with the development team and a quick redeploy, and we were able to get the flow working as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Hopefully, this will help anyone else who is trying to set up the custom email provider for one-time passcode events in Entra B2C. If you do have similar issues, I would recommend checking all the steps in your chain one at a time. If you have WAF rules, for example, check the logs or put the WAF in detection mode so you can see the requests and work from there. Double-check the code against the documentation, and make sure to return the correct object and actions that Entra expects.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>entra</category>
      <category>b2c</category>
      <category>troubleshooting</category>
    </item>
    <item>
      <title>.NET API Versioning in Azure API Management</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Fri, 18 Jul 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/net-api-versioning-in-azure-api-management-83n</link>
      <guid>https://forem.com/mthomas564/net-api-versioning-in-azure-api-management-83n</guid>
      <description>&lt;p&gt;When writing an API in .NET, it's common to be able to have multiple versions of your API. This functionality is built into .NET, allowing you to specify different versions of your API endpoints. Azure API Management (APIM) is a great tool to create an API hub, especially when it comes to managing multiple versions of your API.&lt;/p&gt;

&lt;p&gt;However, when you want to set this up you can run into some issues, especially if you want your API to have versioning in the URL. Whilst you can handle versioning via the URL, a header, or query parameters, the URL is my preferred option.&lt;/p&gt;

&lt;p&gt;When this is set up in both the API and APIM you can end up with some URLs like the following: &lt;code&gt;https://apim.azure-api.net/api/v1/v1/value&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is pretty poor really, but why does it happen?&lt;/p&gt;

&lt;p&gt;Firstly, the OpenAPI file generated by the API has the version in the path, as you would expect, since that is how we are managing the version selection. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkphnn0le8zdfnurv6hbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkphnn0le8zdfnurv6hbm.png" alt="alt text" width="325" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then we have the version in APIM, which is required when you enable versioning on the API settings. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf1x15hsekmnog79hqsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf1x15hsekmnog79hqsw.png" alt="alt text" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How can this be fixed?&lt;/p&gt;

&lt;h2&gt;
  
  
  Update the API project
&lt;/h2&gt;

&lt;p&gt;To start with we need to make some changes in our .NET project. For reference, the project I am using to demonstrate is .NET 9 and uses the following packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;PackageReference Include="Asp.Versioning.Mvc" Version="8.1.0" /&amp;gt;
&amp;lt;PackageReference Include="Asp.Versioning.Mvc.ApiExplorer" Version="8.1.0" /&amp;gt;
&amp;lt;PackageReference Include="Microsoft.AspNetCore.OpenApi" Version="9.0.7" /&amp;gt;
&amp;lt;PackageReference Include="Microsoft.Extensions.ApiDescription.Server" Version="9.0.7"&amp;gt;
    &amp;lt;IncludeAssets&amp;gt;runtime; build; native; contentfiles; analyzers; buildtransitive&amp;lt;/IncludeAssets&amp;gt;
    &amp;lt;PrivateAssets&amp;gt;all&amp;lt;/PrivateAssets&amp;gt;
&amp;lt;/PackageReference&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have shared this project &lt;a href="https://github.com/MThomas564/APIVersionTest" rel="noopener noreferrer"&gt;here on Github&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 - Set up the Program.cs to have API Versioning
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services.AddApiVersioning(options =&amp;gt;
{
    options.DefaultApiVersion = new ApiVersion(1);
    options.AssumeDefaultVersionWhenUnspecified = true;
    options.ReportApiVersions = true;
})
.AddMvc()
.AddApiExplorer(options =&amp;gt;
{
    options.GroupNameFormat = "'v'VVV";
    options.SubstituteApiVersionInUrl = false;
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key thing to note here is &lt;code&gt;options.SubstituteApiVersionInUrl = false&lt;/code&gt;. This stops the version number being substituted into the paths of the OpenAPI file, leaving it like this: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk93dmz9627qqzvk1ecp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk93dmz9627qqzvk1ecp6.png" alt="alt text" width="338" height="92"&gt;&lt;/a&gt;&lt;/p&gt;
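&lt;p&gt;For context, with these options a versioned controller declares its route along these lines (an illustrative sketch; the controller and route here are hypothetical, see the GitHub project linked below for the real code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
[ApiVersion(1)]
// With SubstituteApiVersionInUrl = false, the v{version} template
// stays in the spec paths instead of being replaced with "v1".
[Route("api/v{version:apiVersion}/[controller]")]
public class ValueController : ControllerBase
{
    [HttpGet]
    public IActionResult Get() =&amp;gt; Ok("value from v1");
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;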

&lt;h3&gt;
  
  
  Step 2 - Create an OpenApiDocumentTransformer
&lt;/h3&gt;

&lt;p&gt;A document transformer can be used to customise the output which OpenAPI generates, allowing both the removal of properties but also the addition of custom ones if needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class OpenAPIPathTransform : IOpenApiDocumentTransformer
{
    public Task TransformAsync(OpenApiDocument document, OpenApiDocumentTransformerContext context, CancellationToken cancellationToken)
    {
        var newPaths = new OpenApiPaths();
        foreach (var path in document.Paths)
        {
            // Remove '/api/v{version}' from the path
            var newPath = Regex.Replace(path.Key, @"/api/v\{version\}", "", RegexOptions.IgnoreCase);
            var pathItem = path.Value;

            // Remove 'version' path parameter from all operations if present
            foreach (var operation in pathItem.Operations.Values)
            {
                if (operation.Parameters != null)
                {
                    operation.Parameters = operation.Parameters
                        .Where(p =&amp;gt; !(p.In == ParameterLocation.Path &amp;amp;&amp;amp; p.Name == "version"))
                        .ToList();
                }
            }

            newPaths.Add(newPath, pathItem);
        }

        document.Paths = newPaths;

        // Set the OpenAPI info.version property based on the document name
        if (!string.IsNullOrEmpty(context.DocumentName))
        {
            document.Info.Version = context.DocumentName;
        }
        return Task.CompletedTask;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This has a few parts to it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove the API version from the path.&lt;/li&gt;
&lt;li&gt;Remove the version parameter from the operations, otherwise the spec is invalid.&lt;/li&gt;
&lt;li&gt;Set the version property to have the version from the document name that we will set in the Program.cs&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3 - Register the OpenAPI Generation and document transformer in the Program.cs
&lt;/h3&gt;

&lt;p&gt;Next we need to register the fact that we want to generate OpenAPI files for the API, and the versions these are for. With that we register the document transformer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services.AddOpenApi("v1", options =&amp;gt;
{
    options.AddDocumentTransformer&amp;lt;OpenAPIPathTransform&amp;gt;();
});
builder.Services.AddOpenApi("v2", options =&amp;gt;
{
    options.AddDocumentTransformer&amp;lt;OpenAPIPathTransform&amp;gt;();
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we set our two versions, and register the document transformer so it updates the spec file to have the properties we want, and, importantly, the path.&lt;/p&gt;

&lt;p&gt;Now our API spec file looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmw9tyqohg8vh1auhi05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmw9tyqohg8vh1auhi05.png" alt="alt text" width="301" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Update APIM
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;I am not covering the full steps of setting up an API in APIM here, just the steps important to what I am demonstrating.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 - Create a new API in APIM
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6pibcamue98qhscwygq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6pibcamue98qhscwygq.png" alt="alt text" width="765" height="749"&gt;&lt;/a&gt;In this example I am creating the API from an OpenAPI spec file that I generated locally from the project.&lt;/p&gt;

&lt;p&gt;The key settings for this project are the &lt;code&gt;API URL suffix&lt;/code&gt;, which is set to &lt;code&gt;api&lt;/code&gt;, and enabling versioning with the correct version identifier and a versioning scheme of path.&lt;/p&gt;

&lt;p&gt;When we look at the test API for one of our endpoints we see a request URL of &lt;code&gt;https://apim.azure-api.net/api/v1/Value&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Perfect, right? Not quite...&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 - Update the policy
&lt;/h3&gt;

&lt;p&gt;Whilst the request here looks right, the &lt;code&gt;/api/v1&lt;/code&gt; only really relates to the routing within APIM itself. We need to update the backend call to include this too so we get to the right endpoints, and the right version.&lt;/p&gt;

&lt;p&gt;This is the inbound portion of my APIM policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;inbound&amp;gt;
    &amp;lt;base /&amp;gt;
    &amp;lt;set-backend-service backend-id="ap-apimtest" /&amp;gt;
    &amp;lt;rewrite-uri template="@{ return "/api/v1/" + context.Operation.UrlTemplate; }" /&amp;gt;
&amp;lt;/inbound&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This first sets the backend to point at my API backend. It then rewrites the URL used to call the backend so that it includes &lt;code&gt;/api/v1&lt;/code&gt; as we need, and appends the operation being called so the request reaches the right endpoint.&lt;/p&gt;

&lt;h2&gt;
  
  
  That's it!
&lt;/h2&gt;

&lt;p&gt;There are quite a few steps to doing it this way, and there may well be other ways of achieving the same result. That said, I have not found any good documentation for this kind of setup, specifically with the versioning in the URL.&lt;/p&gt;

&lt;p&gt;Please reach out if you have other ways to do it!&lt;/p&gt;



</description>
    </item>
    <item>
      <title>Function App Vs Web App for APIs</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Tue, 08 Jul 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/function-app-vs-web-app-for-apis-4ffb</link>
      <guid>https://forem.com/mthomas564/function-app-vs-web-app-for-apis-4ffb</guid>
      <description>&lt;p&gt;When it comes to hosting an API in Azure, there are a couple of options. The two main ones are &lt;strong&gt;Azure Function Apps&lt;/strong&gt; and &lt;strong&gt;Azure Web Apps&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When I've looked at comparisons of the two online before, they often conclude that Function Apps are the way forward; however, I don't think that is always the case.&lt;/p&gt;

&lt;p&gt;If you want to run a lightweight service that is event-driven, which importantly can include HTTP requests, then yes, a Function App might be a good choice. But what if you want to run a fully featured RESTful API? Then a Function App might not be the best choice.&lt;/p&gt;

&lt;p&gt;I'm currently looking into this with some colleagues around what is the best approach for a new .NET API project. The project is essentially an internal search index which will, for the new iteration, likely be using a SQL database backend.&lt;/p&gt;

&lt;p&gt;Here are its requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Be cost-effective.&lt;/li&gt;
&lt;li&gt;Be hosted in Azure.&lt;/li&gt;
&lt;li&gt;Be written in .NET.&lt;/li&gt;
&lt;li&gt;Serve the information over an HTTP endpoint to allow a front end to consume it.&lt;/li&gt;
&lt;li&gt;Be able to have IP restrictions, and it will be presented through Azure API Management.&lt;/li&gt;
&lt;li&gt;Have private connectivity to the SQL database.&lt;/li&gt;
&lt;li&gt;Be scalable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Azure Function Apps are great for lightweight, event-driven tasks and can be cost-effective on a consumption plan.&lt;/li&gt;
&lt;li&gt;Function Apps lack native controller support, making full REST APIs harder to build and maintain.&lt;/li&gt;
&lt;li&gt;Swagger and model binding support in Function Apps is limited and requires workarounds.&lt;/li&gt;
&lt;li&gt;Azure Web Apps support standard MVC controllers, middleware, and seamless Swagger integration.&lt;/li&gt;
&lt;li&gt;Web Apps offer better local development, debugging, and native authentication features.&lt;/li&gt;
&lt;li&gt;Web Apps provide predictable scaling and flexible networking, suitable for complex APIs.&lt;/li&gt;
&lt;li&gt;For fully featured, scalable APIs, Web Apps are generally the better choice; Function Apps suit smaller, event-driven workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  So why not just use a Function App?
&lt;/h2&gt;

&lt;p&gt;Okay, so to start with, yes, a Function App can be a lot cheaper, &lt;strong&gt;IF&lt;/strong&gt; you can run on a consumption plan. &lt;em&gt;I don't plan on explaining all of the different SKUs here, but essentially, with a consumption plan you just pay for the execution time you use.&lt;/em&gt; This is a big IF that really depends on the use case and the environment it will be used in.&lt;/p&gt;

&lt;p&gt;During the discussion about a new project, we were having this exact debate: 'Would a consumption plan Function App be a good choice?'. Looking at the metrics of the current system and the general traffic, it was looking good: flurries of requests during working hours, but not hundreds, and no usage out of hours. &lt;em&gt;Great, so consumption plan?&lt;/em&gt; Well... no. We have to think about what third-party health checks may be in use. In this instance, the organisation uses Better Stack, which hits the service every 3 minutes. This means the service will be running nearly all the time, so are we really going to see any benefit from the consumption plan? &lt;em&gt;No, not really.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to think about the networking side. We needed to have private networking options, including VNET integration, and the original consumption plan did not support this. Luckily, Microsoft have added the new flex consumption plan so that solves that problem.&lt;/p&gt;

&lt;p&gt;We also need to consider the scaling options. Function Apps have event-driven scaling, so if you have a sudden spike in traffic, it can respond quickly to handle the load. Additionally, with the new flex consumption plan you can have always-ready instances to help eliminate the cold starts.&lt;/p&gt;

&lt;p&gt;So Function Apps are starting to sound the better route, so what are the issues with what we're wanting to do?&lt;/p&gt;

&lt;p&gt;Firstly, Functions.&lt;/p&gt;

&lt;p&gt;Even when set up as HTTP triggers, they are not 'normal' API endpoints, so we don't get a standard Swagger file generated from the API. There is an old package to add this support, but it is in maintenance mode and isn't being updated; its latest V2.0 is only available as a pre-release and therefore may not be usable depending on the environment. That Swagger package also uses Newtonsoft.Json, whereas the standard Functions packages use System.Text.Json. To enable Swagger, both Newtonsoft.Json and System.Text.Json would have to be added, leading to duplicated attributes on properties.&lt;/p&gt;
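&lt;p&gt;To illustrate the duplication (a hypothetical model, not from a real project), every serialised property ends up needing an attribute for each serialiser:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class SearchResult
{
    // Newtonsoft.Json attribute for the Swagger package...
    [JsonProperty("id")]
    // ...and the System.Text.Json attribute for the Functions runtime.
    [JsonPropertyName("id")]
    public string Id { get; set; }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;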

&lt;p&gt;Secondly, and arguably a bigger issue, is the lack of the ControllerBase. This prevents you being able to set up controllers within your API. On top of that, the Function middleware does not handle the FromBody attribute—if used, you will simply get a null body. You obviously can develop a workaround and create your own packages to add the required support. This is what my colleagues have done to get the functionality by creating a FunctionBase, JsonBodyAttribute, and a JsonBodyExceptionMiddleware.&lt;/p&gt;

&lt;p&gt;Lastly, without proper controllers, you're left with ever-growing function files, where each endpoint required for the API becomes its own function. This is overly cumbersome and difficult to manage. We would also end up with a lot of repeated code in the set-up of each function, meaning it does not follow the DRY principle, one of the core principles of software development.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Honestly, can anyone say that this is a great idea long term?&lt;/em&gt; Personally, I don't think it is. You are creating a heavy reliance on packages that you as an organisation or team have to maintain, increasing your maintenance overhead and likely creating technical debt that, as all developers know deep down, will never be given the time to fix.&lt;/p&gt;

&lt;h2&gt;
  
  
  Well, what about a Web App instead? (my preferred option)
&lt;/h2&gt;

&lt;p&gt;We can get the same networking options with VNET integration and private endpoints—as long as we use the right App Service Plan.&lt;/p&gt;

&lt;p&gt;We also get scaling options, although metric-driven rather than event-driven. They do work as we would need them to, but metric-driven scaling is generally slower to react. From personal experience it works fine; not as fast, but still quick enough. Tip: you'll likely want to set the rules slightly lower so you scale slightly earlier than strictly required, as this helps avoid the potential delay.&lt;/p&gt;
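&lt;p&gt;As an example of that tip (resource names here are placeholders), a metric-driven rule set with a slightly lower scale-out threshold might look like this with the Azure CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an autoscale setting on the App Service Plan (1-4 instances).
az monitor autoscale create \
  --resource-group &amp;lt;rg&amp;gt; --name api-autoscale \
  --resource &amp;lt;plan-name&amp;gt; --resource-type Microsoft.Web/serverfarms \
  --min-count 1 --max-count 4 --count 1

# Scale out a little early (60% CPU rather than, say, 75%).
az monitor autoscale rule create \
  --resource-group &amp;lt;rg&amp;gt; --autoscale-name api-autoscale \
  --condition "CpuPercentage &amp;gt; 60 avg 5m" --scale out 1

# Scale back in once things have been quiet for a while.
az monitor autoscale rule create \
  --resource-group &amp;lt;rg&amp;gt; --autoscale-name api-autoscale \
  --condition "CpuPercentage &amp;lt; 30 avg 10m" --scale in 1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;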

&lt;p&gt;My favourite benefit is the ability to develop the API in a more standard, and arguably more maintainable way. This allows us to develop an arguably more appropriate RESTful API. We can use standard controllers to handle requests, which also enables automatic Swagger generation. We can follow the best practices, using attributes and middleware as well as being able to follow standard MVC patterns. &lt;em&gt;Great—no workarounds required!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Local development is easier for developing a controller-based API. We get proper native support within IDEs such as Visual Studio or Rider. We can run the API locally with functionality like hot reload and without the need for workarounds for Function dependencies.&lt;/p&gt;

&lt;p&gt;We also have the added flexibility to easily deploy the system to another hosting provider, be that on-premises or another cloud provider. Debugging is also easier and more intuitive as we can get more detailed stack traces and logging.&lt;/p&gt;

&lt;p&gt;Whilst we can build middleware for hosting on a Function App, a lot of this is ready to go with a standard web API. Things such as exception handling and properly bubbling up errors for an application layer are easier to handle.&lt;/p&gt;

&lt;p&gt;We also have better options for adding authentication and authorisation to the API. For example, we can use the native &lt;code&gt;[Authorize]&lt;/code&gt; attribute to protect endpoints. We can't do this on a Function App; we could have used &lt;code&gt;AuthorizationLevel&lt;/code&gt;, but this does not give us the same functionality as on a web app. In a web app we have native support for authentication schemes such as OAuth2, OpenID Connect and JWT bearer tokens, including support for multiple authentication providers, which for us primarily means Microsoft Entra.&lt;/p&gt;
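&lt;p&gt;As a minimal sketch of that (assuming the Microsoft.Identity.Web package and an "AzureAd" configuration section, which are not from the project above), protecting an endpoint looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Program.cs: JWT bearer authentication against Entra.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"));

// Controller: the native attribute protects the endpoint.
[Authorize]
[HttpGet]
public IActionResult GetSecureValue() =&amp;gt; Ok("only for authenticated callers");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;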

&lt;h2&gt;
  
  
  So what does the cost really look like?
&lt;/h2&gt;

&lt;p&gt;Any of the costs suggested are based on the basic settings on the pricing calculator and are correct as of the day of writing. Based on a Windows-hosted application with premium functionality, and currently ignoring the new flex consumption plan, we have...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Function App: EP1 SKU - £122.89&lt;/li&gt;
&lt;li&gt;App Service: P0V3 SKU - £88.50&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Okay, so on the face of it the App Service is cheaper, right? Then you've got to think about the flex consumption plan. The pricing here is not yet on the calculator, and is honestly a bit harder to calculate. Firstly: "Flex consumption plan pricing includes a monthly free grant of 250,000 executions and 100,000 GB-s of resource consumption per month per subscription in pay-as-you-go on-demand pricing across all function apps in that subscription." (&lt;a href="https://azure.microsoft.com/en-gb/pricing/details/functions/" rel="noopener noreferrer"&gt;Azure Functions Pricing&lt;/a&gt;) So you'd get a reasonable amount of free hosting. You do, however, have to take into account the storage account needed for hosting the function, as this is not included, and any always-ready instances will use up the free quota.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to choose?
&lt;/h2&gt;

&lt;p&gt;You can probably tell my preference, but when it comes to choosing between the two options you have to consider a few things and, realistically, a choice and standard should be set across either a development team or the organisation. It is not a good idea to have a mix of the two within a team—but I'll talk about that another day.&lt;/p&gt;

&lt;p&gt;Nine times out of ten, if you are building a fully fledged API then going for a Web App and a standard RESTful .NET API is the way to go. It provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Well-structured and maintainable code.&lt;/li&gt;
&lt;li&gt;Great implementation of controllers with middleware and Swagger support.&lt;/li&gt;
&lt;li&gt;Simpler local development workflow.&lt;/li&gt;
&lt;li&gt;More portable and flexible codebase.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Function Apps do have their place; they are great for lightweight, event-driven implementations. This could include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Timer or event-triggered tasks.&lt;/li&gt;
&lt;li&gt;Background processing tasks.&lt;/li&gt;
&lt;li&gt;Webhooks or lightweight API endpoints.&lt;/li&gt;
&lt;li&gt;Offloading asynchronous tasks from a main application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So back to the debate: 'Can we build entire APIs in a Function App?' Yes. Should we? Realistically, no. If your team or organisation has a current standard of Function Apps for APIs then it is likely worth a discussion like we are having.&lt;/p&gt;

&lt;p&gt;My suggestion for choosing, from an admittedly biased viewpoint: your mileage may vary, and every use case is different. I would always recommend a Web App for a full API, but you should always consider the specific requirements of the project. Function Apps may be lightweight and may seem 'cheaper', but if you are looking to build a fully featured API or trying to future-proof, then a Web App is likely the better choice.&lt;/p&gt;

&lt;p&gt;I'd be interested to hear your thoughts, agree or disagree? Feel free to reach out on LinkedIn and let me know!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Versioned .NET APIs with Swagger UI</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/versioned-net-apis-with-swagger-ui-2m09</link>
      <guid>https://forem.com/mthomas564/versioned-net-apis-with-swagger-ui-2m09</guid>
      <description>&lt;p&gt;Creating a follow up to my blog on &lt;a href="https://dev.to/blog/dotnet-api-versioning-apim"&gt;.NET API Versioning in Azure API Management&lt;/a&gt;, I wanted to cover how to set up Swagger UI for versioned APIs in .NET.&lt;/p&gt;

&lt;p&gt;When developing an API and working locally, being able to have an interface to interact with the API is incredibly useful. There are quite a few options that can be used, but &lt;a href="https://swagger.io/tools/swagger-ui/" rel="noopener noreferrer"&gt;Swagger UI&lt;/a&gt; is one of the most popular.&lt;/p&gt;

&lt;p&gt;If you set up versioning on the API without trimming and editing paths, then it's very likely to just work as it should. However, if you've followed the steps in my previous post, you've got customised OpenAPI spec files which don't really follow the standard Swagger expectations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Swagger UI with Custom Versioning
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1 - Install Swagger UI
&lt;/h3&gt;

&lt;p&gt;The first thing to do is to install Swagger UI in the project. The default way to do this would be to install the whole Swashbuckle package; however, as we are using the Microsoft.AspNetCore.OpenApi packages, having both prevents the spec files from being generated.&lt;/p&gt;

&lt;p&gt;Instead, install just the Swagger UI package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet add package Microsoft.AspNetCore.OpenApi.SwaggerUI

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2 - Enable Swagger UI in Program.cs
&lt;/h3&gt;

&lt;p&gt;Now that we have installed the package, it needs to be configured in the &lt;code&gt;Program.cs&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;IsDevelopment&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MapOpenApi&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;UseSwaggerUI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SwaggerEndpoint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/openapi/v1.json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"API Version 1"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SwaggerEndpoint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/openapi/v2.json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"API Version 2"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RoutePrefix&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Empty&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Serve Swagger at the app's root&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first thing to note here is making sure this is wrapped in the &lt;code&gt;if (app.Environment.IsDevelopment())&lt;/code&gt; block. This is to make sure we don't expose the Swagger UI or OpenAPI specs when the application is running in production.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;app.MapOpenApi()&lt;/code&gt; call is used to map the OpenAPI endpoints, making sure they are available for the Swagger UI to consume.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;app.UseSwaggerUI()&lt;/code&gt; call is where we configure the Swagger UI. The &lt;code&gt;options.SwaggerEndpoint()&lt;/code&gt; method is used to specify the OpenAPI spec files that the Swagger UI will use. In this case, it is specified twice as there are two versions of the API; a new line will need to be added for each new version.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;options.RoutePrefix = string.Empty;&lt;/code&gt; line serves the Swagger UI at the root of the application; this is also often set to &lt;code&gt;/swagger&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 - Test Swagger UI
&lt;/h3&gt;

&lt;p&gt;Now that we've enabled Swagger UI, we should be able to run the application and see our versions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb5iiluqnuwo969oqrwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb5iiluqnuwo969oqrwx.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Perfect, the versions are there as we expected.&lt;/p&gt;

&lt;p&gt;The problem is that when we try to test the API, we get a 404 because the UI is trying to access the API at the wrong path. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjs802vurzrfnjbg09p7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjs802vurzrfnjbg09p7.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4 - Update Swagger UI to Use Correct Paths
&lt;/h3&gt;

&lt;p&gt;To fix this, we need to update our document transformer to set the correct server paths on each spec. The Swagger UI then has the correct base path and is able to call the API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Servers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Clear&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Servers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;OpenApiServer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;Url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;$"/api/&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DocumentName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This clears the servers set by default and overrides them with corrected ones to fit the version paths. This is done based on the document name, which in the example project is the version, e.g. &lt;code&gt;v1&lt;/code&gt; or &lt;code&gt;v2&lt;/code&gt;. To see the full file, check out &lt;a href="https://github.com/MThomas564/APIVersionTest/blob/main/OpenAPIPathTransform.cs" rel="noopener noreferrer"&gt;OpenAPIPathTransform.cs&lt;/a&gt;.&lt;/p&gt;
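&lt;p&gt;For reference, the transformer itself is only a few lines. A minimal sketch, assuming the .NET 9 &lt;code&gt;IOpenApiDocumentTransformer&lt;/code&gt; interface; the class name here is illustrative, so see the linked file for the real implementation:&lt;/p&gt;

```csharp
using Microsoft.AspNetCore.OpenApi;
using Microsoft.OpenApi.Models;

// Rewrites the server list so each versioned document (e.g. "v1", "v2")
// points at its own base path.
public sealed class OpenApiPathTransform : IOpenApiDocumentTransformer
{
    public Task TransformAsync(
        OpenApiDocument document,
        OpenApiDocumentTransformerContext context,
        CancellationToken cancellationToken)
    {
        // Drop the default servers and add one rooted at the version path,
        // derived from the document name.
        document.Servers.Clear();
        document.Servers.Add(new OpenApiServer { Url = $"/api/{context.DocumentName}" });
        return Task.CompletedTask;
    }
}
```

&lt;p&gt;The transformer is then registered per document, for example with &lt;code&gt;builder.Services.AddOpenApi("v1", options =&amp;gt; options.AddDocumentTransformer&amp;lt;OpenApiPathTransform&amp;gt;());&lt;/code&gt;.&lt;/p&gt;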

&lt;h3&gt;
  
  
  Step 5 - Test the Swagger UI Again
&lt;/h3&gt;

&lt;p&gt;Now that we've updated the Swagger UI to use the correct paths, we can test it again. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9fhr0ya9oqru6h1e702.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9fhr0ya9oqru6h1e702.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0489n4vb8f84tltyn8v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0489n4vb8f84tltyn8v0.png" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As simple as that, we have a functional Swagger UI that supports our versioned API.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>development</category>
      <category>api</category>
      <category>apim</category>
    </item>
    <item>
      <title>Writing A Code Review</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Fri, 20 Jun 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/writing-a-code-review-40fc</link>
      <guid>https://forem.com/mthomas564/writing-a-code-review-40fc</guid>
      <description>&lt;p&gt;Writing a code review for a large project was something that I had never done before and was quite a scary task to take on for the first time. They are a great thing to have done for a project, and help to maintain high standards and well developed solutions.&lt;/p&gt;

&lt;p&gt;Having just left a development team, and not having had any input into this project's codebase, I was the ideal candidate to write the code review. But I'd never written one before and didn't really know where to start. The first thing I did, as anyone would, was Google it. What did I find? Honestly, not very much for what I wanted and needed to review. Following the idea of open learning, I decided it was time to write up what you could call my opinion on how to write a code review. It isn't going to be a perfect fit for every requirement, but hopefully it helps fill the gap I found when trying to work out how to do it.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;If you need to write a code review for a large project and don't know where to start:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand the project and tech stack first.&lt;/li&gt;
&lt;li&gt;Structure your report clearly: Introduction, Methodology, Findings, and Recommendations.&lt;/li&gt;
&lt;li&gt;Review repositories for structure, secrets, pipelines, and documentation.&lt;/li&gt;
&lt;li&gt;Check code quality: readability, maintainability, error handling, and standards.&lt;/li&gt;
&lt;li&gt;Pay attention to security for both frontend and backend.&lt;/li&gt;
&lt;li&gt;Review testing strategy and coverage.&lt;/li&gt;
&lt;li&gt;Make actionable recommendations, prioritised by severity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understand the Project
&lt;/h2&gt;

&lt;p&gt;Firstly, you need to understand the project: what is it, what is it for, and why is it being built? What requirements does it need to meet? Understanding this context will help you understand some of the decisions which have been made.&lt;br&gt;&lt;br&gt;
You also need to understand the technology that has been used, how it has been architected, and whether there are any design patterns that have been followed. Knowing this will help you review the code and project against the right best practices and standards.&lt;/p&gt;

&lt;p&gt;The project that I was reviewing was a healthcare web portal that consisted of a frontend built in Angular, a .NET backend API and a SQL Server database. I can't really share the detail about what it was as I don't want to give anything away! But essentially it would allow a certain set of healthcare professionals to see records relating to a set of patients.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Structure the Report
&lt;/h2&gt;

&lt;p&gt;After carrying out the review you need to be able to present your findings in a concise way, otherwise what is the point of doing the review? If you can't clearly communicate what you have found, the issues will never be fixed! This is the structure that I followed for the report I wrote:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Introduction&lt;/strong&gt; : Give a brief description of the project and the scope of the review you are writing. The introduction should also include a summary of the recommendations and findings. This provides a quick and easy context into the report.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Methodology&lt;/strong&gt; : Describe the approach taken to review the code, including any tools or techniques used. I broke this down into each repository that I was reviewing and what each one would be reviewed for.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Findings&lt;/strong&gt; : Present the findings of the code review, including any issues or concerns identified. This is the main body of the report and should be structured in a way that makes it easy to follow. I broke this down into a section for each repository, as per the methodology, and then further into sections for each area of the codebase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recommendations&lt;/strong&gt; : Provide recommendations for addressing the issues identified in the findings. This should be presented in a way that would allow the audience of the report to understand what you found, why it's a problem and what to do about it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How to do the Review
&lt;/h2&gt;

&lt;p&gt;Now we know how to structure the report, how do we actually carry out the review? This is the part that I found the least information about overall. Of course there are some very clear areas that are well documented, but some can be so opinion-based that what you find online doesn't really help.&lt;/p&gt;

&lt;p&gt;As this project had three repositories, some things that I did were the same for all three, some however are specific to a given repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Repository
&lt;/h3&gt;

&lt;p&gt;The first thing I did, which was the same for all repositories, was to review the repository itself for a few elements. During this review I made sure to look at the frameworks being used before anything else. Are they the latest version? For example, in this instance the API was using .NET 8.0, which, whilst still in long-term support at the time, was not the latest version; .NET 9.0 had been released.&lt;/p&gt;

&lt;h4&gt;
  
  
  Files and Folders
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Structure&lt;/strong&gt; : Does the repository have a clear and logical layout? If you can't find files in a repository it's not much use. Equally, are files given descriptive names, so you don't have to open each file to find out what it is?

&lt;ul&gt;
&lt;li&gt;For example, I found a few files for Azure DevOps pipelines that were named &lt;code&gt;azure-pipelines.yml&lt;/code&gt;, which is OK; however, if you have multiple files named the same with &lt;code&gt;-1&lt;/code&gt;, &lt;code&gt;-2&lt;/code&gt; suffixes, you have no idea what they are!&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Documentation&lt;/strong&gt; : Is there a README file that provides an overview of the project and how to get started? This file can also serve as a guide to the repository, especially if it is following a specific template.

&lt;ul&gt;
&lt;li&gt;For example, every repository in this project had the default README for either the framework or in one case the standard Azure DevOps template.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Sensitive Data
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Secrets&lt;/strong&gt; : Have any secrets, keys or passwords been added to the repository? If they have been checked in, they should be removed, and the compromised values then need to be changed at source.

&lt;ul&gt;
&lt;li&gt;For example, I found that some passwords had been checked in to one of the repositories within a script.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Configuration&lt;/strong&gt; : Are there any configuration files that contain sensitive information? Sensitive values shouldn't be stored this way; they should be kept in a secure location.

&lt;ul&gt;
&lt;li&gt;Has an &lt;code&gt;appsettings.json&lt;/code&gt; file been checked in with the real values rather than just the template? This is a common thing throughout dev teams which can easily lead to compromises.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
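&lt;p&gt;One way to keep real values out of a checked-in &lt;code&gt;appsettings.json&lt;/code&gt; during local development is the .NET user-secrets store. A minimal sketch, where the configuration key and connection string are purely illustrative:&lt;/p&gt;

```shell
# Enable the secret store for the project (adds a UserSecretsId to the .csproj)
dotnet user-secrets init

# Store the real value outside the repository; the checked-in
# appsettings.json keeps only a placeholder
dotnet user-secrets set "ConnectionStrings:Default" "Server=localhost;Database=Example;"
```

&lt;p&gt;The values are read through the normal configuration system in development, so nothing sensitive ever needs to be committed.&lt;/p&gt;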

&lt;h4&gt;
  
  
  Pipelines
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Correct Pipelines&lt;/strong&gt; : Does the repository have the right pipelines for the project to cover both build and deployment? Are they correctly configured with approval checks?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pipeline Tasks&lt;/strong&gt; : Do the pipelines complete the right tasks for their purpose?

&lt;ul&gt;
&lt;li&gt;It is common to run tests in a build pipeline that runs, say, on a pull request. However, are they also running on the deployment pipeline? Personally, I prefer that they do, as a final check before publishing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Code quality
&lt;/h3&gt;

&lt;p&gt;The next step is to review the code itself. How well is it written? Can it be easily understood? Is it maintainable? Are there any issues with the code that need to be addressed? Is it documented?&lt;/p&gt;

&lt;h4&gt;
  
  
  Readability
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clarity&lt;/strong&gt; : Is the code easy to read and understand? Are variable and function names descriptive and meaningful?

&lt;ul&gt;
&lt;li&gt;Have you got &lt;code&gt;var a = 1&lt;/code&gt; or are they more descriptive like &lt;code&gt;var age = 1&lt;/code&gt;?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Comments&lt;/strong&gt; : Are there comments in the code that explain what the code is doing? Or are there too many comments?

&lt;ul&gt;
&lt;li&gt;Arguably, comments should be needed only sparingly. Are you reading the code wishing there were more comments to explain it? Or are you seeing too many comments describing specific variables that could simply be renamed?&lt;/li&gt;
&lt;li&gt;For example, some of the classes in one repository did not have all of the right summary tags. On the flip side, another had a class with a comment for every variable, which in my opinion raised the question of why they were needed at all.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Formatting&lt;/strong&gt; : Is the code formatted consistently and cleanly?

&lt;ul&gt;
&lt;li&gt;Are there standards that this could be reviewed against? The way code is laid out can be extremely personal, but ideally a team should have a standard approach and there should be no rogue, inconsistently formatted files.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Maintainability
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Modularity&lt;/strong&gt; : Is the code modular and easy to maintain? Are there any areas of the code that are tightly coupled and difficult to change?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reusability&lt;/strong&gt; : Are there any areas of the code that can be reused? Follow DRY! (Don't Repeat Yourself)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Commit Messages&lt;/strong&gt; : Are the commit messages clear and describe what has been changed?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pull Requests&lt;/strong&gt; : Are the pull requests well written and describe the changes that are being proposed?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Adherence to Standards
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Coding Standards&lt;/strong&gt; : Does the project follow the coding standards that have been set by the team or the best practice for that technology?

&lt;ul&gt;
&lt;li&gt;If the team have templates and standards they use, check this project against them. If they have deviated flag it up and discuss why.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Design Patterns&lt;/strong&gt; : In some industries there are certain patterns and designs that have to be followed. Have they been followed? If not, why?

&lt;ul&gt;
&lt;li&gt;Within the public sector in the UK, there is the GOV.UK Design System that must be followed for any public government website. This helps with consistency across all public websites.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Error Handling
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling&lt;/strong&gt; : Is error handling implemented correctly? Are the right errors being caught and thrown? Are they translated properly so that consuming applications can understand them, and if they are presented to a user, is it done in a user-friendly way?

&lt;ul&gt;
&lt;li&gt;Something that I found on the API side was that the application layer would throw a not-found error if it couldn't find an entity by ID; however, no other appropriate error types were thrown on other paths.&lt;/li&gt;
&lt;li&gt;Is the API returning the correct code and status for the thrown error? A mapper can be used to handle this for you.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Logging&lt;/strong&gt; : Does the application correctly log what is happening and why? Are logs appropriately sanitised?

&lt;ul&gt;
&lt;li&gt;Something that is great to include in logging is a correlation ID. This can be used to trace a whole transaction and see what happened when and why.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Retry logic&lt;/strong&gt; : Is retry logic implemented when relying on another system such as a backend API?

&lt;ul&gt;
&lt;li&gt;APIs can sometimes fail, or have a cold start. Retrying them after a short delay can help to ensure that the user still gets the information required rather than being met with an error or no data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
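&lt;p&gt;On the mapper point, here is a minimal sketch of ASP.NET Core middleware that translates thrown exceptions into status codes. The exception types and mappings are illustrative, not a definitive implementation:&lt;/p&gt;

```csharp
// Maps exception types thrown further down the pipeline to HTTP status codes,
// so the API returns the correct status rather than a generic 500 for everything.
// Assumes an ASP.NET Core project (HttpContext etc. via implicit usings).
public sealed class ExceptionMappingMiddleware
{
    private readonly RequestDelegate _next;

    public ExceptionMappingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (Exception ex)
        {
            // Known exception types get specific codes; anything else is a 500.
            context.Response.StatusCode = ex switch
            {
                KeyNotFoundException => StatusCodes.Status404NotFound,
                ArgumentException => StatusCodes.Status400BadRequest,
                UnauthorizedAccessException => StatusCodes.Status403Forbidden,
                _ => StatusCodes.Status500InternalServerError,
            };

            // Only expose the exception message for expected errors;
            // unexpected failures get a generic message so no internal detail leaks.
            var message = context.Response.StatusCode == StatusCodes.Status500InternalServerError
                ? "An unexpected error occurred."
                : ex.Message;
            await context.Response.WriteAsJsonAsync(new { error = message });
        }
    }
}
```

&lt;p&gt;Registered early in the pipeline with &lt;code&gt;app.UseMiddleware&amp;lt;ExceptionMappingMiddleware&amp;gt;();&lt;/code&gt;, this keeps the status-code mapping in one place instead of scattered across controllers.&lt;/p&gt;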

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;Security is an important aspect of any code review. Are there any security issues that need to be addressed? Are there any areas of the code that could be improved from a security perspective? At this point, some of the areas to be checked vary depending on the technology stack being used, and you must be more specific depending on the repository. Some areas are of course required regardless and should be treated as such.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authentication and Authorisation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt; : Has authentication been implemented and is it secure? Does every point of the application that needs authentication have it?

&lt;ul&gt;
&lt;li&gt;There are lots of different authentication methods available. Most nowadays use OAuth2 or OpenID Connect. Using one of these helps to follow good standards and best practices.&lt;/li&gt;
&lt;li&gt;Have the team rolled their own authentication? Yes - I have seen this, and honestly some templates still have it in. Why? It's 2025, there are so many great options for this.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Authorisation&lt;/strong&gt; : Have the right roles and permissions been created in the application and are the right levels of authorisation required for each stage?

&lt;ul&gt;
&lt;li&gt;Is the data returned trimmed to just the organisation a user belongs to?&lt;/li&gt;
&lt;li&gt;Has the principle of least privilege been followed? A user should have the bare minimum of access to be able to do what they need to do.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backend API
&lt;/h3&gt;

&lt;p&gt;These are the specific areas that I looked at for the backend API repository.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Validation&lt;/strong&gt; : Is the data that is passed to the API validated and checked to be in the correct format?

&lt;ul&gt;
&lt;li&gt;The given API I was reviewing had little input due to the nature of the application, but even the simple logging endpoint still had validation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;SQL Injection&lt;/strong&gt; : Are there any areas of the code that are vulnerable to SQL injection attacks?

&lt;ul&gt;
&lt;li&gt;The project I reviewed used Entity Framework, so this was not an issue, but it is still worth checking.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cross-Site Request Forgery (CSRF)&lt;/strong&gt;: Are there any areas of the code that are vulnerable to CSRF attacks? Are anti-CSRF tokens being used where required? These are required on any endpoints which update state in the backend, such as POST, PUT and DELETE.

&lt;ul&gt;
&lt;li&gt;The application I reviewed didn't have this in place. There were only a small number of endpoints that could be affected, just logging ones, but it should still be implemented.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
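&lt;p&gt;On the CSRF point, ASP.NET Core ships antiforgery support that can be wired up in &lt;code&gt;Program.cs&lt;/code&gt;. A minimal sketch; the header name is an illustrative choice, and the frontend would need to echo the issued token back on POST, PUT and DELETE requests:&lt;/p&gt;

```csharp
var builder = WebApplication.CreateBuilder(args);

// Issue and validate anti-CSRF tokens via a custom request header,
// which a frontend such as Angular can send on state-changing requests.
builder.Services.AddAntiforgery(options => options.HeaderName = "X-CSRF-TOKEN");

var app = builder.Build();

// Enforce token validation for endpoints that require it.
app.UseAntiforgery();

app.Run();
```

&lt;p&gt;Angular's &lt;code&gt;HttpClient&lt;/code&gt; has built-in XSRF support on the frontend side, which pairs naturally with a setup like this.&lt;/p&gt;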

&lt;h3&gt;
  
  
  Frontend
&lt;/h3&gt;

&lt;p&gt;These are the specific areas that I looked at for the frontend repository.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input validation&lt;/strong&gt; : Are all inputs validated to ensure that they are in the correct format and exist if required? Are input fields set to the correct types?

&lt;ul&gt;
&lt;li&gt;Whilst this application had minimal data input to the backend, there were still search fields and other inputs on the UI that should be validated.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cross-Site Scripting (XSS)&lt;/strong&gt;: The project I reviewed used Angular, so this was less of an issue as there are built-in protections against XSS attacks. You should still always review that the correct protections are in place.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cross-Site Request Forgery (CSRF)&lt;/strong&gt;: Yes, this is a duplicate of the backend API check; it should be checked here too. Has one side implemented it but not the other?&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Content Security Policy (CSP)&lt;/strong&gt;: Is a CSP implemented? This is a security feature that helps prevent XSS attacks by restricting the sources of content that can be loaded by the browser. A good tool for checking this is &lt;a href="https://csp-evaluator.withgoogle.com/" rel="noopener noreferrer"&gt;CSP Evaluator&lt;/a&gt;.

&lt;ul&gt;
&lt;li&gt;Just because a CSP is set up in the code doesn't mean it works properly or is even published. Always use a tool to check it.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Robots.txt accessibility&lt;/strong&gt; : Is the robots.txt file accessible? This is a file that tells search engines which pages should not be indexed. It should not contain any sensitive information and should be accessible to search engines.

&lt;ul&gt;
&lt;li&gt;In this instance, there was a robots.txt file in the repository. It wasn't however accessible on the web server.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;The next key element that I reviewed was the testing of the code in each repository. Are there tests in place? Are they well written and easy to understand? Do they cover all areas of the code? Are they run as part of the pipeline?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For the backend API I was looking for both unit tests and integration tests. Unit tests are used to test individual components of the code, while integration tests are used to test how the components work together.&lt;/li&gt;
&lt;li&gt;For the frontend I was looking for unit tests and end-to-end tests. Unit tests are used to test individual components of the code, while end-to-end tests are used to test the entire application from the user's perspective.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Test coverage is also something that I looked for. The project was utilising Azure DevOps for the build pipelines, with an additional tool called Coverlet to provide code coverage reports. This tool is fantastic for both developers and reviewers; it generates easy-to-read reports that show test coverage broken down by both file and class. Back when I ran a dev team I would always push for 80% test coverage. Can this always be achieved? No, but having a target helps to ensure that tests are written. Sometimes it just doesn't happen though; not every path gets tested.&lt;/p&gt;
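&lt;p&gt;As a point of reference, with the Coverlet collector package installed in the test project, the coverage report is produced as part of a normal test run:&lt;/p&gt;

```shell
# Run the tests and collect coverage via the coverlet.collector data collector
dotnet test --collect:"XPlat Code Coverage"
```

&lt;p&gt;The resulting Cobertura XML can then be published by the Azure DevOps code coverage task, so reviewers see the per-file breakdown directly on the build.&lt;/p&gt;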

&lt;h2&gt;
  
  
  Recommendations
&lt;/h2&gt;

&lt;p&gt;The final section of the report was the recommendations. This is where I provided recommendations to resolve each of the findings. I broke this down into a section for each repository and made sure that each recommendation was numbered so it could easily be referenced.&lt;/p&gt;

&lt;p&gt;To help the team prioritise the recommendations to resolve I scored them against a simple scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Critical&lt;/strong&gt; : These should be dealt with as soon as possible. They could pose a serious risk to the project.

&lt;ul&gt;
&lt;li&gt;For example, sensitive data in a repository. Someone who shouldn't have a password now has it. The password needs to be changed and deleted from the repository. Make sure gitignores are in place to help prevent it happening again.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;High&lt;/strong&gt; : These should be dealt with next; they could cause you a problem.

&lt;ul&gt;
&lt;li&gt;For example, a lack of documentation for the codebase, making it harder to maintain. Does it matter instantly? Probably not, but the moment you have a new developer join or one leave the team, it will.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Medium&lt;/strong&gt; : These are issues that should be addressed, but are not critical to the project.

&lt;ul&gt;
&lt;li&gt;For example, poor git standards such as commit messages or pull request descriptions that do not follow best practices.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Low&lt;/strong&gt; : These are issues that could be addressed, but have low impact to the project.

&lt;ul&gt;
&lt;li&gt;For example, minor updates to pipelines to properly name tasks; a nice-to-have but not essential.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Writing a code review was not a quick job; it took me a few days once I got into it, and honestly it took me a while to get started. I think the key issue was simply not knowing where to start, and I hope I've helped you with that if you've read this far!&lt;/p&gt;

&lt;p&gt;Realistically you are going to be reading code that you have never read or written, and that's the point. Don't be worried if it's taking time, there is a reason you have been chosen to do it. Take your time and break it into reasonable chunks. As a friend always said to me, "Take small bites of the elephant, you can't do it all at once".&lt;/p&gt;

&lt;p&gt;My last piece of advice: be consistent and thorough. Take your time and remember that the point of doing it is to help build great software in a great way.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why Write a Blog?</title>
      <dc:creator>Matthew Thomas</dc:creator>
      <pubDate>Mon, 16 Jun 2025 00:00:00 +0000</pubDate>
      <link>https://forem.com/mthomas564/why-write-a-blog-5gc1</link>
      <guid>https://forem.com/mthomas564/why-write-a-blog-5gc1</guid>
      <description>&lt;p&gt;Why did I start a blog?&lt;/p&gt;

&lt;p&gt;Well, to start with, everyone seems to do it and I felt that I was missing out. But there is more to it than that: I wanted to create a space where I could share my thoughts, experiences, and insights on cloud architecture and software development.&lt;/p&gt;

&lt;p&gt;On top of that I wanted to create a digital portfolio that I could use to showcase my projects and contributions to the tech community. I also wanted to give back to the developer and cloud communities by sharing my knowledge and experiences.&lt;/p&gt;

&lt;p&gt;In the technical industry we spend so much time working on projects that it's really easy to forget to take a step back and look at what we have achieved. Whether that is completing a large-scale project, or just solving that tricky problem that had you headbutting the desk for a day. Writing a blog allows me, and honestly forces me, to take that step back and reflect.&lt;/p&gt;

&lt;p&gt;Sometimes it might just be a short post to try and document an issue that has driven me round the bend for a day or two, or it might be a longer post that goes into more detail about a project I have been working on. Either way, I hope that by sharing my experiences and insights, I can help others in the community.&lt;/p&gt;

</description>
      <category>career</category>
      <category>community</category>
      <category>devjournal</category>
      <category>writing</category>
    </item>
  </channel>
</rss>
