<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Peter Morlion</title>
    <description>The latest articles on Forem by Peter Morlion (@petermorlion).</description>
    <link>https://forem.com/petermorlion</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F117970%2Fb7392cc1-1eb1-4219-97e5-12d5942458ec.png</url>
      <title>Forem: Peter Morlion</title>
      <link>https://forem.com/petermorlion</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/petermorlion"/>
    <language>en</language>
    <item>
      <title>Adding A Mock Integration AWS API Gateway with Serverless</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Wed, 27 Apr 2022 10:39:42 +0000</pubDate>
      <link>https://forem.com/petermorlion/adding-a-mock-integration-aws-api-gateway-with-serverless-589g</link>
      <guid>https://forem.com/petermorlion/adding-a-mock-integration-aws-api-gateway-with-serverless-589g</guid>
      <description>&lt;p&gt;A curious use-case came up to me this week. We have a REST API in AWS API Gateway that integrates with a Lambda. This is set up using &lt;a href="https://www.serverless.com/"&gt;Serverless&lt;/a&gt;. This is a multi-tenant system and because a former client didn’t do their cleanup, we’re still receiving a lot of calls that basically return errors (because the tenant no longer exists on our side). In AWS Lambda, this means a lot of useless invocations and a higher bill at the end of the month. Here’s a way a returning the error in API Gateway, before your Lambda is invoked. In other words, let’s have the AWS infrastructure handle this for us.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Resources
&lt;/h2&gt;

&lt;p&gt;This screenshot of the AWS Console clarifies what we want to end up with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hrO-dFrz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.petermorlion.com/wp-content/uploads/2022/04/API-Gateway-goal-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hrO-dFrz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.petermorlion.com/wp-content/uploads/2022/04/API-Gateway-goal-1.png" alt="" width="219" height="166"&gt;&lt;/a&gt;Let’s break this down. Previously, we only had the &lt;code&gt;/{proxy+}&lt;/code&gt; resource. This means that the path is sent to our lambda and the lambda would handle the routing. You could choose to put your resources in API Gateway instead of using what is called Proxy Integration, but that’s not really the issue here (and also just a matter of preference in my opinion).When a request is made to &lt;code&gt;https://example.com/goodtenant/whatever&lt;/code&gt;, we want to invoke our Lambda and return the response. But if our bad tenant makes a request to &lt;code&gt;https://example.com/badtenant/whatever&lt;/code&gt;, we want to stop them short and immediately return an error.&lt;/p&gt;

&lt;p&gt;So in the screenshot above, any request to &lt;code&gt;/badtenant&lt;/code&gt; or a sub-path of that should return an error.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Integration
&lt;/h2&gt;

&lt;p&gt;Now, we also need this &lt;code&gt;badtenant&lt;/code&gt; resource to point to a mock integration and return (in our case) an HTTP 403. This is what it should look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P1c5qc8s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.petermorlion.com/wp-content/uploads/2022/04/API-Gateway-integration-1024x391.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P1c5qc8s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.petermorlion.com/wp-content/uploads/2022/04/API-Gateway-integration-1024x391.png" alt="" width="880" height="336"&gt;&lt;/a&gt;We need an integration of type “MOCK” and have it return a HTTP 403 for any result that the mock endpoint returns.In a mock endpoint, you can put some basic logic to return a response. Look at it like a mini-lambda. In our case, it doesn’t really matter what we return, because we’ll return a 403 in the end anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Serverless File
&lt;/h2&gt;

&lt;p&gt;Now for the most important part: this is what your serverless.yml should look like (omitting the uninteresting parts):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;functions:
  main:
    handler: lib/index.handler
    name: myLambda
    events:
      - http: POST /{proxy+}

...

resources:
  Resources:
    ApiGatewayResourceBadTenant:
      Type: AWS::ApiGateway::Resource
      Properties:
        ParentId:
          Fn::GetAtt: ["ApiGatewayRestApi", "RootResourceId"]
        PathPart: "badtenant"
        RestApiId:
          Ref: ApiGatewayRestApi
    ApiGatewayResourceBadTenantProxyVar:
      Type: AWS::ApiGateway::Resource
      Properties:
        ParentId:
          Ref: ApiGatewayResourceBadTenant
        PathPart: "{proxy+}"
        RestApiId:
          Ref: ApiGatewayRestApi
    ApiGatewayResourceBadTenantProxyVarAny:
      Type: AWS::ApiGateway::Method
      Properties:
        HttpMethod: ANY
        ResourceId:
          Ref: ApiGatewayResourceBadTenantProxyVar
        RestApiId:
          Ref: ApiGatewayRestApi
        Integration:
          Type: MOCK
          PassthroughBehavior: NEVER
          RequestTemplates:
            application/json: "{\"statusCode\":403}"
          IntegrationResponses:
            - SelectionPattern: .*
              StatusCode: 403
        MethodResponses:
          - StatusCode: 403
        AuthorizationType: NONE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In essence, what this does is define a new resource called &lt;code&gt;ApiGatewayResourceBadTenant&lt;/code&gt;. The parent (&lt;code&gt;ApiGatewayRestApi&lt;/code&gt;) is created by the Serverless framework and is always called &lt;code&gt;ApiGatewayRestApi&lt;/code&gt;. Under the resource, we create the proxy resource. And under that, we create the method.&lt;/p&gt;

&lt;p&gt;The method contains a &lt;code&gt;MOCK&lt;/code&gt; integration and returns a 403.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;After deploying this with serverless, the necessary resources should be created. If we make a call to our bad tenant, we see the 403:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3SFWo1-B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.petermorlion.com/wp-content/uploads/2022/04/Postman.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3SFWo1-B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.petermorlion.com/wp-content/uploads/2022/04/Postman.png" alt="" width="754" height="332"&gt;&lt;/a&gt;This is pure API Gateway and not a single of our Lambda’s was invoked.&lt;/p&gt;

&lt;h2&gt;
  
  
  That’s It!
&lt;/h2&gt;

&lt;p&gt;It took me a while to put the pieces together, but in the end it’s a fairly simple and elegant solution. We don’t need to do anything manually in the AWS console, which means everything is automated and stays in source control. And we can still stop the bad tenant from triggering our Lambdas too much, which is good for our Amazon bill!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>uncategorized</category>
    </item>
    <item>
      <title>The Developer’s Guide to Relationship-based Access Control</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Thu, 22 Apr 2021 10:35:33 +0000</pubDate>
      <link>https://forem.com/petermorlion/the-developer-s-guide-to-relationship-based-access-control-5d17</link>
      <guid>https://forem.com/petermorlion/the-developer-s-guide-to-relationship-based-access-control-5d17</guid>
      <description>&lt;p&gt;If you’ve never heard of ReBAC (relationship-based access control), that’s fine. It’s not too difficult and we’ll walk you through it. Chances are, you’re already using this model in your current applications! Allow us to tell you why ReBAC is such an interesting model for access control and how you can start implementing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ReBAC?
&lt;/h2&gt;

&lt;p&gt;Relationship-based access control is a model where access decisions are based on the relationships a subject has. When the subject (often a user, but possibly also a device or application) wants to access a resource, our system will either allow or deny this access based on the specific relationships the subject has.&lt;/p&gt;

&lt;p&gt;Probably the most well-known examples of relationship-based access control are social networks. On Facebook, for example, you can give friends of friends access to your posts and photos. We can easily draw this in a diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xw501iaj3y7zy6ju1jg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xw501iaj3y7zy6ju1jg.png" alt="Relationships"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My friends can see my posts. Friends of my friends can see my posts. But friends of those friends cannot because they have the wrong relationship path to my post (one too many steps).&lt;/p&gt;

&lt;p&gt;So, in ReBAC, we don’t allow access because someone has a certain role (e.g., a user in the “Human Resources” group). We allow access because they have certain relationships with other entities in our system.&lt;/p&gt;

&lt;p&gt;ReBAC is often explained in academic literature in reference to social networks because, by definition, they contain a network of relationships. But ReBAC isn’t applicable to social networks alone. In fact, you probably already have a network of relationships in your database.&lt;/p&gt;
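&lt;p&gt;The friends-of-friends rule above can be sketched in a few lines of code. This is a hypothetical illustration (not how any social network actually implements it): access is granted when a chain of at most two “friend” relationships connects the requester to the owner of the post.&lt;/p&gt;

```python
# ReBAC sketch: allow access when the requester is connected to the
# resource owner by at most `max_hops` "friend" relationships.
# All names and data are made up for illustration.

friends = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob", "dave"},
    "dave": {"carol"},
}

def can_view_posts(requester: str, owner: str, max_hops: int = 2) -> bool:
    """Breadth-first walk over friendship edges, up to max_hops steps."""
    frontier = {owner}
    for _ in range(max_hops):
        # Expand one relationship step outward from the owner.
        frontier = {f for person in frontier for f in friends.get(person, set())}
        if requester in frontier:
            return True
    return False

print(can_view_posts("carol", "alice"))  # friend of a friend -> True
print(can_view_posts("dave", "alice"))   # one step too far -> False
```

&lt;p&gt;Note that the decision is based purely on the relationship path, not on any role assigned to the requester.&lt;/p&gt;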

&lt;h2&gt;
  
  
  ReBAC Already in Your System
&lt;/h2&gt;

&lt;p&gt;Many applications already work with a database that contains a network of entities that have relationships with each other. We can clearly see this in relational databases. For example, take a look at this database schema:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8scxqt0hykh64ahkpmxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8scxqt0hykh64ahkpmxp.png" alt="Database schema"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The therapist has an “is_therapist_of” relationship with a student, who has an “is_member_of” relationship with the class, which in turn has a relationship with the timetable. In classic authorization models, we would give a therapist access to the timetable by adding them to a certain group. That is, we could give them the “timetable_viewer” role.&lt;/p&gt;

&lt;p&gt;This access may not be granular enough though. What if we only want to give the therapist access to the class timetable of a student for which they are the therapist? If we give them the “timetable_viewer” role, they could have access to all timetables.&lt;/p&gt;

&lt;p&gt;We could easily run a query to see if the therapist has the correct relationship. In SQL, it could look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT t.Id, tt.Id
FROM Therapist t
INNER JOIN Student s ON s.TherapistId = t.Id
INNER JOIN Class c ON s.ClassId = c.Id
INNER JOIN Timetable tt ON tt.ClassId = c.Id
WHERE t.Id = @TherapistId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would give you a list of timetables that the given therapist has access to; they have access because they are the therapist of students in the classes for those timetables. This is what we call a &lt;em&gt;policy&lt;/em&gt;: a way of expressing who has access to content in your application.&lt;/p&gt;

&lt;p&gt;Your code may already contain several such policies, but this approach doesn’t scale easily. For every change to the policies, developers need to change the code. You could build a system that makes such changes easy, even with a good UI. But you wouldn’t write your own database, so why invest time and money in an authorization system when you should be focused on your business logic and on the areas where your company or organization makes a difference?&lt;/p&gt;

&lt;p&gt;Another downside of this approach, and why it doesn’t scale, is that we’re tightly coupling our policies to our service implementations. So, let’s look at how we can put our policies in a separate context, using Scaled Access.&lt;/p&gt;

&lt;h2&gt;
  
  
  ReBAC with Scaled Access
&lt;/h2&gt;

&lt;p&gt;This is a simplified version of what our application architecture often looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb3t73gxsqn4bwg1ux40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb3t73gxsqn4bwg1ux40.png" alt="App architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Authorization decisions are now located in our code, based on what’s in our database. With Scaled Access, we can design our policies in the user interface, based on the network we’ve uploaded to the platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlkjt0qp20q3d2krh0fh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlkjt0qp20q3d2krh0fh.png" alt="Scaled Access"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This doesn’t imply that we need to replace our existing database with the Scaled Access platform, but the platform does need a way of knowing what the network of entities and relationships looks like. Once we have that data, designing policies is easy.&lt;/p&gt;

&lt;p&gt;Now that we have our data and our policies, all we need to do is call Scaled Access when a user needs access to a resource:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzp6g1x3myj65gzh1ogd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzp6g1x3myj65gzh1ogd.png" alt="App architecture with Scaled Access"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How you do this depends on how your application is hosted. In AWS, you can use &lt;a href="https://aws.amazon.com/blogs/compute/introducing-custom-authorizers-in-amazon-api-gateway/" rel="noopener noreferrer"&gt;Custom Authorizers&lt;/a&gt;, and Azure offers a solution with &lt;a href="https://docs.microsoft.com/en-us/azure/api-management/policies/authorize-request-using-external-authorizer" rel="noopener noreferrer"&gt;API Management Policies&lt;/a&gt;. What you’re looking for, based on your requirements, is a way of adding custom logic to the authorization model of your cloud provider’s API solution. If the platform you run your applications and APIs on doesn’t offer an obvious way of customizing authorization, don’t hesitate to contact us. We can help you.&lt;/p&gt;

&lt;p&gt;Conceptually, all you need to add is an HTTP call to our authorization endpoint:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7izbtj8z5ox1wto2mrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7izbtj8z5ox1wto2mrc.png" alt="Policy result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The request above shows we’re checking if the user has read access to a specific timetable. The outcome “deny” shows us that they don’t.&lt;/p&gt;

&lt;h2&gt;
  
  
  But Wait! There’s More
&lt;/h2&gt;

&lt;p&gt;With Scaled Access, you can easily add relationship-based access control to your applications and APIs. It offers a simple way of designing and applying policies, but we didn’t stop there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Invitations
&lt;/h3&gt;

&lt;p&gt;Scaled Access has a built-in invitation system, so users can invite others to accept a relationship in the application. In our example, we could have a teacher invite a new therapist to accept an “is_therapist_of” relationship with a student. If the therapist accepts, they will automatically have access to the timetable. You don’t need to implement this invitation flow yourself. You just need to call the necessary endpoint to send the invite, and we’ll do the rest.&lt;/p&gt;

&lt;p&gt;In fact, we dogfood our system to manage our tenants and customers. When a new customer is added, we send them an invitation through the same flow that you use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Token Enrichment
&lt;/h3&gt;

&lt;p&gt;If you’re already using an external identity provider like Auth0 or Okta, we provide an integration to enrich the access token that your users receive. Once things are set up, a user’s direct relationships are added to their token, allowing you to make simple authorization decisions without any extra calls. This is what the payload of such a decoded token could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "https://scaledaccess.com/relationships": [
    {
      "relationshipType": "is_therapist_of",
      "to": {
        "id": "2484519d-2e4a-4e2b-89e7-f5e7d6955634",
        "type": "student",
        "name": "Elizabeth"
      }
    }
  ],
  "iss": "https://petermorlion-relbac-helloworld.eu.auth0.com/",
  "sub": "auth0|605318e5a9b06e006a7e5092",
  "aud": "https://helloworld.ropc.example.com",
  "iat": 1616667136,
  "exp": 1616753536,
  "azp": "t2OGcArqg6EGvdPW9XkZ6BLDbbCdqv6c",
  "gty": "password"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Interested in token enrichment, but not working with Auth0 or Okta? Let us know, and we’ll help you set things up.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Policies
&lt;/h3&gt;

&lt;p&gt;If you’re thinking, “great, but my policies won’t easily transfer to the designer in your UI,” we’ve got you covered. Behind the scenes, we’re using the &lt;a href="https://www.openpolicyagent.org/docs/latest/policy-language/" rel="noopener noreferrer"&gt;Rego language&lt;/a&gt;. What you design in the UI is translated to Rego, a language to define access policies. Then, when you need a decision from the authorization endpoint, we evaluate this Rego policy with the necessary data as input (e.g., a user and their relationships).&lt;/p&gt;

&lt;p&gt;But, if the designer doesn’t fit your needs or taste, you can skip it and just upload your custom Rego policy. Check out &lt;a href="https://docs.scaledaccess.com/?path=create-update-policy#create-update-policy" rel="noopener noreferrer"&gt;the docs&lt;/a&gt; for more on that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaled Access Makes ReBAC Easy
&lt;/h2&gt;

&lt;p&gt;Relationship-based access control is more than just a model for social networks. Your database likely contains a network of entities and their relationships already. With ReBAC, you can model authorization policies based on these relationships. ReBAC offers more than role-based access control because it allows policies based on multi-step relationships between entities and decisions for specific entities, not entity types (e.g., you have access to this specific timetable, not all timetables).&lt;/p&gt;

&lt;p&gt;Additionally, Scaled Access offers you a platform to manage and configure your relationship-based access control, which gives you more time to focus on what’s important for your business.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post originally appeared on &lt;a href="https://www.scaledaccess.com/whitepapers/the-developers-guide-to-relationship-based-access-control/" rel="noopener noreferrer"&gt;the Scaled Access website&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>authorization</category>
      <category>rebac</category>
      <category>security</category>
    </item>
    <item>
      <title>The Human Side of Legacy Code</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Thu, 11 Mar 2021 11:38:00 +0000</pubDate>
      <link>https://forem.com/petermorlion/the-human-side-of-legacy-code-3o0</link>
      <guid>https://forem.com/petermorlion/the-human-side-of-legacy-code-3o0</guid>
      <description>&lt;p&gt;I’ve written several articles on the technical sides of legacy code and technical debt. But now, let’s focus on the emotional side, the human factor. I’ve touched on this previously, but would like to expand on it a bit further.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Frustration Can Lead To
&lt;/h2&gt;

&lt;p&gt;Working with legacy code can be frustrating, especially if there is no room to improve it, or if there is room but the team can’t seem to succeed in improving it, for whatever reason.&lt;/p&gt;

&lt;p&gt;Frustration that is left unaddressed for too long leads to all kinds of bad things.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qqJxLUDK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2021/03/Negative-feelings.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qqJxLUDK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2021/03/Negative-feelings.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers can start taking things personally. Any critique on their code can emotionally hurt them.&lt;/p&gt;

&lt;p&gt;On the other hand, they might think that others are writing bad code. This is a mechanism called “superiority bias”, where we tend to overestimate our good aspects and underestimate those of others.&lt;/p&gt;

&lt;p&gt;In a context where frustration is already simmering, this can lead to finger-pointing, heated discussions, and, in the worst case, people leaving the company.&lt;/p&gt;

&lt;p&gt;To make matters worse, you’ll lose the most talented people first, because they have less trouble finding new jobs. This leaves your team with the less skilled developers, making it even more difficult to get that legacy project back on track! This is called the Dead Sea Effect. It’s not a good situation and non-trivial to get out of.&lt;/p&gt;

&lt;h2&gt;
  
  
  Calm Down
&lt;/h2&gt;

&lt;p&gt;So we started with a bit of frustration and ended up with the team falling apart. This is possible, and I have seen it, but not every situation escalates so extremely.&lt;/p&gt;

&lt;p&gt;Even without such an extreme outcome, the frustration can still lead to personal vendettas, infighting, reduced productivity, blaming, and finger-pointing.&lt;/p&gt;

&lt;p&gt;It doesn’t have to be this way. While there will always be some frustration about bad code, we can empower teams to tackle these pain points and approach the problems in a more positive way.&lt;/p&gt;

&lt;p&gt;But that requires some effort from the organization and management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Acknowledging Emotions
&lt;/h2&gt;

&lt;p&gt;Management and team leaders (and team members) should acknowledge the negative feelings that code can cause, and there should be room for these emotions.&lt;/p&gt;

&lt;p&gt;But the same leaders play an important part in channeling these emotions. This can be done by getting together with the team, acknowledging the fact that there is a problem, and providing a way out.&lt;/p&gt;

&lt;p&gt;The way out can take many forms, but it will take time, money, and team effort. Allowing the team the time to tackle the technical debt is crucial. It means that team leaders and management must take away some of the pressure that comes with constantly focusing on new features.&lt;/p&gt;

&lt;p&gt;With this, the team has room to breathe and won’t feel stuck in a situation where the only real way out is to leave the company.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.redstar.be/the-human-side-of-legacy-code/"&gt;The Human Side of Legacy Code&lt;/a&gt; appeared first on &lt;a href="https://www.redstar.be"&gt;Red Star IT - Helping Managers Tackle Legacy Code&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>blog</category>
    </item>
    <item>
      <title>Escaping the Catch-22 of Anti-Test Arguments </title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Tue, 03 Mar 2020 10:32:11 +0000</pubDate>
      <link>https://forem.com/petermorlion/escaping-the-catch-22-of-anti-test-arguments-2gk4</link>
      <guid>https://forem.com/petermorlion/escaping-the-catch-22-of-anti-test-arguments-2gk4</guid>
      <description>&lt;p&gt;There's a &lt;a href="https://en.wikipedia.org/wiki/Catch-22"&gt;Catch-22&lt;/a&gt; hidden in the arguments that many people use to rationalize not writing tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Catch
&lt;/h2&gt;

&lt;p&gt;A Catch-22 is a situation you can't escape due to contradictory rules or limitations. In the case of automated tests for software, the arguments often go like this.&lt;/p&gt;

&lt;p&gt;At the start of the project, both developers and managers say that the project is too young and changing all the time. There's also market pressure to get something minimal out there as fast as possible. So there's no time to write tests.&lt;/p&gt;

&lt;p&gt;But once the project has matured, the code is harder to put under test, the developers haven't adopted the necessary habits to write tests, and surprise surprise, the market pressure is still there.&lt;/p&gt;

&lt;p&gt;So no tests are ever written. Unfortunately, this leads to increasingly complex and highly coupled code that is increasingly harder to test. Progress slows down and frustration follows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Escaping the Negative Cycle
&lt;/h2&gt;

&lt;p&gt;So when should we start writing tests? When people ask me that question, the answer is usually now. Right now.&lt;/p&gt;

&lt;p&gt;If your project is only getting started, it can be OK to write some code without tests: a proof-of-concept, a very small MVP to see if it works, etc. Just to see if and how things would work out.&lt;/p&gt;

&lt;p&gt;But this period is shorter than most would expect. In two to three weeks, a team of average skill can get something up and running, and then they should start taking things seriously. This includes writing tests.&lt;/p&gt;

&lt;p&gt;An experienced team (with TDD experience) will be able to write code that can be tested afterwards with minimal effort. But even they should not allow that “test-less” time to take too long. Inevitably, code will become coupled and difficult to test.&lt;/p&gt;

&lt;h2&gt;
  
  
  But I'm Not Allowed!
&lt;/h2&gt;

&lt;p&gt;What if management or some lead dev/architect forbids you from writing tests?&lt;/p&gt;

&lt;p&gt;My advice? Still do it.&lt;/p&gt;

&lt;p&gt;First, management shouldn't be telling you how to write code, only what to implement. Lead developers and architects are different, but they wouldn't tell you which keyboard layout or IDE to use, would they? Testing can be seen as a tool for writing code: instead of running the application to verify your changes, you run automated tests.&lt;/p&gt;

&lt;p&gt;You can then also explain why this works faster for you. There is less manual testing and debugging to be done. It also leads to better code quality and stops other developers from breaking features you've implemented.&lt;/p&gt;

&lt;p&gt;But of course, that assumes there is room for rational arguments. If there isn't, do you really want to work there? With companies screaming desperately for developers, you should be able to find a better place to work. There's a reason there are so many recruiters in the IT industry.&lt;/p&gt;

&lt;p&gt;Of course, this might be different in your specific situation or region, but in general, there are better jobs out there for developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start Now
&lt;/h2&gt;

&lt;p&gt;So regardless of how old your project is and regardless of what others tell you, my advice is to start writing tests now.&lt;/p&gt;

&lt;p&gt;If you have a legacy project and don't know where to start, try to find a small piece of code that is isolated and write a test for that. It'll allow you to set up the necessary CI infrastructure, giving you the foundations to continue.&lt;/p&gt;

&lt;p&gt;Once you start writing tests, you'll get better at it and start noticing other places where you can add tests.&lt;/p&gt;

&lt;p&gt;Often, it just takes a small push to get others to start writing tests as well. A simple first test written by one developer often leads to more and more tests written by the entire team.&lt;/p&gt;

</description>
      <category>legacycode</category>
      <category>tdd</category>
      <category>testing</category>
      <category>uncategorized</category>
    </item>
    <item>
      <title>Timed vs Scoped Releases</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Wed, 19 Feb 2020 07:24:44 +0000</pubDate>
      <link>https://forem.com/petermorlion/timed-vs-scoped-releases-1jeg</link>
      <guid>https://forem.com/petermorlion/timed-vs-scoped-releases-1jeg</guid>
      <description>&lt;p&gt;I want to add one more thought to my previous post on &lt;a href="https://www.redstar.be/how-to-achieve-a-weekly-or-faster-software-release-cycle/"&gt;how to achieve weekly (or faster) releases&lt;/a&gt;. It’s about the difference between timed releases and scoped releases.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Little Background
&lt;/h2&gt;

&lt;p&gt;You can read my &lt;a href="https://www.redstar.be/how-to-achieve-a-weekly-or-faster-software-release-cycle/"&gt;previous post&lt;/a&gt;, but to summarize: a client of mine went from irregular releases that had a specific scope to weekly releases without a predefined scope. Whichever feature or change is done makes it into the release. If something isn’t finished, it’s not a big deal because there is a new release next week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scoped Releases
&lt;/h2&gt;

&lt;p&gt;A scoped release is simply where an organization selects a list of features that should be released together. There might be a rough estimate of when the release will be finished, but the important thing is that the scope of the release is defined.&lt;/p&gt;

&lt;p&gt;If the features aren’t finished by the estimated date, the release can be postponed until everything is finished, tested and approved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--npYEznMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2020/02/scoped-release.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--npYEznMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2020/02/scoped-release.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Timed Releases
&lt;/h2&gt;

&lt;p&gt;Timed releases happen when the organization determines when the next release should happen, regardless of which features are finished. The organization releases the completed features and postpones unfinished features to the next release.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZpCHO4VL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2020/02/timed-release.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZpCHO4VL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2020/02/timed-release.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scoped and Timed Releases
&lt;/h2&gt;

&lt;p&gt;You could of course combine scoped and timed releases. This happens when the organization defines both a date when to release the new version &lt;em&gt;and&lt;/em&gt; the list of features that should make it into said release.&lt;/p&gt;

&lt;p&gt;This is a tricky choice, because the chances of not finishing on time are high. Estimating the date of completion in software is extremely difficult and it gets more difficult the further in the future you try to estimate.&lt;/p&gt;

&lt;p&gt;So if you ask the team when they’ll finish a list of features and then set a deadline, you’re bound to miss the deadline or exhaust your team. Especially because we tend to underestimate the time it will take to finish features.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O0ljqtKr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2020/02/scoped-and-timed-release.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O0ljqtKr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2020/02/scoped-and-timed-release.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Which Should I Choose?
&lt;/h2&gt;

&lt;p&gt;Many organizations have chosen the combined option. But in my experience, you can’t have both.&lt;/p&gt;

&lt;p&gt;So you’ll have to choose between timed or scoped releases.&lt;/p&gt;

&lt;p&gt;Timed releases have the advantage of being predictable: everyone knows when a new version will be deployed. But a team might choose to spread out the releases too far, leading to so-called big bang releases, which increase the chances of disaster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_ParMTuB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2020/02/Timed-or-scoped-releases.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_ParMTuB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2020/02/Timed-or-scoped-releases.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The advantage of scoped releases is that you release a feature the moment it is done and you get feedback very fast. A disadvantage could be that the organization isn’t ready for the product changing at such a rapid pace.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Best For Legacy Projects?
&lt;/h2&gt;

&lt;p&gt;If you’re working on a project with technical debt and a long history, I recommend choosing timed releases. Try to release at least once a month and then try to evolve to &lt;a href="https://www.redstar.be/how-to-achieve-a-weekly-or-faster-software-release-cycle/"&gt;weekly releases&lt;/a&gt; later.&lt;/p&gt;

&lt;p&gt;You’ll have to take a good look at your release process and automate it thoroughly: testing, database changes, deployment, monitoring, etc. This might be some work, but you’ll save time and effort and reduce human error.&lt;/p&gt;

&lt;p&gt;This will also stop you from creating a large list of features that must make it into the release. These big bang releases increase the risk of bugs, and those bugs are often more difficult to track down. Timed releases force you to improve the product in small increments.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.redstar.be/timed-vs-scoped-releases/"&gt;Timed vs Scoped Releases&lt;/a&gt; appeared first on &lt;a href="https://www.redstar.be"&gt;Red Star IT - Helping Managers Tackle Legacy Code&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>blog</category>
    </item>
    <item>
      <title>How to Achieve a Weekly (Or Faster) Software Release Cycle</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Mon, 10 Feb 2020 08:06:33 +0000</pubDate>
      <link>https://forem.com/petermorlion/how-to-achieve-a-weekly-or-faster-software-release-cycle-21ol</link>
      <guid>https://forem.com/petermorlion/how-to-achieve-a-weekly-or-faster-software-release-cycle-21ol</guid>
      <description>&lt;p&gt;A client of mine started out with random ad-hoc releases with frequent regression bugs and moved successfully towards weekly releases. Here are the main points that helped us achieve this.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Previous Situation
&lt;/h2&gt;

&lt;p&gt;Previously, a developer would finish some feature and mark the issue or ticket as “Done.” But it was usually a mystery when the feature would make it to the next stage, let alone production. This was mildly frustrating because the developer wouldn’t know for sure if the feature worked as desired.&lt;/p&gt;

&lt;p&gt;When it was time to deploy to a stage, the project manager would also ask developers which features would be activated when we deployed the product.&lt;/p&gt;

&lt;p&gt;And apart from not having a clear view of what was being released when, we also regularly had regression bugs after releasing: things that used to work would suddenly stop working.&lt;/p&gt;

&lt;h2&gt;
  
  
  A First Step: Automated Tests
&lt;/h2&gt;

&lt;p&gt;To increase the confidence in our releases and to reduce the chance of regression bugs, I was tasked with writing a suite of &lt;a href="https://www.redstar.be/why-do-i-need-automated-tests/"&gt;automated tests&lt;/a&gt;. This gave us a safety net to make changes. The team soon picked up on this and added more tests.&lt;/p&gt;

&lt;p&gt;What’s nice is that we wrote several of them in &lt;a href="https://www.redstar.be/can-i-write-tests/"&gt;a language that the project manager could understand and verify&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Having automated tests in place reduced the number of regression bugs significantly. This gave management the confidence to take the next step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Increasing Release Cadence
&lt;/h2&gt;

&lt;p&gt;We switched to a production release every 3 weeks. Each week, we would move a version of the software up one stage. We have 3 stages (Integration, Staging and Production), so finishing a feature meant it would be in Production 3 weeks later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating Information About The Release
&lt;/h2&gt;

&lt;p&gt;In our project management system, we started tagging our tasks with releases. Whenever a developer finished a task, they would add the tag of the next release.&lt;/p&gt;

&lt;p&gt;We can now easily run a query that lists all tasks for the upcoming release. Everything that is done goes in the next release. What isn’t finished will have to wait for the next release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring and Automated Alerts
&lt;/h2&gt;

&lt;p&gt;Automated tests will reduce the chances of bugs, but they won’t avoid them altogether. So we set up monitoring to track the health of our system. When a certain threshold of errors was exceeded, we would automatically be notified.&lt;/p&gt;

&lt;p&gt;This system surfaced some extra bugs and issues that were previously unknown. We wrote tests and fixed the issues.&lt;/p&gt;
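&lt;p&gt;The core of such threshold-based alerting can be sketched in a few lines. This is an illustrative toy, not any particular monitoring product: count errors inside a sliding time window and fire a notification once a limit is exceeded.&lt;/p&gt;

```typescript
// Minimal sketch of threshold-based alerting. All names are illustrative.
type Notify = (message: string) => void;

export class ErrorRateMonitor {
  private timestamps: number[] = [];

  constructor(
    private readonly threshold: number, // max errors tolerated in the window
    private readonly windowMs: number,  // sliding window size
    private readonly notify: Notify,    // e.g. send an email or page someone
  ) {}

  recordError(now: number = Date.now()): void {
    this.timestamps.push(now);
    // Keep only errors that fall inside the sliding window.
    this.timestamps = this.timestamps.filter((t) => now - t <= this.windowMs);
    if (this.timestamps.length > this.threshold) {
      this.notify(
        `Error threshold exceeded: ${this.timestamps.length} errors in ${this.windowMs} ms`,
      );
    }
  }
}
```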

&lt;h2&gt;
  
  
  Increasing Release Speed
&lt;/h2&gt;

&lt;p&gt;We ran with this for some months. Regression bugs have been practically eliminated, and with the &lt;strong&gt;increased confidence&lt;/strong&gt;, we have decided to shorten the release cycle. We now release a feature to Integration and Staging in the same week and to Production the next week.&lt;/p&gt;

&lt;p&gt;So far, this is working without issues. While we’re not releasing a finished feature in the same week yet, the systems and practices we’ve set up should allow us to do so in the future.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.redstar.be/how-to-achieve-a-weekly-or-faster-software-release-cycle/"&gt;How to Achieve a Weekly (Or Faster) Software Release Cycle&lt;/a&gt; appeared first on &lt;a href="https://www.redstar.be"&gt;Red Star IT - Helping Managers Tackle Legacy Code&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>blog</category>
    </item>
    <item>
      <title>Common Ways People Destroy Their Log Files</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Tue, 28 Jan 2020 17:00:45 +0000</pubDate>
      <link>https://forem.com/scalyr/common-ways-people-destroy-their-log-files-nem</link>
      <guid>https://forem.com/scalyr/common-ways-people-destroy-their-log-files-nem</guid>
      <description>&lt;p&gt;For this article, I'm going to set up a hypothetical scenario (but based on reality) that needs logging. We're writing an application that automates part of a steel factory. In our application, we need to calculate the temperature to which the steel must be heated. This is the responsibility of the TemperatureCalculator class.&lt;/p&gt;

&lt;p&gt;The class is fed a lot of parameters that come from external sensors (like current temperature of the furnace, weight of the steel, chemical composition of the steel, etc.). The sensors sometimes provide invalid values, forcing us to be creative. The engineers said that, in such a case, we should use the previous value. This isn't something that crashes our application, but we do want to log such an event.&lt;/p&gt;

&lt;p&gt;So the team has set up a simple logging system, and the following line is appended to the log file:&lt;/p&gt;

&lt;pre class=""&gt;An invalid value was provided. Using previous value.&lt;/pre&gt;

&lt;p&gt;Let's explore how this well-meant log message doesn't actually help. In fact, combined with similar messages in our log file, the log file ends up being a giant, useless mess.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jcZvj7vq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://library.scalyr.com/2018/07/19175347/Trash-Fire-Depicting-Way-People-Destroy-Log-Files.png" class="article-body-image-wrapper"&gt;&lt;img class="aligncenter  wp-image-746" src="https://res.cloudinary.com/practicaldev/image/fetch/s--jcZvj7vq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://library.scalyr.com/2018/07/19175347/Trash-Fire-Depicting-Way-People-Destroy-Log-Files.png" alt="" width="246" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Not Logging Descriptively&lt;/h2&gt;

&lt;p&gt;Of course, the first thing you'll see is that we don't have a lot of extra information. What is an invalid value exactly? "Null" would probably be invalid. But what about values we know are too high or too low for our calculations? What's the previous value we'll use? Who provided the invalid value?&lt;/p&gt;

&lt;p&gt;These are probably all things we can find out by looking in other places (other logs, the code, the UI, etc.). But we could make our lives easier by adding some context.&lt;/p&gt;

&lt;p&gt;What if we change the line to&lt;/p&gt;

&lt;pre class=""&gt;The Sensor3000 provided an invalid value (0). This would lead to a DivideByZeroException. Falling back to previous value (88.4).&lt;/pre&gt;

&lt;p&gt;That's a lot better, isn't it?&lt;/p&gt;

&lt;h2&gt;Forgetting Timestamps&lt;/h2&gt;

&lt;p&gt;Say there was an issue far down the line of the production facility. Some batch of steel was of a low quality and had to be discarded. Making steel is a complex process that consists of many steps and many machines, so management wants everyone to check their applications for problems that occurred yesterday between 4 and 6 PM.&lt;/p&gt;

&lt;p&gt;How can we find that in our logging message? We have no way of finding out when the messages occurred. Obviously, the solution is to add timestamps:&lt;/p&gt;

&lt;pre class=""&gt;2018-01-22 20:14:35.9834 The Sensor3000 provided an invalid value (0). This would lead to a DivideByZeroException. Falling back to previous value (88.4).&lt;/pre&gt;

&lt;p class=""&gt;We can now quickly and easily find all messages inside any date and time interval.&lt;/p&gt;

&lt;p&gt;It also provides us some insight into possible performance problems. These two lines could indicate a problem with the network or with the external sensor:&lt;/p&gt;

&lt;pre class=""&gt;2018-01-22 20:16:02.3947 Connecting to Sensor3000
2018-01-22 20:28:44.9771 Connected to Sensor3000
&lt;/pre&gt;

&lt;p class=""&gt;Twelve minutes to connect to the Sensor3000?! We know it's usually under one minute!&lt;/p&gt;

&lt;h2&gt;Not Using Log Levels&lt;/h2&gt;

&lt;p&gt;We now have descriptive log messages, but there's no way of instantly seeing how important the message is. The message about an invalid value could be something we find quite important (although not critical). But let's assume it's surrounded by hundreds of lines like this:&lt;/p&gt;

&lt;pre class=""&gt;2018-01-22 20:13:57.6290 Started heating steel order 08931-09
2018-01-22 20:21:36.9318 Finished heating steel order 08953-03. Heated to 1257°C.&lt;/pre&gt;

&lt;p&gt;These are more informational messages, while others are warnings that something abnormal happened. It's even worse if we have messages like the following, buried in a flood of less important information:&lt;/p&gt;

&lt;pre class=""&gt;2018-01-22 20:23:12.3087 NullReferenceException: Object reference not set to an instance of an object.&lt;/pre&gt;

&lt;p&gt;This is an exception that happened in our application, possibly sending it into an invalid state.&lt;/p&gt;

&lt;p&gt;I think you get the picture. Messages have different levels of importance, which is why log levels were invented. Use them correctly and you'll see messages stand out:&lt;/p&gt;

&lt;pre class=""&gt;2018-01-22 20:13:57.6290 INFO Started heating steel order 08931-09
2018-01-22 20:14:35.9834 WARN The Sensor3000 provided an invalid value (0). This would lead to a DivideByZeroException. Falling back to previous value (88.4).
2018-01-22 20:21:36.9318 INFO Finished heating steel order 08953-03. Heated to 1257°C.
2018-01-22 20:23:12.3087 ERROR NullReferenceException: Object reference not set to an instance of an object.&lt;/pre&gt;

&lt;p&gt;Using log levels correctly also allows you to treat the messages differently. The &lt;code&gt;WARN&lt;/code&gt; message from above might be something you want to email to the engineers or to the technicians of the Sensor3000. The &lt;code&gt;ERROR&lt;/code&gt; message is something you should probably be emailing to your team so you can fix the issue and avoid it in the future.&lt;/p&gt;
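&lt;p&gt;That routing idea can be sketched as follows. The sinks and audiences here are made up for illustration: everything goes to the log file, while warnings and errors are additionally fanned out to different recipients.&lt;/p&gt;

```typescript
// Sketch of level-based routing; sink names are hypothetical.
type Level = "INFO" | "WARN" | "ERROR";

interface Sinks {
  file: string[];         // the regular log file
  engineerMail: string[]; // e.g. the Sensor3000 technicians
  teamMail: string[];     // the development team
}

export function routeLogMessage(level: Level, message: string, sinks: Sinks): void {
  const line = `${new Date().toISOString()} ${level} ${message}`;
  sinks.file.push(line); // every message ends up in the log file
  if (level === "WARN") sinks.engineerMail.push(line);
  if (level === "ERROR") sinks.teamMail.push(line);
}
```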

&lt;h2&gt;Not Logging Enough&lt;/h2&gt;

&lt;p&gt;At the very least, you should be logging exceptions. But when things go wrong, this will probably not be enough. You might need some context as to what happened before the exception occurred, and maybe also what happened afterwards.&lt;/p&gt;

&lt;p&gt;When writing your code, don't just think about how the code is going to run. Also try to think about what information you'd like to see long after the code has run. When you only have an exception message to go off of, finding the cause of an issue will require walking through the code, making a mental model of what happened. This is very hard because there will be all kinds of things that you need to add to that mental model. You'll also need to make a lot of assumptions.&lt;/p&gt;

&lt;p&gt;Logging will show you a good history of what actually happened.&lt;/p&gt;

&lt;h2&gt;Logging the Wrong Things&lt;/h2&gt;

&lt;p&gt;There are certain use cases that developers solve with logging, even though there are better solutions.&lt;/p&gt;

&lt;p&gt;One I see often is logging the performance of problematic methods. While this can be OK for a one-time case, you should avoid having your application (and logs) full of these log messages. Investigate performance profiling tools and application performance monitoring tools. They come at a price but will be better able to help you.&lt;/p&gt;

&lt;p&gt;Here's another example of the wrong things to be logging: user data and behavior. I've seen log files full of JSON content that was sent to the server. At some point in time, this was added so there was user data that could be analyzed. However, this filled the log files and, in doing so, obscured the lines about the application's health. A better option here was to use a user analytics tool, which we ended up switching to.&lt;/p&gt;

&lt;p&gt;Because logging is usually already set up, it's an easy entry point to use for these (and similar) use cases. But logging should be about how your application is running. For other data, there are better solutions.&lt;/p&gt;

&lt;h2&gt;
&lt;img class="size-full wp-image-747 alignright" src="https://res.cloudinary.com/practicaldev/image/fetch/s--vdd43YuB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://library.scalyr.com/2018/07/19175347/Pull-quote-What-is-the-correct-amount-of-logging.png" alt="" width="294" height="182"&gt;Logging Too Much&lt;/h2&gt;

&lt;p&gt;I apologize if this is confusing. First, I'm telling you to make sure you log enough. Then I tell you not to log the wrong things. And now I'm warning you not to log too much.&lt;/p&gt;


&lt;p&gt;So what is the correct amount of logging? This is something you'll have to find out as you go. It also depends on factors like the stability and complexity of the application.&lt;/p&gt;

&lt;p&gt;But I've seen log files where every detail of the application was logged, forcing you to scroll endlessly to get the bigger picture of what was happening. Too much detail will overwhelm you.&lt;/p&gt;

&lt;p&gt;Here, log levels can help again. You could use the lower DEBUG and TRACE levels to have an extremely detailed log in case you need it and write those levels to separate files. But I would only advise logging these levels in production if absolutely necessary.&lt;/p&gt;

&lt;h2&gt;Not Cleaning Up Log Files&lt;/h2&gt;

&lt;p&gt;Once you have decent logging in place, your application will be spitting out log files every day. Depending on the size of your application, this will amount to quite a few megabytes or even gigabytes over time.&lt;/p&gt;

&lt;p&gt;Consider having an automated way of cleaning up log files. You probably won't be interested in log files from more than a year ago. If there's still useful data in those files, a better option is to store this data elsewhere, like I mentioned above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.scalyr.com/blog/log-file-too-big/"&gt;Cleaning up log files&lt;/a&gt; is basic good housekeeping. I have encountered multiple applications in my career that filled up the entire hard disk with log files. This stopped multiple applications on the server from working.&lt;/p&gt;

&lt;p&gt;Most popular frameworks provide convenient configuration options to perform this automatic cleanup.&lt;/p&gt;
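&lt;p&gt;The idea behind that configuration is simple enough to sketch by hand. This is a toy retention policy, not a replacement for your framework's built-in cleanup: delete any log file whose last modification is older than a maximum age.&lt;/p&gt;

```typescript
import * as fs from "fs";
import * as path from "path";

// Toy retention-based cleanup: delete files in `dir` whose last
// modification is older than `maxAgeMs`. Returns the deleted file names.
// Real frameworks (logback, log4net, winston, ...) do this via config.
export function cleanUpLogs(dir: string, maxAgeMs: number, now: number = Date.now()): string[] {
  const deleted: string[] = [];
  for (const name of fs.readdirSync(dir)) {
    const full = path.join(dir, name);
    const stats = fs.statSync(full);
    if (stats.isFile() && now - stats.mtimeMs > maxAgeMs) {
      fs.unlinkSync(full); // older than the retention period: remove it
      deleted.push(name);
    }
  }
  return deleted;
}
```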

&lt;h2&gt;Evaluate and Evolve&lt;/h2&gt;

&lt;p&gt;Inevitably, you will encounter situations where you could have used more logging. And after some time, you will feel that other log messages are no longer necessary.&lt;/p&gt;

&lt;p&gt;This is why your logging is subject to the same process as your production code. Evaluate your logs from time to time and let them evolve. Add more logging when it's deemed necessary, and don't be afraid of removing log statements when you feel they just crowd the log files.&lt;/p&gt;

</description>
      <category>logging</category>
    </item>
    <item>
      <title>How to Get Away with Unit Testing Legacy Code </title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Fri, 20 Dec 2019 07:29:35 +0000</pubDate>
      <link>https://forem.com/petermorlion/how-to-get-away-with-unit-testing-legacy-code-25o9</link>
      <guid>https://forem.com/petermorlion/how-to-get-away-with-unit-testing-legacy-code-25o9</guid>
      <description>&lt;p&gt;A while ago, I did a webinar for TypeMock about unit testing legacy code. It’s about why we want to unit test legacy code, the advantages and disadvantages, and it includes some minor live coding using TypeMock’s Isolator tool.&lt;/p&gt;

&lt;p&gt;You can watch it here:&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/372193404" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;I hope you like it. Let me know what you think!&lt;/p&gt;

</description>
      <category>net</category>
      <category>legacycode</category>
      <category>tdd</category>
      <category>testing</category>
    </item>
    <item>
      <title>Backward Compatibility in Software Development: What and Why</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Wed, 18 Dec 2019 09:10:14 +0000</pubDate>
      <link>https://forem.com/petermorlion/backward-compatibility-in-software-development-what-and-why-1o1e</link>
      <guid>https://forem.com/petermorlion/backward-compatibility-in-software-development-what-and-why-1o1e</guid>
      <description>&lt;p&gt;Backward compatibility in software development is an important concept that is often overlooked, especially in legacy systems. This leads to stressful software updates and regression bugs. But it’s usually not so difficult to avoid these horror scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  What
&lt;/h2&gt;

&lt;p&gt;If you’re unfamiliar with the concept, backward compatibility means making sure a new version of software keeps working with the current version of an external system.&lt;/p&gt;

&lt;p&gt;Backward compatibility is important when you’re going to update your software. In this day and age, it’s very likely that your software integrates with other systems like a database or a service from another team or company.&lt;/p&gt;

&lt;p&gt;When the new version of your software makes calls to or accepts calls from this system in a way that requires that system to change as well, then you have a “breaking change.”&lt;/p&gt;

&lt;p&gt;If, however, you make sure your update works with both the new and the old way of making and accepting calls, then you’re backward compatible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Consequences
&lt;/h2&gt;

&lt;p&gt;If you’re going to introduce a breaking change, you’ll have to coordinate releases. The update to system A will have to be done simultaneously with the update to system B.&lt;/p&gt;

&lt;p&gt;This is a risky situation.&lt;/p&gt;

&lt;p&gt;First, it’s difficult to coordinate releases, especially if another company is involved, and even more so if you’re in different time zones.&lt;/p&gt;

&lt;p&gt;But it’s also stressful because things can go wrong easily. And when they do, it’s difficult to fix the situation. If there’s a bug in the integration, both systems might need to be fixed, requiring even more coordination. Stress leads to frustration, which leads to anger, which leads to bad relationships.&lt;/p&gt;

&lt;p&gt;It’s even worse if one system needs to be rolled back for whatever reason. Then the other needs to be rolled back as well. This might not always be possible, because the new version may have side effects that can’t be undone: new data structures, emails sent, lost data, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;So how can you perform updates that involve breaking changes? Well, you don’t. You make your updates backward compatible.&lt;/p&gt;

&lt;p&gt;Actually, you can perform breaking changes, you just do it in two steps. First, you update a system to a state that it works with both the old and the new way of integrating:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WWYMeWDa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/12/backward-compatibility-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WWYMeWDa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/12/backward-compatibility-1.png" alt=""&gt;&lt;/a&gt;System B accepts call from system A. The old way (the diamond) is still acceptable, but there is also a new way (the circle).&lt;/p&gt;

&lt;p&gt;Then you leave some time for the external system to start using the new way:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GYJDfCVz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/12/backward-compatibility-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GYJDfCVz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/12/backward-compatibility-2.png" alt=""&gt;&lt;/a&gt;System A now calls system B using the new way (the circle).&lt;/p&gt;

&lt;p&gt;When it has switched, you can safely drop support for the old way of integrating:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j9OjVLSl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/12/backward-compatibility-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j9OjVLSl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/12/backward-compatibility-3.png" alt=""&gt;&lt;/a&gt;Support for the old way (the diamond) has now been dropped.&lt;/p&gt;

&lt;p&gt;This removes all stress from deploying a breaking change (though not necessarily from the release itself). And it requires little coordination, just some communication about the timeline for the two steps.&lt;/p&gt;
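&lt;p&gt;In code, step one often amounts to accepting both request shapes for a while. The payload shapes below are hypothetical, purely to make the diamond-and-circle idea concrete: system B handles both, so system A can switch over whenever it’s ready.&lt;/p&gt;

```typescript
// Hypothetical payloads for the two integration styles.
interface OldPayload { customer_name: string }        // the "diamond": old way
interface NewPayload { customer: { name: string } }   // the "circle": new way

// System B accepts both shapes during the transition period.
export function getCustomerName(payload: OldPayload | NewPayload): string {
  if ("customer" in payload) {
    return payload.customer.name; // new way
  }
  return payload.customer_name;   // old way, kept until system A has migrated
}
```

Once system A only sends the new shape, the `OldPayload` branch can be deleted in a later, independent release.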

&lt;h2&gt;
  
  
  Breaking Changes In Legacy Systems
&lt;/h2&gt;

&lt;p&gt;Legacy systems are often plagued with regression bugs: bugs that occur &lt;a href="https://www.redstar.be/new-bugs-on-every-release/"&gt;after every new release&lt;/a&gt;, even though the features worked fine before. In legacy systems it’s important to reduce risks as much as possible. Avoiding breaking changes is one important technique to do so.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.redstar.be/backward-compatibility-in-software-development-what-and-why/"&gt;Backward Compatibility in Software Development: What and Why&lt;/a&gt; appeared first on &lt;a href="https://www.redstar.be"&gt;Red Star IT - Helping Managers Tackle Legacy Code&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>blog</category>
    </item>
    <item>
      <title>New Bugs on Every Release?</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Mon, 09 Dec 2019 08:00:00 +0000</pubDate>
      <link>https://forem.com/petermorlion/new-bugs-on-every-release-2ig0</link>
      <guid>https://forem.com/petermorlion/new-bugs-on-every-release-2ig0</guid>
      <description>&lt;p&gt;Are you afraid of your next software release? Were the previous releases plagued with bugs? Even for features that worked previously? There are ways to avoid this, but they’re often counter-intuitive. Let’s look at what causes these bugs, and how we can break out of this destructive cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Regression Bugs
&lt;/h2&gt;

&lt;p&gt;What you’re experiencing are regression bugs. These are bugs in pieces of the software that worked fine before. The cause can be anything from changed code to a system upgrade.&lt;/p&gt;

&lt;p&gt;If this is happening continuously, this is very frustrating for everyone involved: users, developers and managers. It seems like we’re constantly fighting small fires. It also hinders our ability to improve the product with new features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Typical Reactions
&lt;/h2&gt;

&lt;p&gt;A classic approach to dealing with this problem is tightening the screws on the development team. I’ve often seen policies like reducing the number of releases per month or year, enforcing a stop of development until QA has tested everything thoroughly, and requiring everyone to be present for the release and to monitor it closely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nHvvckK---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/12/Releasing-faster-is-safer-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nHvvckK---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/12/Releasing-faster-is-safer-1.png" alt="Releasing faster is safer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This may give management a feeling that the problem is being tackled, but it remains a frustrating experience for most involved.&lt;/p&gt;

&lt;p&gt;What’s worse, it won’t make too much of a difference.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating Tests
&lt;/h2&gt;

&lt;p&gt;One of the first things I always recommend is to build up a decent &lt;a href="https://www.redstar.be/which-automated-tests-do-i-need/"&gt;automated test suite&lt;/a&gt;. These tests should do what your QA department is doing before every release. Only, the automated tests can be run over and over again. They also take only a fraction of the time it takes QA to test everything.&lt;/p&gt;

&lt;p&gt;This doesn’t mean there is no need for the QA department anymore. They might still find bugs that the test suite didn’t find. But the automated tests give them more time for designing tests in collaboration with the developers, and for running tests that can’t be automated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Release Faster
&lt;/h2&gt;

&lt;p&gt;It may seem frightening, but once you have automated tests, you should slowly but surely be able to release more regularly. The tests will give you the confidence to do this. At a client, we changed from ad-hoc releases about once or twice a month to weekly releases. We’ve automated everything so that we could even move to faster releases, but weekly is fine for us now.&lt;/p&gt;

&lt;p&gt;Strangely, releasing faster is safer. When a release only contains small changes, the chance of breaking things is smaller. And when a bug is introduced, it’s easier to find because there are fewer changes.&lt;/p&gt;

&lt;p&gt;You might fear that your team will introduce a bug in every release, and that 10 bugs in one big release would be preferable to 1 bug every week for 10 weeks straight. But if you combine frequent releases with a good automated test suite, you should find that you won’t have a bug every week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Firefighting
&lt;/h2&gt;

&lt;p&gt;Bug-free software does not exist. So you will still have to fight small fires now and then. That’s why there’s another component you really need to automate: deployment.&lt;/p&gt;

&lt;p&gt;When deploying your new software is done by the click of a button, it shouldn’t be too hard to implement a rollback procedure. That allows you to easily go back to a previous working version when you do have a regression bug.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust and Professionalism
&lt;/h2&gt;

&lt;p&gt;All of the above requires considerable trust in your development team. If you don’t think you can trust them with this, then your problem is bigger than I can help you with. It’s a culture problem, and possibly an HR problem.&lt;/p&gt;

&lt;p&gt;Professional software developers will be able to work in such an environment. They will be eager to!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.redstar.be/new-bugs-on-every-release/"&gt;New Bugs on Every Release?&lt;/a&gt; appeared first on &lt;a href="https://www.redstar.be"&gt;Red Star IT - Helping Managers Tackle Legacy Code&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>blog</category>
    </item>
    <item>
      <title>NestJS &amp; AWS Lambda Without HTTP </title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Wed, 27 Nov 2019 13:32:06 +0000</pubDate>
      <link>https://forem.com/petermorlion/nestjs-aws-lambda-without-http-189c</link>
      <guid>https://forem.com/petermorlion/nestjs-aws-lambda-without-http-189c</guid>
      <description>&lt;p&gt;At a current client, we’re looking to move (most of) our AWS Lambda functions to NestJS. The company has built up an extensive collection of Lambda functions and it’s time to bring some structure and similarity in them.&lt;/p&gt;

&lt;p&gt;But NestJS is geared towards incoming HTTP calls. This is fine if your Lambda function is behind an API Gateway, but is it possible to use NestJS if your Lambda function should be triggered by SNS events?&lt;/p&gt;

&lt;h2&gt;
  
  
  Uniformity?
&lt;/h2&gt;

&lt;p&gt;Those who know me know I’m not a fan of forcing each team and each project in a company to follow the same structure in their code and project organization.&lt;/p&gt;

&lt;p&gt;There is never a one-size-fits-all way of organizing code that works for every team. But that’s a whole different discussion.&lt;/p&gt;

&lt;p&gt;So why would I be OK with using NestJS for all our AWS Lambda functions? Because it’s just about the framework, not about the details. We’re going to use NestJS, which recommends a certain way of programming. But it doesn’t mean we need to write all our code in the same way. There are even functions that won’t be written with NestJS because they’re so small it would be overkill.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is NestJS?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://nestjs.com/"&gt;NestJS&lt;/a&gt; is another JavaScript framework, yes. And while I don’t care for JS framework discussions, it does provide us some great benefits.&lt;/p&gt;

&lt;p&gt;Our Lambda functions were previously written in all kinds of styles, depending on who wrote them. Often, they weren’t very testable.&lt;/p&gt;

&lt;p&gt;NestJS gives us a structure and some guidance that allows for clean code, decoupled components and easier testability.&lt;/p&gt;

&lt;p&gt;What’s nice is that it uses &lt;a href="https://expressjs.com/"&gt;Express&lt;/a&gt;, which we were already using.&lt;/p&gt;

&lt;p&gt;Are there other frameworks out there that provide similar or better benefits? Probably. But NestJS will do the job just nicely.&lt;/p&gt;
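&lt;p&gt;To make the testability benefit concrete, here is a minimal sketch in plain JavaScript of the constructor-injection pattern that NestJS formalizes with its DI container. The &lt;code&gt;TenantService&lt;/code&gt; and repository names are hypothetical, not from our actual codebase:&lt;br&gt;
&lt;/p&gt;

```javascript
// Hypothetical service: its dependency arrives through the constructor,
// so a test can hand in a stub instead of a real database client.
class TenantService {
  constructor(tenantRepository) {
    this.tenantRepository = tenantRepository;
  }

  // Returns true when the repository knows the tenant, false otherwise.
  async isActive(tenantId) {
    const tenant = await this.tenantRepository.findById(tenantId);
    return tenant !== null;
  }
}

// In a test, replace the repository with an in-memory stub:
const stubRepository = {
  findById: async (id) => (id === 'tenant-1' ? { id } : null),
};
const service = new TenantService(stubRepository);
```

&lt;p&gt;Because NestJS resolves such dependencies for you, the same pattern scales to larger graphs of providers without manual wiring.&lt;/p&gt;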

&lt;h2&gt;
  
  
  To HTTP or not to HTTP?
&lt;/h2&gt;

&lt;p&gt;Most of our Lambda functions are triggered by an HTTP call. If you’re not familiar with AWS, you should know that Lambda functions can be started by a variety of triggers: an HTTP call, a record being added to a database, a message being sent to AWS’s Simple Notification Service (SNS), and more.&lt;/p&gt;

&lt;p&gt;In most cases, we use AWS API Gateway, meaning that our Lambda functions are triggered by an HTTP call. The API Gateway forwards the call to the relevant Lambda function.&lt;/p&gt;

&lt;p&gt;However, we have some functions that are only triggered by other types of events. For example, we have a function that is subscribed to an SNS topic. If you don’t know SNS, think of it as a simple messaging system: a publisher sends a message to a topic, and other components can subscribe to that topic.&lt;/p&gt;

&lt;p&gt;So how can we get NestJS to run without the context of an HTTP call?&lt;/p&gt;

&lt;h2&gt;
  
  
  NestJS Without HTTP
&lt;/h2&gt;

&lt;p&gt;In “regular” NestJS, you would bootstrap your application and then “listen” for HTTP calls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In a Lambda function, you can use the &lt;a href="https://www.npmjs.com/package/serverless-http"&gt;serverless-http&lt;/a&gt; package to wrap your NestJS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function bootstrap() {
  const app = await NestFactory.create(AppModule, new ExpressAdapter(expressApp));
  return app;
}

// then, in your handler function:
const app = await bootstrap();
const appHandler = serverlessHttp(app);
return await appHandler(event, context);
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But that doesn’t work if there won’t be any HTTP calls coming in.&lt;/p&gt;

&lt;p&gt;Instead, we can write our Lambda as we would normally and in our handler function we can bootstrap our NestJS application, get the provider we need, and pass on the incoming data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function bootstrap() {
  const app = await NestFactory.createApplicationContext(AppModule);
  return app;
}

export async function handler(event, context) {
  const app = await bootstrap();
  const appService = app.get(AppService);
  await appService.doSomething(event);
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That’s basically it. Instead of having NestJS listen for incoming HTTP calls, we use NestJS for all the other goodies it provides (like dependency injection, separation of concerns and testability) and just get the service we need and pass in the required data.&lt;/p&gt;
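&lt;p&gt;For completeness, here is a sketch of the event side of an SNS-triggered function. SNS delivers each published message as a string under &lt;code&gt;Records[n].Sns.Message&lt;/code&gt;, so the handler unwraps it before passing it on to the provider. The JSON payload shape below is hypothetical:&lt;br&gt;
&lt;/p&gt;

```javascript
// Unwrap the JSON payloads from an SNS trigger event.
// Each record carries the published message as a string in Sns.Message.
function extractSnsMessages(event) {
  return (event.Records || []).map((record) => JSON.parse(record.Sns.Message));
}
```

&lt;p&gt;In the handler above, each extracted message would then be handed to &lt;code&gt;appService.doSomething&lt;/code&gt; instead of the raw Lambda event.&lt;/p&gt;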

</description>
      <category>aws</category>
      <category>javascript</category>
      <category>lambda</category>
      <category>nestjs</category>
    </item>
    <item>
      <title>Avoiding Technical Debt</title>
      <dc:creator>Peter Morlion</dc:creator>
      <pubDate>Mon, 11 Nov 2019 08:00:47 +0000</pubDate>
      <link>https://forem.com/petermorlion/avoiding-technical-debt-14j0</link>
      <guid>https://forem.com/petermorlion/avoiding-technical-debt-14j0</guid>
      <description>&lt;p&gt;Is it possible to avoid technical debt when starting a new project? And if not, should we just give up? Or can we find a way of maintaining quality projects while delivering business value at a constant pace?&lt;/p&gt;

&lt;h2&gt;
  
  
  On Technical Debt
&lt;/h2&gt;

&lt;p&gt;If you’re unfamiliar with the concept, you can read about &lt;a href="https://www.redstar.be/what-is-technical-debt/"&gt;technical debt&lt;/a&gt; first. In short, it’s the cost that you will have to pay later because you’re choosing a technical solution that is cheaper to implement now. It’s like you’re taking a loan that you will pay off later.&lt;/p&gt;

&lt;p&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W3ZH3ojr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.redstar.be/wp-content/uploads/2019/11/Well-crafted-piece-of-code.png" alt="Well-crafted piece of code"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Planned Technical Debt
&lt;/h2&gt;

&lt;p&gt;Technical debt can be a conscious decision, like a real loan. It can make sense to incur technical debt. For example, if you need to find out if a certain feature will be used and is worth developing further. In that case, you might want to implement a minimal version quickly, to measure the interest. If it’s popular, you can pay off the technical debt by implementing it better and then expanding the feature.&lt;/p&gt;

&lt;p&gt;This is planned or intentional technical debt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unintentional Technical Debt
&lt;/h2&gt;

&lt;p&gt;But in software, it can often take some time before we even realize we’ve taken out the loan. We can end up in a situation where we recognize our code isn’t ideal. We’ve incurred technical debt without realizing it.&lt;/p&gt;

&lt;p&gt;This is unintentional technical debt. It’s almost unavoidable because the world of software development evolves so fast, and because our requirements and standards change over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep Technical Debt In Mind
&lt;/h2&gt;

&lt;p&gt;So technical debt can’t be avoided. If it can, I have yet to see a project that managed it.&lt;/p&gt;

&lt;p&gt;But if it can’t be avoided, what’s the next best thing to do? Well, any company that’s serious about software development should acknowledge this fact and take it into account in its process.&lt;/p&gt;

&lt;p&gt;This means not trying to get the perfect design from the start. Chasing perfection up front can lead to over-engineering, lost time and money, and a harder time making changes later.&lt;/p&gt;

&lt;p&gt;This doesn’t mean we shouldn’t pay attention to quality and detail. A well-crafted piece of code isn’t a piece of code that is right immediately. It’s a piece of code that can easily be changed. If it’s right immediately, but can’t be changed easily, you’ll encounter problems later on. Because all software has to change eventually.&lt;/p&gt;

&lt;p&gt;If the software isn’t doing the correct thing, but can easily be changed, you can evolve it so that it keeps up with changing requirements.&lt;/p&gt;

&lt;p&gt;So you should aim for faster feedback and leave room for paying off the technical debt you’re sure to build up. That means explicitly allowing time for paying it off.&lt;/p&gt;

&lt;p&gt;Give the team time to continuously improve the code. If you don’t allow for this, technical debt can become too large to pay off, team morale will suffer, and so will productivity.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.redstar.be/avoiding-technical-debt/"&gt;Avoiding Technical Debt&lt;/a&gt; appeared first on &lt;a href="https://www.redstar.be"&gt;Red Star IT - Helping Managers Tackle Legacy Code&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>technicaldebt</category>
    </item>
  </channel>
</rss>
