<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: muthandir</title>
    <description>The latest articles on Forem by muthandir (@muthandir).</description>
    <link>https://forem.com/muthandir</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F403131%2F478f9dff-d3ee-4a71-8476-28c6b6d70da0.jpg</url>
      <title>Forem: muthandir</title>
      <link>https://forem.com/muthandir</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/muthandir"/>
    <language>en</language>
    <item>
      <title>On Communication.</title>
      <dc:creator>muthandir</dc:creator>
      <pubDate>Tue, 05 Apr 2022 13:58:35 +0000</pubDate>
      <link>https://forem.com/muthandir/on-communication-2ke4</link>
      <guid>https://forem.com/muthandir/on-communication-2ke4</guid>
      <description>&lt;p&gt;“Ineffective communication is the primary contributor to project failure &lt;strong&gt;one third of the time&lt;/strong&gt; and had a negative impact on project success more than &lt;strong&gt;half the time&lt;/strong&gt;.” - &lt;em&gt;PMI, Project Management Institute&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;Communication in a project happens in different ways (sync: phone calls, meetings; or async: email, chat (I think chat is async, but that's debatable (not technically, but in practice - wow, that's the 3rd nested bracket))). But I believe communication is more than just participating in a conversation over a medium. Imagine a continuous deployment flow where committed source code is deployed to QA servers every 30 minutes. If a simple Slack integration generates a notification with the commit messages every time a deployment completes successfully, then a QA engineer knows exactly when and what to test at any given time. Now remove the QA engineer and imagine an automated test suite receiving a similar message and immediately starting test execution. This makes me think that a well-established communication framework creates an information network and bridges the gap between stakeholders. Note that the nodes (or endpoints) in such a network can be humans and/or machines.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Event Driven Architecture</title>
      <dc:creator>muthandir</dc:creator>
      <pubDate>Tue, 29 Mar 2022 13:57:59 +0000</pubDate>
      <link>https://forem.com/muthandir/event-driven-architecture-3pcp</link>
      <guid>https://forem.com/muthandir/event-driven-architecture-3pcp</guid>
      <description>&lt;p&gt;When we create a software, we try to satisfy most (if not all) requirements that are critical to the business. But we all know there will always be new requirements (sometimes a very surprising one) that will make us scratch our head in order to implement. It doesn’t really matter whether we have a big monolith or microservices, we don’t want to clog up our application servers with actions that are not crucial to the core business activity. For example, if you click a button to book a flight, there are basic things the system must do at the transaction time. To name a few:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Charge the credit card (requires integration with a financial institution)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make the actual booking (requires integration with a travel fulfiller)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create transactional records (for reporting, invoicing etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Send an email to the customer&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I will keep the list short for the sake of simplicity, but you can guess how complicated it may get. From a business perspective, you will have to decide what a booking means in the most granular terms for your business, and what makes it incomplete. There is no right or wrong answer here, but there is a common ground you will converge on. Most people would say #1 is crucial because this is how the business makes money. #2 is crucial because this is how the customer will board the airplane. #3 is crucial because this is the backbone of your system for bookkeeping, issue management etc. You may argue that #4 is also important, but hey, maybe the email could drop into the customer’s inbox a few minutes later.&lt;/p&gt;

&lt;p&gt;Ok, so how will we implement it? Below, I’ll break down the different parts of such a system in an event-driven architecture using AWS services:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Message publisher(s): your applications produce events (like the booking above) and publish the event messages. This is about notifying the system that something interesting just happened. In our case we built a custom NPM package that publishes simple AWS SNS messages for all (or a filtered subset of) the HTTP requests our server answers. This way any incoming request (any user action) gets the ability to produce an event in the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Message broker(s): the broker filters and routes the messages to the appropriate listeners (or subscribers). AWS SNS is one of the most straightforward answers in the Amazon world for building an application that reacts to high-throughput, low-latency events published by other applications. With SNS, you can route your messages to all subscribers (including Amazon SQS queues, AWS Lambda functions, HTTPS endpoints, and Amazon Kinesis Data Firehose), which gives you a basic fan-out implementation. Alternatively, you can do topic- or attribute-based filtering to route your messages to specific subscribers. That sounds very tempting, but filtering your messages (using topics or attribute policies) can lead to very complicated rules that are hard to maintain in a real-world scenario. Plus, I don’t want to change a property in my infrastructure every time event processing requirements change. In most cases, I tend to do a fan-out and inspect the messages in the workers using an NPM library I built to filter out the messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Recipient(s): for easy throttling and delivery reliability we added AWS SQS in between our worker applications and SNS. In other words, SNS sends the event messages to SQS queues, and the worker apps poll the SQS queues for event processing. This also helps with scaling, because SQS is highly scalable: if you need to process more messages per second, all you need to do is fire up another worker server and let it fetch messages from SQS.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
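
&lt;p&gt;To make the worker side of the fan-out concrete, here is a minimal sketch of the filter-in-the-worker idea (the { type, payload } event shape and the handler registry are illustrative assumptions; the in-house NPM library mentioned above is not public):&lt;/p&gt;

```javascript
// Minimal sketch of worker-side filtering after an SNS fan-out.
// The { type, payload } event shape and the handler registry are
// illustrative assumptions, not the author's actual library.
function createDispatcher() {
  const handlers = {};
  return {
    // Register a handler for one event type this worker cares about.
    on: function (eventType, handler) {
      handlers[eventType] = handler;
    },
    // Inspect an incoming message and run the matching handler;
    // events this worker does not subscribe to are dropped.
    dispatch: function (message) {
      const handler = handlers[message.type];
      if (handler) {
        return handler(message.payload);
      }
      return null;
    },
  };
}
```

&lt;p&gt;Each worker registers handlers only for the events it cares about and quietly ignores the rest, so the routing rules live in code rather than in SNS filter policies.&lt;/p&gt;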

&lt;p&gt;In such a system, you may easily get lost trying to trace a transaction going back and forth between layers, so you will want solid logging and tracing abilities. You can find more information about logging in &lt;a href="https://dev.to/muthandir/application-logging-and-production-monitoring-4h26"&gt;this&lt;/a&gt; post.&lt;/p&gt;

&lt;p&gt;In the example above, there are still a few things you will want to do after the time of transaction:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Check the airline system approximately 5 minutes after the time of purchase (in some cases airline systems create important information within the first few minutes after you make a booking, like a free upgrade, free lounge access for corporate clients, the last ticketing date and so on)&lt;/li&gt;
&lt;li&gt; Send a reminder email to the customer prior to the flight (generally 24h before)&lt;/li&gt;
&lt;li&gt; Let’s add some more fun and imagine that this is a corporate purchase and the client wants bi-weekly invoices with the total amount of the transactions that occur in each 2-week time frame.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These requirements fall into the territory of deferred and batch executions, which I will explain in the next post.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>node</category>
      <category>eventdriven</category>
      <category>aws</category>
    </item>
    <item>
      <title>Trying to create an onboarding document for junior devs. What are the must-have items?</title>
      <dc:creator>muthandir</dc:creator>
      <pubDate>Fri, 25 Mar 2022 20:30:16 +0000</pubDate>
      <link>https://forem.com/muthandir/trying-to-create-an-orientation-document-for-junior-devs-what-are-the-must-have-items-34ag</link>
      <guid>https://forem.com/muthandir/trying-to-create-an-orientation-document-for-junior-devs-what-are-the-must-have-items-34ag</guid>
      <description>&lt;p&gt;Here are my 2 cents: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First of all, I think the onboarding period is often too short. I have observed that 3-4 weeks of preparation is of most help to new employees (with little or no professional experience).&lt;/li&gt;
&lt;li&gt;Onboarding activities should be well documented.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a real-world example I created for a Node.js backend developer; the program takes about a month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Development methodology&lt;/li&gt;
&lt;li&gt;Programming Paradigms (event loop, promise chaining (for legacy code), async-await, functional programming, ES6, exception handling)&lt;/li&gt;
&lt;li&gt;Most frequently used libraries (mocha, sequelize, express, async etc.)&lt;/li&gt;
&lt;li&gt;Version Control System (Git)&lt;/li&gt;
&lt;li&gt;Tools (VS Code, ESLint, debugging)&lt;/li&gt;
&lt;li&gt;In-house libraries&lt;/li&gt;
&lt;li&gt;Actual Code Structure&lt;/li&gt;
&lt;li&gt;A task in the sprint (TBD later on)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>hire</category>
      <category>onboarding</category>
    </item>
    <item>
      <title>How do you handle the data ownership of a multi-tenant product?</title>
      <dc:creator>muthandir</dc:creator>
      <pubDate>Thu, 24 Mar 2022 18:23:57 +0000</pubDate>
      <link>https://forem.com/muthandir/how-do-you-handle-the-data-ownership-of-a-multi-tenant-product-4g4i</link>
      <guid>https://forem.com/muthandir/how-do-you-handle-the-data-ownership-of-a-multi-tenant-product-4g4i</guid>
      <description>&lt;p&gt;Multi tenancy is very crucial for most of the SAAS products and I think, data ownership is one of the most important aspects of multi tenancy. So how do you deal with the data ownership? &lt;/p&gt;

&lt;p&gt;(Note: by "data ownership", I mean:&lt;br&gt;
-customer1 can only read customer1's data (and can never access customer2's data) or&lt;br&gt;
-customer1 can only upsert data for customer1.&lt;/p&gt;

&lt;p&gt;or in an enterprise situation,&lt;br&gt;
-user1 of customer1 can only read customer1's data (and can never access customer2's data) or&lt;br&gt;
-user1 of customer1 can only upsert data for customer1&lt;/p&gt;
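
&lt;p&gt;One common approach (an illustration of the question, not a recommendation): derive the tenant id from the authenticated session and force it into every query filter, so application code can never reach another tenant's rows. The function and field names here are my own assumptions:&lt;/p&gt;

```javascript
// Illustrative sketch: force the session's tenant id into every query
// filter, so a caller can never read or upsert another tenant's data.
function scopeToTenant(session, where) {
  if (!session || !session.tenantId) {
    throw new Error("No tenant on session");
  }
  // Last-wins merge: tenantId always overrides whatever the caller passed.
  return Object.assign({}, where, { tenantId: session.tenantId });
}
```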

</description>
      <category>saas</category>
    </item>
    <item>
      <title>Application Logging and Production Monitoring</title>
      <dc:creator>muthandir</dc:creator>
      <pubDate>Wed, 23 Mar 2022 18:51:12 +0000</pubDate>
      <link>https://forem.com/muthandir/application-logging-and-production-monitoring-4h26</link>
      <guid>https://forem.com/muthandir/application-logging-and-production-monitoring-4h26</guid>
      <description>&lt;p&gt;In my old days, I used to work in the corporate world as a developer, tech lead, architect etc. Back in those days I rarely worried about how we should do logging &amp;amp; monitoring. We always had tools, means and ways to get end 2 end visibility.&lt;/p&gt;

&lt;p&gt;Later on, I co-founded a startup, and my partner and I had to pick our tech stack. Me being a .NET guy forever and him being a Laravel pro, we went with Node.js 🙂 (for several reasons, but that is another story).&lt;/p&gt;

&lt;p&gt;Back to logging: what we needed was the ability to record the entire lifetime of an incoming request. This means the request body/header info, service layer calls and their responses, DB calls and so on. Additionally, we wanted to use microservices back then (again, another story with lots of pros and cons), so the entire lifetime also includes the communication between the microservices, back and forth. So we needed a request id, and with it we could filter the logs and sort by time. Let me break it down into separate steps:&lt;/p&gt;

&lt;p&gt;UI: we use a SPA on our front-end. The UI makes HTTPS calls to our API.&lt;/p&gt;

&lt;p&gt;API layer: our business services in the APIs are instantiated using factories which inject the dependencies. So in theory you could create a custom logger, enrich it with the request id and inject the logger into the business services for the developers, so they can log whenever they need to. But logging does not feel like something we should leave up to individual preferences; what we needed was an automated way to flush data. Additionally, logging statements reduce readability and could potentially cause bugs (in theory, business logic should not be “polluted” with extra logging code). To accomplish the task, our factories, instead of injecting the logger into the services, wrap the service functions with a self-logging capability (using an in-house logging library) which simply adds another layer of JavaScript promise to capture the input parameters and resolve the response objects. This way, all input and return values are available in the in-house logging library for enriching (method name, function start/end time, server IP, microservice name, elapsed duration etc.) and logging. We, as the developers, don’t have to worry about it, and we know that the system will capture everything that is needed in a well-formatted fashion.&lt;/p&gt;
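
&lt;p&gt;A minimal sketch of that wrapping idea (the function names and the log entry fields are my assumptions; the real in-house library does more, e.g. server IP and microservice name enrichment):&lt;/p&gt;

```javascript
// Sketch of a factory wrapping a service method with self-logging:
// capture the arguments, resolve the result, record timing, then hand
// the result back unchanged. The log entry fields are assumptions.
function withLogging(serviceName, methodName, fn, logger) {
  return function () {
    const args = Array.prototype.slice.call(arguments);
    const start = Date.now();
    // Promise.resolve makes the wrapper work for sync and async methods.
    return Promise.resolve(fn.apply(null, args)).then(function (result) {
      logger({
        service: serviceName,
        method: methodName,
        input: args,
        output: result,
        durationMs: Date.now() - start,
      });
      return result;
    });
  };
}
```

&lt;p&gt;The business code never sees the logger; the factory decides which methods come out wrapped.&lt;/p&gt;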

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sxGYsa2F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wluf00wgsfw7ycarv0cm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sxGYsa2F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wluf00wgsfw7ycarv0cm.png" alt="Flat file logging vs searchable logging" width="840" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Microservice communication: we created another in-house library, a forked version of “Request Promise Native”. It helps our developers by injecting out-of-band request-id info so the target microservice can read it and use it throughout the lifetime of its underlying services. This means all our microservices have the capability to read the incoming request ids and forward them to outgoing microservice calls.&lt;/p&gt;

&lt;p&gt;Logger: a word of caution, please mask your messages and don’t log any sensitive data! I’ve seen logs with PII or credit card info in the past; please don’t do it. Your users depend on you, and this is your responsibility! Anyway, there are tons of good logging libraries out there. We decided to use Winston because:&lt;br&gt;
1. Winston is good&lt;br&gt;
2. It has Graylog2 support, which brings us to our next item:&lt;/p&gt;
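
&lt;p&gt;On masking: a minimal sketch of redacting sensitive fields before a log entry ever reaches Winston/Graylog (the key list is my assumption; real masking should also walk nested objects):&lt;/p&gt;

```javascript
// Shallow masking sketch: redact sensitive keys before a log entry
// reaches the logger. The key list is an assumption, and production
// code should also handle nested objects.
const SENSITIVE_KEYS = ["password", "cardNumber", "cvv", "ssn"];

function maskSensitive(entry) {
  const masked = {};
  Object.keys(entry).forEach(function (key) {
    if (SENSITIVE_KEYS.indexOf(key) !== -1) {
      masked[key] = "***";
    } else {
      masked[key] = entry[key];
    }
  });
  return masked;
}
```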

&lt;p&gt;Log repository: in the last 10 years or so, I don’t remember a single case where I had to check the server log files for monitoring/debugging purposes. It is just so impractical to walk through those files, one log line after another, all coming from different requests. It simply won’t help; in fact, at one of the US banks I used to work at, the DevOps folks suggested that we could simply stop creating them. Of course, that doesn’t mean you can stop logging. ‘Au contraire!’ It is very important that you have a log repository where you can search, filter, export and manage your logs. So we reduced our options to the following tools:&lt;br&gt;
-Splunk&lt;br&gt;
-Graylog&lt;br&gt;
We selected Graylog because we had experience administrating a Graylog server, it is an open-source tool (meaning much lower costs, as it just needs a mid-sized server) and it does the job.&lt;/p&gt;

&lt;p&gt;Your logs will show you lots of insights about your application and will potentially help you uncover bugs. My team regularly walks through the logs before each release to understand whether we are about to introduce any new unexpected errors. With a tool like Graylog, you can create alerts for different scenarios (HTTP response codes, app error codes etc.), and this way you will know there is a problem even before the customer sees the error message. Your QA team can insert request ids in the tickets so the developers can trace exactly what happened at the time of the test. If you want to dive deeper: I remember using Splunk logs for fraudulent-behavior mitigation through near-real-time and batch analysis. Whatever we use the logs for, we want them, embrace them, love them :)&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>graylog</category>
      <category>architecture</category>
      <category>node</category>
    </item>
  </channel>
</rss>
