<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Aastikta Sharma</title>
    <description>The latest articles on Forem by Aastikta Sharma (@aastikta).</description>
    <link>https://forem.com/aastikta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F445724%2F34738790-1549-4542-8609-b7fcdccf30b0.jpg</url>
      <title>Forem: Aastikta Sharma</title>
      <link>https://forem.com/aastikta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aastikta"/>
    <language>en</language>
    <item>
      <title>Design Paradigms of a Database Service: High Availability and Fault Tolerance</title>
      <dc:creator>Aastikta Sharma</dc:creator>
      <pubDate>Wed, 09 Sep 2020 15:48:32 +0000</pubDate>
      <link>https://forem.com/aastikta/design-paradigms-of-a-database-service-high-availability-and-fault-tolerance-6pd</link>
      <guid>https://forem.com/aastikta/design-paradigms-of-a-database-service-high-availability-and-fault-tolerance-6pd</guid>
      <description>&lt;h2&gt;
  
  
  INTRODUCTION
&lt;/h2&gt;

&lt;p&gt;Designing a highly available and fault-tolerant database can be one of the most challenging tasks for any service. There is never a “one size fits all” approach to achieving these reliably, and oftentimes, based on organizational and business needs, one of them is prioritized over the other. But keeping a fine balance between the two can prevent disastrous failovers and complete loss of data. This article focuses on clearly defining the two paradigms and understanding the basics of various techniques &amp;amp; principles that help achieve them.&lt;/p&gt;

&lt;h2&gt;
  
  
  HIGH AVAILABILITY
&lt;/h2&gt;

&lt;p&gt;High availability is the ability of a system/service to continue providing service while minimizing downtime. When designing a highly available database service, the following key principles are kept in mind:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Avoiding single points of failure: Adding redundancy prevents the failure of one part of the system from taking down the entire service. Creating a failover service or a standby is very helpful for avoiding single points of failure: when the primary fails, the standby service can start taking the traffic. This can be achieved by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hot failovers: All the servers (primary and backup) are running simultaneously, but traffic is routed to only one server at a time. In case of failure, traffic gets directed to the backup server.&lt;/li&gt;
&lt;li&gt;Cold failovers: The backup server is started only after the primary server has completely shut down.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clustering: For a highly available database service, clustering allows resources to be called from other nodes within a cluster in case of a failure. A database cluster includes several nodes that communicate with each other; during a failure in one of the nodes, the rest of the cluster can operate normally. The cluster continues to serve traffic while the faulty node recovers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Load Balancing: This is an important principle for a highly available database service because during a failure, the load balancer is what detects the failed server and redirects traffic to the healthy ones. Apart from high availability, a load balancer also adds stability to the entire system by spreading load evenly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redundancy: Geographic redundancy is very important for a database service and helps prevent outages &amp;amp; loss of data due to natural disasters.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
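&lt;p&gt;The hot-failover idea above can be sketched in a few lines of Python. This is a minimal illustration, not production routing logic; the &lt;code&gt;is_healthy()&lt;/code&gt; and &lt;code&gt;handle()&lt;/code&gt; methods are hypothetical names for a server’s health check and request handler:&lt;/p&gt;

```python
class HotFailoverRouter:
    """Route traffic to the primary; fall back to the hot standby."""

    def __init__(self, primary, standby):
        self.primary = primary
        self.standby = standby

    def route(self, request):
        # Hypothetical health check on the primary server.
        if self.primary.is_healthy():
            return self.primary.handle(request)
        # The hot standby is already running, so we can redirect
        # immediately instead of booting a cold backup.
        return self.standby.handle(request)
```

&lt;p&gt;Because both servers are running, the switchover cost is just the health-check decision; a cold failover would additionally pay the standby’s startup time.&lt;/p&gt;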

&lt;h2&gt;
  
  
  FAULT TOLERANCE
&lt;/h2&gt;

&lt;p&gt;Fault tolerance is the ability of a service/system to continue operating in spite of failures in one of its own components. When designing a fault-tolerant database system, techniques should be applied across several categories, such as replication, failure detection, and throttling.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Data Replication: In order to maintain high durability of data, storing multiple copies of data is preferred. Some of the popular ways to replicate the data are as follows (let’s assume there are N replicas for the database):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Synchronous Replication: When a client sends a write request, the database writes synchronously to all N replicas before acknowledging the client. The leader receives all requests, applies them in order, and replicates the data to the followers; the client is acknowledged only once the replicas have confirmed the write.&lt;/li&gt;
&lt;li&gt;Paxos-based replication: It is very similar to synchronous replication, but this kind of replication requires acknowledgement from only a majority of the nodes rather than all of them.&lt;/li&gt;
&lt;li&gt;Leader–Follower Replication: This is a widely popular replication methodology, used for example by MySQL. The client writes data to the leader, and the leader then asynchronously writes the data to all of its followers.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disaster Recovery: This is the ability to recover from large-scale failures with as little disruption to the service as possible. The key objectives of a good disaster recovery plan center on the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). There are two major elements for achieving these:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Backup: Always keep copies of important data so it can be restored during disaster recovery. One of the major concepts built on top of this is Point-in-Time Recovery, which restores a database to a previously known good point.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tolerance: Deploy two or more database services that are geographically far apart and continuously monitor each other’s health. During a failure, traffic can be switched over to the healthy service without interruption.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
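&lt;p&gt;The majority-acknowledgement idea behind Paxos-based replication can be sketched as a toy quorum write. This is only an illustration of the counting rule, not a Paxos implementation; each replica is assumed to expose a hypothetical &lt;code&gt;write()&lt;/code&gt; method returning success or failure:&lt;/p&gt;

```python
def quorum_write(replicas, record):
    """Acknowledge a write once a majority of the N replicas accept it.

    The write succeeds even if a minority of replicas are down, which
    is what lets majority-based replication tolerate node failures.
    """
    acks = sum(1 for replica in replicas if replica.write(record))
    majority = len(replicas) // 2 + 1
    return acks >= majority
```

&lt;p&gt;With N = 3 replicas, the write is acknowledged as long as 2 of them confirm it, so one replica can fail without blocking writes.&lt;/p&gt;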

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;Despite the best intentions, failure scenarios are inevitable; preparing for them and having processes in place helps mitigate disastrous outcomes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"This article was originally published on my personal &lt;a href="https://aastikta.substack.com/welcome"&gt;blog&lt;/a&gt;. Head over there if you like this post and want to read others like it."&lt;/em&gt;&lt;/p&gt;

</description>
      <category>wecoded</category>
      <category>database</category>
      <category>design</category>
      <category>serverless</category>
    </item>
    <item>
      <title>High Level Overview of Load Balancing Algorithms</title>
      <dc:creator>Aastikta Sharma</dc:creator>
      <pubDate>Wed, 26 Aug 2020 21:48:10 +0000</pubDate>
      <link>https://forem.com/aastikta/high-level-overview-of-load-balancing-algorithms-3mfh</link>
      <guid>https://forem.com/aastikta/high-level-overview-of-load-balancing-algorithms-3mfh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Load balancing is the process of evenly distributing network load across several servers. It helps meet demand during peak traffic hours by spreading the work uniformly. The servers can be in a cloud, in a data center, or on-premises, and can be either physical or virtual. Some of the main functions of a load balancer (LB) are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Routes data efficiently&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prevents server overloading&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Performs health checks for the servers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provisions new server instances under heavy traffic&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Types of Load Balancing Algorithms
&lt;/h2&gt;

&lt;p&gt;In the seven-layer OSI model, load balancing occurs between layer 4 (the transport layer) and layer 7 (the application layer). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftvejff05wr6pm603x0oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftvejff05wr6pm603x0oc.png" alt="Alt Text" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The different types of LB algorithms are effective at distributing network traffic depending on what the traffic looks like, i.e. whether it is network-layer traffic or application-layer traffic. &lt;/p&gt;

&lt;p&gt;Network-layer traffic is routed by the LB based on attributes such as TCP port and IP addresses.&lt;/p&gt;

&lt;p&gt;Application-layer traffic is routed based on additional attributes such as HTTP headers and SSL session information, which even gives LBs content-switching capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network Layer Algorithms
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Round Robin Algorithm
&lt;/h3&gt;

&lt;p&gt;The traffic load is distributed to the first available server, which is then moved to the back of the queue. If the servers are identical and there are no persistent connections, this algorithm can prove effective. There are two major variants of the Round Robin algorithm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Weighted Round Robin: If the servers are not of identical capacity, this algorithm can be used to distribute load. Weights or efficiency parameters are assigned to all the servers in a pool, and load is distributed in the same cyclic fashion in proportion to those weights.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dynamic Round Robin: The weights assigned to a server to reflect its capacity can also be calculated at runtime. Dynamic Round Robin sends requests to a server based on these runtime weights.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
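&lt;p&gt;The weighted variant above fits in a few lines of Python. This is a minimal sketch; the server names and weights are made up, and real load balancers use smoother interleavings than simple expansion:&lt;/p&gt;

```python
import itertools

def weighted_round_robin(servers):
    """Yield servers in a repeating cycle proportional to their weights.

    `servers` is a list of (name, weight) pairs; a server with weight 2
    receives twice as many requests as one with weight 1.
    """
    expanded = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)

# Usage: server "a" has twice the capacity of server "b".
rotation = weighted_round_robin([("a", 2), ("b", 1)])
```

&lt;p&gt;Calling &lt;code&gt;next(rotation)&lt;/code&gt; repeatedly yields a, a, b, a, a, b, … so the more capable server absorbs two-thirds of the traffic.&lt;/p&gt;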

&lt;h3&gt;
  
  
  Least Connections Algorithm
&lt;/h3&gt;

&lt;p&gt;This algorithm tracks the number of active connections per server over a given window and directs incoming traffic to the server with the fewest connections. This is very helpful in scenarios where persistent connections are required. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Weighted Least Connections Algorithm: This is similar to the Least Connections Algorithm above, but apart from the number of active connections to a server, it also takes server capacity into account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Least Response Time Algorithm: This is again similar to the Least Connections Algorithm, but it also considers the response time of the servers. The request is sent to the server with the lowest response time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
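&lt;p&gt;Both the plain and weighted selection rules above can be sketched in one small function. The connection counts and capacities here are hypothetical inputs that a real balancer would track itself:&lt;/p&gt;

```python
def least_connections(connections, capacities=None):
    """Pick the server with the fewest active connections.

    `connections` maps server name to its current active-connection
    count. With `capacities` given, this becomes the weighted variant:
    connection counts are compared relative to each server's capacity.
    """
    if capacities is None:
        return min(connections, key=connections.get)
    return min(connections, key=lambda s: connections[s] / capacities[s])
```

&lt;p&gt;In the weighted case, a server with 10 connections but 10x the capacity is considered less loaded than a server with 3 connections and unit capacity.&lt;/p&gt;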

&lt;h3&gt;
  
  
  Hashing Algorithm
&lt;/h3&gt;

&lt;p&gt;The different request parameters are used to determine where the request will be sent. The different types of algorithms based on this are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Source/Destination IP Hash &lt;br&gt;
The source and destination IP addresses are hashed together to determine the server that will serve the request. In case of a dropped connection, the request can be redirected to the same server upon retry. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;URL Hash &lt;br&gt;
The request URL is used for performing hashing and this method helps in reducing duplication of server caches by avoiding storing the same request object in many caches.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
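&lt;p&gt;The source/destination IP hash above can be sketched as follows; the stable hash-to-server mapping is what lets a retried request land on the same server. The choice of SHA-256 is arbitrary for illustration:&lt;/p&gt;

```python
import hashlib

def ip_hash(src_ip, dst_ip, servers):
    """Map a (source, destination) address pair to a stable server.

    Hashing the two addresses together gives a deterministic choice,
    so retries of the same flow reach the same backend.
    """
    digest = hashlib.sha256(f"{src_ip}-{dst_ip}".encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

&lt;p&gt;A URL hash works the same way with the request URL as the hash input, which keeps each cached object on a single server.&lt;/p&gt;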

&lt;h3&gt;
  
  
  Miscellaneous Algorithms
&lt;/h3&gt;

&lt;p&gt;There are a few other algorithms as well which are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Least Bandwidth Algorithm: The load balancer selects the server consuming the least bandwidth over a recent measurement interval.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Least Packets Algorithm: Similar to the above, the load balancer directs traffic to the server that is transmitting the fewest packets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Custom Load Algorithm: The load balancer selects the server based on its current load, which can be determined from memory usage, CPU usage, response time, number of requests, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Application Layer Algorithms
&lt;/h2&gt;

&lt;p&gt;At this layer, traffic can be distributed based on the contents of the request, so LBs can make much more informed decisions. The server response can be tracked as well, since it travels back through the LB, which helps determine server load much more effectively. &lt;/p&gt;

&lt;p&gt;One of the most significant algorithms used at this layer is the Least Pending Request Algorithm. It directs incoming HTTP(S) requests to the most available server, i.e. the one with the fewest pending requests. By monitoring server load this way, the algorithm is helpful for absorbing sudden spikes in requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;These are some of the well-known load balancing algorithms. When selecting the most suitable one, a number of factors need to be considered, e.g. high traffic, sudden spikes, etc. A good choice of algorithm helps maintain the reliability and performance of any application; hence, a good understanding of these will prove helpful when designing large-scale distributed systems.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you like the post, share and subscribe to the &lt;a href="https://aastikta.substack.com/p/high-level-overview-of-load-balancing"&gt;newsletter&lt;/a&gt; to stay up to date with tech/product musings.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(The contents of this blog are of my personal opinion and/or self-reading a bunch of articles and in no way influenced by my employer.)&lt;/em&gt;&lt;/p&gt;

</description>
      <category>technology</category>
      <category>design</category>
      <category>architecture</category>
      <category>wecoded</category>
    </item>
    <item>
      <title>Patterns to build robust and highly available APIs</title>
      <dc:creator>Aastikta Sharma</dc:creator>
      <pubDate>Sun, 16 Aug 2020 23:19:40 +0000</pubDate>
      <link>https://forem.com/aastikta/patterns-to-build-robust-and-highly-available-apis-3hhc</link>
      <guid>https://forem.com/aastikta/patterns-to-build-robust-and-highly-available-apis-3hhc</guid>
      <description>&lt;p&gt;API is the building block of any client-server communication by helping exchange information in the form of request-response pattern. In any distributed system, it becomes immensely important to build APIs that are robust in nature and are highly available even in the face of a network issue. This article will summarize a couple of  good practices that helps in developing highly available robust APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Idempotency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the face of a network failure, an API must respond in a consistent way once the system comes back up. One of the most common issues in the distributed-systems world is that a failure on either the client or server side leads to a retry of the API operation. For such scenarios, APIs should be built to be idempotent, meaning that no matter how many times you call the API with an identical request, the response remains the same, or, put better, the effect of the request on the server is the same as if only a single request had been made. &lt;/p&gt;

&lt;p&gt;This can be achieved using idempotency keys. During client-server communication, the client generates a unique key to identify a request and sends it to the server. In case of failure, if the client retries the same request, the server can reply with the previously cached result if it has already seen the idempotency key for that operation.&lt;/p&gt;
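&lt;p&gt;A minimal sketch of server-side idempotency-key handling (the class and method names are made up for illustration; a real service would also persist the cache and expire old keys):&lt;/p&gt;

```python
class IdempotentServer:
    """Cache responses by idempotency key so retries are safe."""

    def __init__(self):
        self._seen = {}

    def handle(self, idempotency_key, request, process):
        # A key we have already processed: replay the cached response
        # instead of applying the side effect a second time.
        if idempotency_key in self._seen:
            return self._seen[idempotency_key]
        response = process(request)
        self._seen[idempotency_key] = response
        return response
```

&lt;p&gt;The client generates the key (e.g. a UUID) once per logical operation and reuses it on every retry, so a retried payment charges the card only once.&lt;/p&gt;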

&lt;p&gt;&lt;strong&gt;Exponential Backoff Retry&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During a failure, a client can retry the request a few times until it gets a response back. Usually by the next retry, transient failures such as intermittent network issues are gone. But if the server is facing more serious issues leading to longer downtime, retrying requests continuously will worsen the problem. Hence, clients should follow an exponential backoff algorithm: wait for an initial interval after the first failure, then increase the wait time multiplicatively on each subsequent failure. Even with exponential backoff, multiple clients can end up retrying at around the same time, again adding load to the server. This can be avoided by adding a random jitter to each wait, which spaces out the requests reaching the server.&lt;/p&gt;
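&lt;p&gt;The backoff-with-jitter pattern can be sketched as a small client-side helper; the parameter values here are arbitrary defaults, and real clients usually also cap the maximum delay:&lt;/p&gt;

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Retry `operation`, doubling the wait each time, plus jitter.

    The random jitter spaces out clients that failed at the same
    moment, so they do not all retry in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            delay = base_delay * (2 ** attempt)
            time.sleep(delay + random.uniform(0, base_delay))
```

&lt;p&gt;With these defaults the waits grow roughly 0.1s, 0.2s, 0.4s, 0.8s, each nudged by up to 0.1s of jitter.&lt;/p&gt;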

&lt;p&gt;&lt;strong&gt;Rate Limiting APIs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During traffic spikes, there are times when API requests increase to the point of response timeouts or, even worse, service outages. You can certainly increase the capacity of your infrastructure to account for user growth, but beyond a certain limit it’s advisable to protect your APIs against unexpected traffic bursts. Deploying sensible rate limits on every user’s account can prevent such large-scale degradation by controlling the amount of traffic sent to your APIs (typically the number of requests per second). &lt;/p&gt;

&lt;p&gt;There are various types of rate limiters that can be used depending on the kind of traffic an API supports. One of the most common approaches is to rate limit users by request count: analyze the traffic patterns to your API before and during a traffic spike, and use that data to limit every user to a certain number of requests per second. &lt;/p&gt;
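&lt;p&gt;One common way to implement a per-user requests-per-second limit is a token bucket. This is a minimal sketch of the technique, not any particular library’s API; a real service would keep one bucket per user:&lt;/p&gt;

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request admitted
        return False      # request rejected (e.g. respond HTTP 429)
```

&lt;p&gt;The capacity absorbs short bursts while the refill rate enforces the sustained requests-per-second limit.&lt;/p&gt;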

&lt;p&gt;&lt;strong&gt;API Versioning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An API is a contract between its developers and the users that rely on it to fetch data. Hence, it becomes very important to make any new changes backward compatible. One way to achieve this is API versioning, which enables users to switch to a newer set of API changes at their own pace. Although versioning imposes a cost on developers, who must maintain the old versions as well as enhance the API with newer features, it’s one of the proven ways to let users upgrade to the latest version whenever they want. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pagination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes the response from the server can be huge, degrading the performance of the API through increased latency. To handle such responses gracefully, APIs should return batched responses, or in other words, paginate the response. &lt;/p&gt;

&lt;p&gt;Pagination can be achieved by using some sort of marker to indicate that there is another batch of results associated with a request. The response contains a batch of results and an identifier that can be used to fetch the next batch.&lt;/p&gt;
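&lt;p&gt;A cursor-based pagination sketch, using a plain list offset as the marker for simplicity; real APIs typically return an opaque, encoded cursor instead of a raw offset:&lt;/p&gt;

```python
def paginate(items, cursor=None, page_size=3):
    """Return one batch of results plus a marker for the next batch.

    `cursor` of None means the first page; a returned cursor of None
    means there are no more pages to fetch.
    """
    start = cursor or 0
    batch = items[start:start + page_size]
    next_cursor = start + page_size if len(items) > start + page_size else None
    return batch, next_cursor
```

&lt;p&gt;The client loops, passing each returned cursor back into the next request, until the cursor comes back as None.&lt;/p&gt;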

&lt;p&gt;&lt;strong&gt;Optimistic Concurrency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Users of an API can sometimes try to update the same resource simultaneously. Letting such concurrent transactions succeed without stepping over each other is called optimistic concurrency, and it can be implemented using a version number on the resource. &lt;/p&gt;

&lt;p&gt;For example, suppose two clients, A &amp;amp; B, simultaneously try to update a resource R. If A successfully updates R and writes it to the database, then B’s request should get a concurrency error, informing B that the version of R has changed and that B should first refresh its copy of R. B will then fetch the latest version of R and perform its update on that.&lt;/p&gt;
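&lt;p&gt;The version-number check described above can be sketched like this; the class and exception names are made up for illustration:&lt;/p&gt;

```python
class VersionConflict(Exception):
    """Raised when a client updates against a stale version."""

class Resource:
    """Optimistic concurrency via a version number on the resource."""

    def __init__(self, value):
        self.value = value
        self.version = 1

    def update(self, new_value, expected_version):
        # Reject the write if another client bumped the version first.
        if expected_version != self.version:
            raise VersionConflict("resource was modified; refetch and retry")
        self.value = new_value
        self.version += 1
        return self.version
```

&lt;p&gt;The losing client catches the conflict, re-reads the resource with its new version, and retries its update on the fresh copy.&lt;/p&gt;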

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Any API call uses HTTP (Hypertext Transfer Protocol) to transfer data over the network. HTTP traffic can be read by anyone, so it becomes very important to secure the communication. TLS (Transport Layer Security), formerly known as SSL (Secure Sockets Layer), is a way to secure these communications by encrypting the request/response. HTTPS is now the widely adopted way to communicate securely using HTTP over TLS. &lt;/p&gt;

&lt;p&gt;HTTPS should be used for any API requests, especially those that deal with sensitive data. Usually hosting providers can supply SSL certificates; otherwise, there are open certificate authorities that can be used for this. An SSL certificate helps clients validate your server’s identity for the requests they make.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are some of the most crucial patterns to keep in mind when building any API. Understanding these concepts thoroughly will save time debugging and can even prevent large-scale issues. Skipping them might work in the short run, but as usage of the API increases, these patterns will prove really useful for avoiding bottlenecks and surprise failures.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you like the post, share and subscribe to my newsletter to stay up to date with tech/product musings at &lt;a href="https://aastikta.substack.com/welcome"&gt;Aastikta's Newsletter&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>design</category>
      <category>computerscience</category>
      <category>wecoded</category>
    </item>
    <item>
      <title>How to build a resilient DNS service</title>
      <dc:creator>Aastikta Sharma</dc:creator>
      <pubDate>Sat, 08 Aug 2020 23:45:33 +0000</pubDate>
      <link>https://forem.com/aastikta/how-to-build-a-resilient-dns-service-ce9</link>
      <guid>https://forem.com/aastikta/how-to-build-a-resilient-dns-service-ce9</guid>
      <description>&lt;p&gt;DNS provides an easy, human readable way to map naming for any resources that are connected to internet. You can consider DNS as like a phonebook that stores IP addresses of various domains. In every communication on network, DNS plays a crucial role to lookup the destination IP address and hence, DNS service is one of the most important parts of any network communication.&lt;/p&gt;

&lt;p&gt;In the distributed-systems world, network issues regularly baffle us with unprecedented spikes in traffic or increased failure rates of dependent services. A DNS service, being a crucial part of any distributed system, should build a safety net around itself to avoid large-scale failures during lookups. Before understanding how to debug DNS issues, let’s learn a bit about various readily available tools that help with DNS resolution problems.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;tcpdump: It’s a command-line packet analyzer that is helpful for capturing DNS traffic on a particular server. This is very useful for analyzing what kind of traffic hits your servers during a certain period.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;dig: It’s another command-line tool for querying DNS servers directly and checking whether the queries made to your DNS server pass or fail.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;iptables: This routes network packets based on a set of rules. You can create custom rules to redirect network packets and, if needed, analyze them to better understand the traffic to your service.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
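&lt;p&gt;Alongside those tools, a quick resolution probe can be scripted with the standard library. This minimal sketch resolves a name via the system resolver and times the lookup; for deeper analysis (which server answered, which records), dig and tcpdump give far more detail:&lt;/p&gt;

```python
import socket
import time

def resolve_with_timing(hostname):
    """Resolve a hostname and report how long the lookup took."""
    start = time.monotonic()
    try:
        infos = socket.getaddrinfo(hostname, None)
        # Collect the distinct resolved addresses.
        addresses = sorted({info[4][0] for info in infos})
        error = None
    except socket.gaierror as exc:
        addresses, error = [], str(exc)
    elapsed_ms = (time.monotonic() - start) * 1000
    return addresses, elapsed_ms, error
```

&lt;p&gt;Running this periodically against your own zone is a cheap way to spot slow or failing lookups before users do.&lt;/p&gt;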

&lt;p&gt;Apart from widely popular tools like the ones above, there are a few others that can increase your chances of identifying issues with your traffic and make your DNS service more reliable. But before getting into those, let’s take a quick look at the various layers of hosts that a DNS service has, so that we can use different tools at each host.&lt;/p&gt;

&lt;p&gt;A general DNS service consists of three layers of hosts: a caching layer (used to resolve DNS queries recursively), an edge host layer (which runs a DNS authority daemon that responds to caching-layer queries and routes them to the corresponding zones), and an authority host layer (which serves as the DNS master and is used for CRUD operations on records). &lt;/p&gt;

&lt;p&gt;By moving to modern day infrastructure for every layer, you can increase the chances of making your DNS service more reliable in the face of any unexpected traffic.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Unbound: Proper logging is very important for debugging any issue, and Unbound provides a large set of statistics about DNS traffic. Unbound is a DNS resolver that also validates and caches responses. If a DNS infrastructure setup uses Unbound, gathering metrics about the traffic becomes a lot more convenient. Unbound can also list the requests your DNS server is receiving, letting you investigate the traffic pattern at any given time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NSD: The Name Server Daemon (NSD) is used for edge hosts and is well suited for top-level-domain implementations serving small to very large traffic volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;PowerDNS: PowerDNS provides a versatile nameserver called PowerDNS Authoritative Server (along with PowerDNS Recursor and dnsdist) that can be used for the authority host layer.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With the help of modern DNS infrastructure, the resiliency and availability of DNS services can be enhanced; in addition, these tools provide rich metrics at every layer to help debug DNS failures.&lt;/p&gt;

&lt;p&gt;For critical services like DNS, it’s important to make the shift toward modern systems. Any such shift is painful, but in the long run, successfully serving large traffic loads requires solid infrastructure and detailed metrics for understanding the health of the service. Hence, an effort should be made to move such services onto modern-day infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you like the post, share and subscribe to my newsletter on &lt;a href="https://aastikta.substack.com/welcome"&gt;substack&lt;/a&gt; to stay up to date with tech/product musings.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>career</category>
      <category>productivity</category>
      <category>wecoded</category>
    </item>
  </channel>
</rss>
