<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vladyslav Usenko</title>
    <description>The latest articles on Forem by Vladyslav Usenko (@vladusenko48).</description>
    <link>https://forem.com/vladusenko48</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F89981%2F82df1dc0-48f9-4065-ac97-52d82c29d606.jpg</url>
      <title>Forem: Vladyslav Usenko</title>
      <link>https://forem.com/vladusenko48</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vladusenko48"/>
    <language>en</language>
    <item>
      <title>Using Little’s Law to estimate IP capacity in VPC for AWS Lambda</title>
      <dc:creator>Vladyslav Usenko</dc:creator>
      <pubDate>Sat, 02 Mar 2019 13:01:01 +0000</pubDate>
      <link>https://forem.com/vladusenko48/using-littles-law-to-estimate-ip-capacity-in-vpc-for-aws-lambda-jpn</link>
      <guid>https://forem.com/vladusenko48/using-littles-law-to-estimate-ip-capacity-in-vpc-for-aws-lambda-jpn</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jVyp4tqc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AWJDpVCgj4hBfVXXB" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jVyp4tqc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AWJDpVCgj4hBfVXXB" alt="Network"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally posted &lt;a href="https://medium.com/@vu4848/using-littles-law-to-estimate-ip-capacity-in-vpc-for-aws-lambda-46711eb55fe3"&gt;here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It would be hard to find anybody in 2019 who hasn’t heard about serverless computing. AWS Lambda, GCF and Azure Functions — we all know what these things mean. Cost-saving infrastructure, infinite scalability, no operations, no sysadmins — how cool is that? We read plenty of case studies by large companies claiming that they reduced their cloud computing costs several times over simply by switching to AWS Lambda or another FaaS solution on the market. But, as we know, there are no silver bullets in the software engineering world — only trade-offs suitable for particular problems.&lt;/p&gt;

&lt;p&gt;Let’s consider AWS, for example. While we are truly abstracted away from any server administration with Lambda, we feel great. But once we realize that for some particular reason we need to put our function in a VPC (e.g. the lambdas need to talk to RDS, ElastiCache or simply anything without a public IP/DNS name), the first cold start shows us that we’re screwed. Yes, it is a problem to start a Lambda in a VPC. I won’t go far into details — you can read a lot about VPC cold start problems in the official AWS &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html"&gt;docs&lt;/a&gt;. Long story short, a function has to be associated with an ENI to be able to talk to instances in a VPC, and ENI allocation can take up to 10 seconds, which means that the cold start of any function within a VPC will most likely take more than 10 seconds! How cool is that? Not really.&lt;/p&gt;

&lt;p&gt;Anyhow, the VPC cold start is not the only problem we should think about. It affects the performance and latency of our apps, but there’s something more behind it. If we put our Lambdas in a VPC, we have to keep in mind the IP capacity of its subnets. Details can be found &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/vpc.html"&gt;here&lt;/a&gt;, but simply speaking, it is possible for a VPC to run out of free IP addresses when a very high number of concurrent requests comes in. For this particular reason, AWS provides us with this famous and simple formula to calculate ENI capacity:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ENI Capacity = Projected peak concurrent executions * (Lambda RAM in GB / 3GB)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Given a CIDR block (e.g. 10.0.0.0/24) and the RAM we allocate for a Lambda function, we can estimate how many concurrent requests the subnet will be able to handle, which is very useful when we design our subnets and distribute IP addresses.&lt;/p&gt;
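
&lt;p&gt;To make the formula concrete, here is a minimal sketch in Node.js (the helper names are mine, not an AWS API):&lt;/p&gt;

```javascript
// AWS's formula: ENI capacity = peak concurrent executions * (Lambda RAM in GB / 3 GB).
function eniCapacity(peakConcurrentExecutions, memoryGb) {
  return Math.ceil(peakConcurrentExecutions * (memoryGb / 3));
}

// Usable addresses in a subnet: 2^(32 - prefix length), minus the 5 AWS reserves.
function usableAddresses(prefixLength) {
  return Math.pow(2, 32 - prefixLength) - 5;
}

console.log(eniCapacity(1000, 0.125)); // 1000 peak executions at 128 MB -> 42 ENIs
console.log(usableAddresses(24));      // a /24 subnet -> 251 usable addresses
```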

&lt;p&gt;However, I tried to go one step further and make use of Little’s law, which is widely applied to distributed systems. The law gives us the following formula:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OB95uYwB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2AenQvkMWpUl1Gpil_j0wpyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OB95uYwB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2AenQvkMWpUl1Gpil_j0wpyw.png" alt="Little's Law"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we take the lambda symbol as the number of requests per second and W as the average time each request takes to complete, then L stands for the mean number of concurrent requests in the system at any moment in time. Let’s take a shop as an example. Say people arrive at the shop at a rate of 10 per hour and spend half an hour there. Then the average number of people in the shop at any moment will be 10 x 0.5 = 5. Very simple.&lt;/p&gt;
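
&lt;p&gt;The shop example, spelled out in Node.js (the function name is mine):&lt;/p&gt;

```javascript
// Little's law: L = lambda * W, where lambda is the arrival rate and
// W is the mean time an item spends in the system.
function littlesLaw(arrivalRate, meanTimeInSystem) {
  return arrivalRate * meanTimeInSystem;
}

// 10 arrivals per hour, each staying 0.5 hours:
console.log(littlesLaw(10, 0.5)); // 5 people in the shop on average
```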

&lt;p&gt;Little’s law is widely used to estimate the throughput of distributed systems. However, it gives us a mean value, which is not actually suitable for precise estimation, because it doesn’t account for real-world side effects. A system’s latency (W) tends to change over time due to resource allocations, network problems or simply anything else. Marc Brooker (an engineer on the AWS Lambda team) has a nice &lt;a href="http://brooker.co.za/blog/2018/06/20/littles-law.html"&gt;blog post&lt;/a&gt; about it, where he describes how we can bring Little’s law closer to the real world.&lt;/p&gt;

&lt;p&gt;However, even though the law doesn’t give us a precise estimation, we can still make use of it in some cases. For example, we can easily calculate the number of IP addresses a VPC needs in order to handle a required requests-per-second rate.&lt;/p&gt;

&lt;p&gt;Let’s say there is a requirement for a system built on top of Lambda in a VPC to handle 1000 requests per second. We need to make sure that we have enough IP addresses in our subnets to fulfill that requirement. So let’s do some simple math!&lt;/p&gt;

&lt;p&gt;First of all, we need to find out how much time the code in the lambda takes to run and how much RAM it consumes, so that we can set a proper amount of memory for our function. This is easy to do with CloudWatch logs, where AWS gives us the duration in milliseconds and the used memory in megabytes. Say we found out that the average duration and memory consumption are 500ms and 128 MB respectively. Keeping the VPC cold start in mind, we add 10 seconds to the duration and the result is W in Little’s law. Now we can calculate L:&lt;/p&gt;

&lt;p&gt;L = 1000 (req/s) * (10 + 0.5) (s) &lt;br&gt;
L = 10500 (req)&lt;/p&gt;

&lt;p&gt;This number shows us that during a VPC cold start we’re going to have around 10500 concurrent lambda functions running. Now we need to calculate the necessary number of IP addresses. In the formula AWS provided for ENI capacity, we can substitute L for the projected peak concurrent executions and 128 MB (0.125 GB) for the Lambda RAM:&lt;/p&gt;

&lt;p&gt;ENI = 10500 * (0.125 / 3)&lt;br&gt;
ENI = 437.5&lt;/p&gt;

&lt;p&gt;As we can see, 438 IP addresses will be enough to make sure that we’re good. The smallest CIDR block to fulfill this requirement is 10.0.0.0/23, which gives us 512 addresses (507 actually, since AWS claims to reserve 5 addresses per subnet for internal needs). 10.0.0.0/24 would give us 256 (251), which is less than we need.&lt;/p&gt;
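
&lt;p&gt;The whole estimate can be scripted in Node.js (the helper names are mine; note that 10500 * (0.125 / 3) works out to 437.5, rounded up to 438):&lt;/p&gt;

```javascript
// Required concurrency via Little's law, then ENI demand via AWS's formula.
function requiredEnis(requestsPerSecond, durationSeconds, coldStartSeconds, memoryGb) {
  const concurrent = requestsPerSecond * (durationSeconds + coldStartSeconds); // L = lambda * W
  return Math.ceil(concurrent * (memoryGb / 3)); // round up: 437.5 -> 438
}

// Smallest subnet prefix whose usable addresses (2^(32 - prefix) minus the
// 5 AWS reserves) cover the ENI demand.
function smallestPrefixFor(addresses) {
  for (let prefix = 28; prefix >= 16; prefix--) {
    if (Math.pow(2, 32 - prefix) - 5 >= addresses) return prefix;
  }
  return 16;
}

const enis = requiredEnis(1000, 0.5, 10, 0.125); // 1000 req/s, 500ms, 10s cold start, 128 MB
console.log(enis);                    // 438
console.log(smallestPrefixFor(enis)); // 23, i.e. a /23 block
```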

&lt;p&gt;Still, it’s not quite precise. In most cases Lambdas are in a VPC because they need to connect to, let’s say, RDS. This means that they are going to open database connections, which will surely degrade database performance and, in turn, be reflected in W in Little’s law. You may find the aforementioned Marc Brooker blog post very useful, as he goes beyond raw Little’s law there.&lt;/p&gt;
&lt;h2&gt;
  
  
  Important note
&lt;/h2&gt;

&lt;p&gt;Of course I have to say that during re:Invent 2018 AWS announced a solution for VPC cold starts with some fancy remote NATs. Most likely in the near future we won’t need to calculate IP addresses or suffer these 10-second delays anymore, which is awesome.&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;
      &lt;div class="ltag__twitter-tweet__media"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rD-8zlWV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/DtNEPvXV4AAjXCk.jpg" alt="unknown tweet media content"&gt;
      &lt;/div&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--WdGZLqXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/979357781236076544/mwy1NvSV_normal.jpg" alt="Jeremy Daly @ re:Invent profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Jeremy Daly @ re:Invent
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="comment-mentioned-user" href="https://dev.to/jeremy_daly"&gt;@jeremy_daly&lt;/a&gt;

      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      Looks like &lt;a href="https://twitter.com/awscloud"&gt;@awscloud&lt;/a&gt; has some ideas to fix &lt;a href="https://twitter.com/hashtag/Lambda"&gt;#Lambda&lt;/a&gt; cold starts in a &lt;a href="https://twitter.com/hashtag/VPC"&gt;#VPC&lt;/a&gt;. 🙌 Coming in 2019! &lt;a href="https://twitter.com/hashtag/serverless"&gt;#serverless&lt;/a&gt; &lt;a href="https://twitter.com/hashtag/reInvent"&gt;#reInvent&lt;/a&gt; 
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      22:36 - 29 Nov 2018
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1068272580556087296" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1068272580556087296" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      60
      &lt;a href="https://twitter.com/intent/like?tweet_id=1068272580556087296" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      146
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;But I just found it interesting enough to write about. Thanks for your attention!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>vpc</category>
    </item>
    <item>
      <title>Serverless pain</title>
      <dc:creator>Vladyslav Usenko</dc:creator>
      <pubDate>Tue, 07 Aug 2018 16:38:47 +0000</pubDate>
      <link>https://forem.com/vladusenko48/serverless-pain-1j67</link>
      <guid>https://forem.com/vladusenko48/serverless-pain-1j67</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fnwxhgsirvw31elanugpb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fnwxhgsirvw31elanugpb.png" alt="Lambda"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Originally posted &lt;a href="https://medium.com/@vu4848/serverless-pain-ab5547d6b122" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A couple of words about me and the project. I’m a software developer from Ukraine, mainly working on JavaScript-based projects, but open to any other technology. For the last 4 months I’ve been working on my own on a serverless project, using only AWS solutions. The goal was to develop an application with API Gateway, DynamoDB and Lambdas. The business needed an MVP, which every developer can interpret as “we need to have a good solution even sooner than usual”. Okay.&lt;/p&gt;

&lt;p&gt;The stack was chosen before I joined the project. From what I understood, the technical person on the customer’s side decided that using the technologies listed above would speed up the development process a lot. Everyone has heard about serverless before. Titles like “tired of infrastructure management? Join serverless!” or “forget about scaling — put all your efforts into the code!” pop up here and there on Medium and Twitter. Some people even think that serverless is the future of backend development and that classic servers are literally dying. Some are right, some are wrong, but over these four months I accumulated a few thoughts about the practicality of the main buzzword of recent years. I really want to share them, so that you will have more to take into consideration before you decide whether to join serverless or not.&lt;/p&gt;

&lt;p&gt;So let’s begin…&lt;/p&gt;

&lt;h2&gt;
  
  
  API Gateway, Lambda, DynamoDB, what’s that?
&lt;/h2&gt;

&lt;p&gt;I won’t spend much time with explanation of AWS products, but I’ll leave a couple words.&lt;/p&gt;

&lt;p&gt;API Gateway can be treated as a router for your application. It’s as simple as that — you create an endpoint (they call it a resource), associate an HTTP verb with it either via the CLI or the Console UI, and attach an AWS service, a Lambda, a static response built with VTL, or a proxy destination to it. In my case I only attached Lambdas or services like DynamoDB; VTL lets you transform an HTTP request body into a request readable by any AWS service. The AWS team did a great job here 👏.&lt;/p&gt;

&lt;p&gt;Lambda is a tiny container (you can treat it as a tiny server) in which you put your code. With Node.js-based lambdas you put there a file that exports a function accepting the arguments specified in the AWS docs. You can read more about that in the AWS documentation. Lambda is the place where your business logic lives.&lt;/p&gt;

&lt;p&gt;DynamoDB is an infinitely scalable NoSQL database. At least that’s what they say. It is true that you’ll work with JSON, and it is true that it scales very well and on its own. Infinite scalability is awesome, but it comes with some significant downsides too. Everything has a price. &lt;strong&gt;I want to say that you really NEED to read the docs and understand how DynamoDB works BEFORE you start designing your schemas.&lt;/strong&gt; DynamoDB’s pros and cons are actually a great topic for another article.&lt;/p&gt;

&lt;p&gt;This is definitely not everything about these solutions, but I tried to cover the parts most essential for a developer.&lt;/p&gt;

&lt;p&gt;What do they always say? “Forget about scaling and infrastructure, think about the code”, right? &lt;strong&gt;I want to ask you to keep that in mind for the next paragraphs.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Business logic problem
&lt;/h2&gt;

&lt;p&gt;The backend was not that big. On my API Gateway setup I had approximately 50 unique endpoints, which meant that I had ~40 unique lambdas. Some of the logic was delegated to the VTL + DynamoDB combination. Some functions were used for several endpoints, but with if/else or switch/case statements inside.&lt;/p&gt;

&lt;p&gt;Sounds OK, but problems began to arise with the very thing that serverless, as they say, lets you concentrate on — the code you write. The first downside of this function-per-endpoint approach came up very soon. Lambdas are tiny containers, as I mentioned before, so you can treat them as separate servers. Separate servers do not share random access memory. Because of that, you cannot build any service-oriented architecture… easily.&lt;/p&gt;

&lt;p&gt;Let’s say you want a service class that contains some important and very frequently used business logic, so that you follow the DRY principle. On a classic Node server you’d build a class or just a function and import/require it everywhere you need it. This is great, because if the logic changes, you don’t need to update it in every place it is used. Now imagine you want to use the same logic in two, four, ten lambdas. Since these functions are actually separate servers, the only way to achieve the goal is… to repeat yourself. Exactly, just copy and paste the code — no other way.&lt;/p&gt;

&lt;p&gt;So what to do? One option is to build a bash script that copies anything shared into each lambda that uses it. This is a pretty good option and it will work for you, but everything has its price. Now you have to be 100% sure that the script is fine, that it’s not broken, and that each lambda receives the most recent version of whatever you want to share. Such a script can’t be trivial, because it’s something you want to rely on, so you’ll have to spend a significant amount of time on it. I just didn’t have that time. And it’s not always possible to explain to your customer that you spent a day or two and nothing new was built from a business point of view.&lt;/p&gt;

&lt;p&gt;There’s another option, which I actually like, but it’s not always possible either. You can set up a private npm registry and require the shared code from there. But for this you have to pay npm and, again, the business is not always happy (especially in startups) to pay for things that could be avoided or built by you.&lt;/p&gt;

&lt;p&gt;Long story short, shared business logic is something that we rely on and use every day in our job, and unfortunately it becomes a problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But hey, you have infinite scalability!&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Simultaneous deployment problem
&lt;/h2&gt;

&lt;p&gt;Imagine that you have 4 lambda functions attached to 4 different endpoints, and they share the piece of business logic I mentioned above. Let’s say this piece of business logic needs to be changed, and since there are 4 lambda functions using it, there are 4 lambda functions to redeploy. Obviously, they can’t all be deployed at exactly the same time. This means that in a production environment you can end up with some lambdas that use out-of-date logic, which can be really dangerous — you can get corrupted data in your database and so on.&lt;/p&gt;

&lt;p&gt;This problem is definitely not something new to the world of software development, but still, even with the simplest CRUD backend, built purely with API Gateway and Lambdas, you will definitely end up in need for some solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But hey, you still have infinite scalability!&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The cold start problem
&lt;/h2&gt;

&lt;p&gt;With lambdas you pay only for the working time of the function. When the function is idle — you do not pay for it. That’s cool.&lt;/p&gt;

&lt;p&gt;Lambdas are containers. Containers require some time to bootstrap and become available, and unfortunately it’s not fast at all — from my experience it’s at least a couple of seconds. So what does that mean? Let’s say your lambda function’s average working time is 500ms. If your function was idle for some time, its container will be shut down by AWS. The next time your function is invoked, AWS will spin the container up again, and that adds a couple of seconds to the response time of your endpoint. Your response time can literally jump from 300ms to 10 seconds sometimes! Not that user friendly, right?&lt;/p&gt;

&lt;p&gt;How do people solve this cold start problem? They keep lambdas warm! So they have a cron job somewhere that triggers their lambdas from time to time, to prevent AWS from shutting down the container.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1023994078873047040-761" src="https://platform.twitter.com/embed/Tweet.html?id=1023994078873047040"&gt;
&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;Hmm, doesn’t it sound like you actually end up thinking about the underlying infrastructure?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But hey, you still have …&lt;/strong&gt; Ok, you get that.&lt;/p&gt;

&lt;p&gt;So, having all these problems in mind, &lt;strong&gt;would you still choose infinite scalability&lt;/strong&gt;? Well, there’s no right answer to this question, because it depends only on your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other stuff
&lt;/h2&gt;

&lt;p&gt;Now these are not problems, but just some stuff that I have to mention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can’t do WebSockets with lambdas&lt;/strong&gt;, because they have an execution timeout and the protocol requires a persistent connection.&lt;/p&gt;

&lt;p&gt;One of the components of the application I built required real-time messaging. For this I set up an ECS cluster and deployed Docker containers there. It worked just fine.&lt;/p&gt;

&lt;p&gt;Also, it was not obvious from the very beginning how to achieve &lt;strong&gt;staging and versioning&lt;/strong&gt; with a serverless backend. How do you distinguish between prod, dev and test functions? Luckily, API Gateway has the concept of stages, which I made great use of. Lambdas also have a thing called aliases, where you basically publish a snapshot of the function (a version) and give it a name. I had to build a couple of bash scripts to automate things, and it also worked fine for me. Staging of lambdas might be a pretty good topic for another article too.&lt;/p&gt;

&lt;h2&gt;
  
  
  To sum up…
&lt;/h2&gt;

&lt;p&gt;Serverless may sound like a silver bullet for all your scalability problems. Like you literally just write code and that’s all you’ll ever do! No, that’s not true at all. There’s still infrastructure, and problems to handle and take care of — they’re just different ones.&lt;/p&gt;

&lt;p&gt;In my opinion, the next time I have to build a serverless backend from scratch, I’ll think twice and also take into consideration products like &lt;a href="https://github.com/apex/up" rel="noopener noreferrer"&gt;&lt;strong&gt;Up&lt;/strong&gt;&lt;/a&gt; made by TJ Holowaychuk or the &lt;a href="https://serverless.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Serverless&lt;/strong&gt;&lt;/a&gt; framework. They make great use of API Gateway and Lambda functions — I would even call them &lt;strong&gt;the real game changers&lt;/strong&gt;. The code you write using them actually looks like a classic monolithic backend, but at a lower cost, because it runs on demand.&lt;/p&gt;

&lt;p&gt;I’m not saying serverless is bad. The goal of this article was to just give you some more stuff to marinade in your minds before you join serverless too.&lt;/p&gt;

&lt;p&gt;Because in theory, theory and practice are the same, but there’s no magic in this world, everything has its price.&lt;/p&gt;

&lt;p&gt;P.S. Also about the price and cost you can read &lt;a href="https://medium.com/@amiram_26122/the-hidden-costs-of-serverless-6ced7844780b" rel="noopener noreferrer"&gt;this great article&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;P.P.S. If you like this post, please give it a couple of claps on &lt;a href="https://medium.com/@vu4848/serverless-pain-ab5547d6b122" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, I would appreciate that a lot &amp;lt;3&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>apigateway</category>
      <category>lambda</category>
      <category>backend</category>
    </item>
  </channel>
</rss>
