<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Roman Voloboev</title>
    <description>The latest articles on Forem by Roman Voloboev (@animir).</description>
    <link>https://forem.com/animir</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F128076%2F4fd066f3-ce72-4570-bbd7-d08e3825c262.jpeg</url>
      <title>Forem: Roman Voloboev</title>
      <link>https://forem.com/animir</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/animir"/>
    <language>en</language>
    <item>
      <title>Biased: Fixed Window rate limiting algorithm explained</title>
      <dc:creator>Roman Voloboev</dc:creator>
      <pubDate>Sun, 05 Apr 2026 15:46:14 +0000</pubDate>
      <link>https://forem.com/animir/biased-fixed-window-rate-limiting-algorithm-explained-1d91</link>
      <guid>https://forem.com/animir/biased-fixed-window-rate-limiting-algorithm-explained-1d91</guid>
      <description>&lt;p&gt;Fixed Window rate limiting algorithm enforces a limit on the number of events allowed within a time window. "Maximum 5 password attempts in 10 minutes" is a classic rate limiting example.&lt;/p&gt;

&lt;p&gt;A common misconception about Fixed Window is that people describe it as "allows N requests per fixed calendar window" and then warn about the "burst at boundary" problem.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.npmjs.com/package/rate-limiter-flexible" rel="noopener noreferrer"&gt;rate-limiter-flexible&lt;/a&gt; Node.js package implements a Flexible Fixed Window algorithm. To be honest, I shouldn't call it "flexible," but I don't have much choice. Somehow the developer community has managed to portray the fixed window algorithm as if it were anchored to specific calendar time, as if it starts at 12:00 PM and ends at 12:10 PM. Not necessarily. For every unique client, IP address, or fingerprint, the window start time varies. It begins when the first request arrives, and the counter expires after a specified duration, e.g. 10 minutes. The window start is always variable.&lt;/p&gt;
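&lt;p&gt;The variable window start can be sketched with a minimal in-memory counter. This is an illustration of the idea only, not the package's actual implementation; all names here are made up.&lt;/p&gt;

```javascript
// Minimal per-key fixed window limiter. Each key's window starts at its
// first request and expires after durationMs; there is no calendar anchor.
class FixedWindowLimiter {
  constructor(points, durationMs) {
    this.points = points;         // max events per window
    this.durationMs = durationMs; // window length
    this.windows = new Map();     // key -> { count, expiresAt }
  }
  consume(key, now = Date.now()) {
    let w = this.windows.get(key);
    if (!w || now >= w.expiresAt) {
      // First request for this key, or the old counter expired:
      // a fresh window starts right now.
      w = { count: 0, expiresAt: now + this.durationMs };
      this.windows.set(key, w);
    }
    if (w.count >= this.points) return false; // limit reached in this window
    w.count += 1;
    return true;
  }
}

// "Maximum 5 password attempts in 10 minutes", per client key:
const limiter = new FixedWindowLimiter(5, 10 * 60 * 1000);
```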

&lt;p&gt;Many articles describe the Fixed Window Algorithm incorrectly. That's why I deliberately call it "flexible."&lt;/p&gt;

&lt;p&gt;The false belief that gets copied over and over, from one article to another: &lt;em&gt;it leads to bursts at the boundary of windows&lt;/em&gt;. Let's take a closer look.&lt;/p&gt;

&lt;p&gt;If every unique client has its own window start time, then boundaries don't matter. In many services, traffic spikes are expected and should be allowed. Here's the deeper explanation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When you use rate-limited services, like AI agents, do you care if you spend your daily allowed tokens at the end of the day and then continue working after midnight, consuming tokens from the next day? Of course you do. It's natural to expect that once a limiter resets, tokens become available. Nobody calls this a boundary problem. On the contrary, you'd be frustrated if that spike on the window boundary weren't allowed.&lt;/li&gt;
&lt;li&gt;Creating a boundary spike is less probable statistically than it seems. To pull it off, a client would have to try 1 password, wait 9 minutes and 59 seconds, try another 4, and then immediately try 5 more. The probability of that event is quite low. That's the magic of a flexible window start.&lt;/li&gt;
&lt;li&gt;Statistically, different clients send requests at different times. They are not in perfect sync. Even if one client manages to create a spike on a window boundary, it isn't an issue: other clients follow different patterns, causing overall requests to scatter across time.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y71o6y4bs9640tzdrhd.png" alt="Fixed time frame starts at different times for users A, B, and C." width="800" height="267"&gt;
&lt;/li&gt;
&lt;li&gt;Sure, a malicious user could control 100 accounts and coordinate attacks on boundaries. But does Token Bucket protect against that? No. The same attacker can simply wait for the bucket to refill and then unleash a burst. Speaking of bursts, before Token Bucket was introduced in the 1980s, Leaky Bucket was the primary pattern for limiting traffic. Its problem was that it didn't allow traffic bursts at all. Token Bucket does. And that's never mentioned as an issue. It's a feature.&lt;/li&gt;
&lt;li&gt;There is one case you should be careful of. If you're limiting requests because of infrastructure constraints and traffic spikes could degrade performance, create two limiters: one for unique clients and one for total traffic per second. This approach keeps your application running under pressure, with some users' experience degraded rather than everyone's. You don't have many options here. You either make the user experience worse by disallowing traffic spikes entirely, or you mitigate the consequences of allowing them. No rate limiting algorithm can win the fight between your infrastructure limitations — limited budget, in fact — and an overwhelming volume of malicious requests.
To manage spikes even more effectively, take a look at &lt;a href="https://github.com/animir/node-rate-limiter-flexible/wiki/BurstyRateLimiter" rel="noopener noreferrer"&gt;BurstyRateLimiter&lt;/a&gt;. It allows spontaneous traffic bursts with finer control.&lt;/li&gt;
&lt;/ol&gt;
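&lt;p&gt;The two-limiter setup from the last point can be sketched like this. It is only an illustration of the idea; the names are invented, and it is not the rate-limiter-flexible API.&lt;/p&gt;

```javascript
// Two fixed window counters: a per-client limit plus a total-traffic ceiling.
function makeWindow(points, durationMs) {
  const state = new Map(); // key -> { count, expiresAt }
  return (key, now = Date.now()) => {
    let w = state.get(key);
    if (!w || now >= w.expiresAt) {
      w = { count: 0, expiresAt: now + durationMs };
      state.set(key, w);
    }
    if (w.count >= points) return false;
    w.count += 1;
    return true;
  };
}

const perClient = makeWindow(5, 10_000);       // 5 requests per 10 s per client
const totalPerSecond = makeWindow(100, 1_000); // 100 requests per second overall

function allow(clientKey, now = Date.now()) {
  // Check the client quota first so rejected clients don't consume
  // the shared budget; then enforce the infrastructure-wide ceiling.
  if (!perClient(clientKey, now)) return false;
  return totalPerSecond('total', now);
}
```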

&lt;p&gt;After years of studying and applying different rate limiting algorithms, I've found that any algorithm can be adapted for specific needs. What I value about the Flexible Fixed Window Algorithm is that it provides clear control over application behavior and the ability to build custom solutions across multiple dimensions of traffic using two or more combined limiters. And it is always predictable in terms of performance.&lt;/p&gt;

&lt;p&gt;Never forget to question the basics. Take control over the information you consume.&lt;br&gt;
Happy coding!&lt;/p&gt;

</description>
      <category>ratelimiting</category>
      <category>webdev</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Perspective: bridging digital and physical realms with atomic counters</title>
      <dc:creator>Roman Voloboev</dc:creator>
      <pubDate>Wed, 29 Oct 2025 09:58:49 +0000</pubDate>
      <link>https://forem.com/animir/bridging-digital-and-physical-realms-with-atomic-counters-1504</link>
      <guid>https://forem.com/animir/bridging-digital-and-physical-realms-with-atomic-counters-1504</guid>
      <description>&lt;h2&gt;
  
  
  Numbers
&lt;/h2&gt;

&lt;p&gt;Numbers have no physical existence. They are abstract. Humanity created numbers to manage the present. It was the first step towards virtuality. We needed to count animals in the herd, days, harvest, and people in the tribe. In time we realized that with numbers we can predict the future.&lt;/p&gt;

&lt;p&gt;By observing the motion of the planets, astronomers have calculated their orbits and can accurately predict eclipses centuries in advance. Meteorologists analyze temperature, pressure, and humidity to predict the weather. Insurance companies calculate accident risks, doctors evaluate the effectiveness of treatments, and economists forecast market growth. The more data, the more accurate the forecast. But everything starts with measuring and counting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Counting in everyday life
&lt;/h2&gt;

&lt;p&gt;You wake up in the morning an hour before work starts. One hour is enough for a calm morning routine: fry eggs and have breakfast. You go to the bathroom and squeeze toothpaste onto your brush. While you are brushing your teeth, you roughly estimate how much toothpaste is left: not much. After the bathroom you go to the kitchen. On the way you open the notes app on your smartphone and add the twentieth item to the shopping list. The list is getting full. More importantly, there are only six eggs left in the fridge, enough for three breakfasts. You plan to shop online this evening. Today is Thursday, and the goods will be delivered on Saturday, in two days. Perfect timing.&lt;/p&gt;

&lt;p&gt;I've just described five minutes of human life. It required measuring and counting six different things. And there were no issues. We are very good at counting. Numbers help us manage everyday life and make it more predictable and comfortable, the same as thousands of years ago.&lt;/p&gt;

&lt;p&gt;Problems begin in the evening when we go to shop online.&lt;/p&gt;

&lt;h2&gt;
  
  
  Counting online
&lt;/h2&gt;

&lt;p&gt;The computer era and then the global network created a need to operate with very big numbers. Numbers still serve us and still make our life more predictable and comfortable, but we don't know them anymore. Computers do calculations for us. We are not able to count at the speed of billions of operations per second. It is beyond our natural abilities. We can't even imagine that process. It is too fast.&lt;/p&gt;

&lt;p&gt;We've moved many everyday activities to online web services, messengers, online shops, and video calls. We get payments in pure numbers that arrive as another record in some database and are displayed in a mobile banking app. We pay numbers online to buy things. In other words, we partly live in virtuality.&lt;/p&gt;

&lt;p&gt;The impression that the global network is huge is not wrong. It is huge. However, one online shop may be served by one real server in a data center. It is a relatively small box, smaller than a single shelf in an offline supermarket. In this box there are virtual parts of us, our motives and activities, and data from thousands of customers. Small box full of microchips and advanced electrical components. No eggs inside.&lt;/p&gt;

&lt;p&gt;In the physical world, there is no way thousands of customers could go near a single shelf. In virtuality, customers are put inside the sophisticated machine. It is a highly concurrent space. Activities of thousands of customers are handled inside. And this is a problem because at the end of the day we eat not numbers but eggs. Every number should represent a real pack of eggs waiting for a customer somewhere in a warehouse. A number can be copied in the computer memory, but the last pack of eggs can't be copied in a warehouse. Only one customer must be able to get the last pack.&lt;/p&gt;

&lt;p&gt;When we go to an offline shop to buy eggs the process is straightforward. There are other customers in the supermarket. They also buy eggs. If you take the last pack of eggs from the shelf and put it in your basket, nobody else will be able to take it from the shelf anymore. The laws of reality naturally handle it.&lt;/p&gt;

&lt;p&gt;Things change online. When thousands of customers shop online there is a big chance that two or three customers could take the same last pack of eggs. Poorly designed virtual shelves allow that mistake. Obviously, in the end only one customer would get the pack of eggs. Others would be frustrated by not receiving eggs during the next delivery.&lt;/p&gt;

&lt;p&gt;This is an example of when counting must be implemented correctly to avoid race conditions. We need a bridge between virtuality and reality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adapter for virtuality
&lt;/h2&gt;

&lt;p&gt;Virtuality is a model of reality. Virtuality consists of a huge amount of numbers. We change numbers, and the laws of the model change. In virtuality, we can create as many eggs as we want and store them as a very, very big number that could be larger than the number of atoms in the observable universe.&lt;/p&gt;

&lt;p&gt;Our virtuality is very convenient because it is fast, flexible and accessible just with a mobile app or browser. The problem begins when virtuality should be connected back to reality. We can't eat numbers, can we? We eat eggs that are limited in the real world.&lt;/p&gt;

&lt;p&gt;This is where we should create a way to connect the two worlds: the concurrent and incredibly fast digital realm and limited reality. The solution lies in making our virtual counters work like reality does: one item, one customer, one transaction. We need a proper adapter. Let's start with a broken one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-atomic counters
&lt;/h2&gt;

&lt;p&gt;The number of egg packs in a warehouse is stored in computer memory. When two customers want to put the last pack of eggs in their baskets, the computer should verify that the number is not zero, that there is at least one pack of eggs in the warehouse. Online shops can't sell eggs that don't physically exist.&lt;/p&gt;

&lt;p&gt;How do we verify that eggs exist? Retrieve the number of packs from computer memory, subtract one, which represents a pack of eggs, and set the result back to computer memory. Easy, right? Not quite. Note that there is a time gap between getting the number and setting the changed number back. This is a potential issue.&lt;/p&gt;

&lt;p&gt;There is a chance that another customer will get the same number from computer memory before the first customer's updated number is saved. This causes a race condition. Both customers get a pack of virtual eggs represented as numbers, but in the real warehouse there is just one last pack. Unfortunately, one customer will not get eggs. Not a happy ending.&lt;/p&gt;
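&lt;p&gt;The broken read-modify-write can be shown in a few lines of Node.js. This is a deliberately simplified model: the async pause stands in for database or network latency.&lt;/p&gt;

```javascript
// One pack of eggs left in "memory".
let stock = 1;

// Non-atomic purchase: read, pause, write. During the pause another
// buyer can read the same value.
async function buyNonAtomic() {
  const current = stock;                   // both buyers may read 1 here
  if (current === 0) return false;         // looks in stock to both of them
  await new Promise(r => setImmediate(r)); // the time gap (db/network latency)
  stock = current - 1;                     // in the race, both write 0
  return true;                             // both believe they got the pack
}

// Two customers click "Add to cart" at the same moment.
const raceDemo = Promise.all([buyNonAtomic(), buyNonAtomic()]);
```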

&lt;h2&gt;
  
  
  Atomic counters
&lt;/h2&gt;

&lt;p&gt;There is a simple fix that keeps all customers happy. While the first customer buys eggs, no other customers should be able to change the number in computer memory. The process should be non-divisible - atomic. In reality, one pack of eggs can't be placed in two different baskets at the same time. Virtuality should mirror this law.&lt;/p&gt;

&lt;p&gt;When three customers simultaneously click the "Add to cart" button, the first customer should obtain an exclusive lock on the computer memory that stores the number of eggs, subtract one, set the changed value back to memory, and only after that release the lock. The other two customers must wait.&lt;/p&gt;

&lt;p&gt;In essence, it creates a queue of customers waiting their turn to get exclusive access to the virtual shelf with eggs. This is how atomic counters properly bridge the gap between the digital and physical realms.&lt;/p&gt;
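&lt;p&gt;One minimal way to make the purchase atomic is to serialize buyers through a lock, so the read and the write can never interleave. This is a sketch of the idea only; real systems use database transactions or atomic operations in a store like Redis.&lt;/p&gt;

```javascript
// A tiny async mutex: each caller runs only after the previous one finished.
let tail = Promise.resolve();
function withLock(fn) {
  const run = tail.then(fn, fn);
  tail = run.then(() => {}, () => {}); // keep the chain alive on errors
  return run;
}

let stock = 1;

async function buyAtomic() {
  return withLock(async () => {
    const current = stock;                   // read happens under the lock
    if (current === 0) return false;         // sold out
    await new Promise(r => setImmediate(r)); // same time gap as before
    stock = current - 1;                     // write completes before the
    return true;                             // next buyer is allowed to read
  });
}

// Three customers click "Add to cart" simultaneously; only one succeeds.
const atomicDemo = Promise.all([buyAtomic(), buyAtomic(), buyAtomic()]);
```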

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>learning</category>
      <category>perspective</category>
    </item>
    <item>
      <title>Token Bucket vs Bursty Rate Limiter</title>
      <dc:creator>Roman Voloboev</dc:creator>
      <pubDate>Sun, 12 Apr 2020 12:54:59 +0000</pubDate>
      <link>https://forem.com/animir/token-bucket-vs-bursty-rate-limiter-a5c</link>
      <guid>https://forem.com/animir/token-bucket-vs-bursty-rate-limiter-a5c</guid>
      <description>&lt;p&gt;This post is created under the impression that there is a wrong opinion on fixed window rate limiting approach.&lt;/p&gt;

&lt;p&gt;As the author of the &lt;a href="https://www.npmjs.com/package/rate-limiter-flexible" rel="noopener noreferrer"&gt;rate-limiter-flexible&lt;/a&gt; Node.js package, I have gained some experience using different rate limiter approaches in different environments.&lt;/p&gt;

&lt;p&gt;An extended version of this post, with a description of the Token Bucket and Fixed Window rate limiting approaches, was originally posted on Medium. So if you want more details, read the extended version.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="https://medium.com/@animirr/fixed-window-rate-limiter-is-slightly-better-than-token-bucket-here-is-why-bc769c0bdd9" class="ltag__link__link" rel="noopener noreferrer"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afill%3A88%3A88%2F0%2AQED9Mzc8QAKbJ7cZ." alt="Roman Voloboev"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://medium.com/@animirr/fixed-window-rate-limiter-is-slightly-better-than-token-bucket-here-is-why-bc769c0bdd9" class="ltag__link__link" rel="noopener noreferrer"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Fixed Window rate limiter is slightly better than Token Bucket. Here is why. | by Roman Voloboev | Medium&lt;/h2&gt;
      &lt;h3&gt;Roman Voloboev ・ &lt;time&gt;Mar 7, 2024&lt;/time&gt; ・ 
      &lt;div class="ltag__link__servicename"&gt;
        &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fmedium-f709f79cf29704f9f4c2a83f950b2964e95007a3e311b77f686915c71574fef2.svg" alt="Medium Logo"&gt;
        Medium
      &lt;/div&gt;
    &lt;/h3&gt;
&lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;TL;DR: A Fixed Window rate limiter with a traffic burst allowance can easily replace Token Bucket and even bring a performance boost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main difference
&lt;/h2&gt;

&lt;p&gt;Token Bucket allows traffic bursts by nature. If tokens are not taken for some period of time, the token bucket is refilled partly or completely. The more tokens in the bucket, the higher the traffic burst allowed.&lt;/p&gt;
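&lt;p&gt;The refill logic looks roughly like this. A minimal sketch, not any particular package's implementation; the names are illustrative.&lt;/p&gt;

```javascript
// Token bucket: up to `capacity` tokens, refilled at `refillPerSec`.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.tokens = capacity; // starts full
    this.lastSec = 0;
  }
  consume(nowSec) {
    // Refill proportionally to the time passed, never above capacity.
    const refill = (nowSec - this.lastSec) * this.refillPerSec;
    this.tokens = Math.min(this.capacity, this.tokens + refill);
    this.lastSec = nowSec;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // token consumed, action allowed
    }
    return false;  // bucket empty, action rejected
  }
}

// Chart 1's setup: capacity 5, fill rate 2 tokens per second.
const bucket = new TokenBucket(5, 2);
```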

&lt;p&gt;A Bursty Rate Limiter like the one from the &lt;a href="https://www.npmjs.com/package/rate-limiter-flexible" rel="noopener noreferrer"&gt;rate-limiter-flexible&lt;/a&gt; package actually consists of two Fixed Window limiters: one for a constant rate, like 2 requests per second, and another for a traffic burst allowance, like 3 requests per 5 seconds.&lt;/p&gt;
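&lt;p&gt;The composition can be sketched as two fixed window counters checked in order: a request rejected by the steady limiter may still pass on burst points. Illustrative only; the real BurstyRateLimiter has its own API and backing stores.&lt;/p&gt;

```javascript
// A fixed window counter factory, one state map per limiter.
function fixedWindow(points, durationMs) {
  const state = new Map(); // key -> { count, expiresAt }
  return (key, now) => {
    let w = state.get(key);
    if (!w || now >= w.expiresAt) {
      w = { count: 0, expiresAt: now + durationMs };
      state.set(key, w);
    }
    if (w.count >= points) return false;
    w.count += 1;
    return true;
  };
}

const steady = fixedWindow(2, 1_000); // constant rate: 2 requests per second
const burst = fixedWindow(3, 5_000);  // burst allowance: 3 requests per 5 seconds

function allowBursty(key, now) {
  // Try the steady limiter first; fall back to burst points if it rejects.
  return steady(key, now) || burst(key, now);
}
```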

&lt;h2&gt;
  
  
  Visual traffic comparison
&lt;/h2&gt;

&lt;p&gt;There are two examples on Chart 1 and Chart 2 below. The order and number of requests are the same for both. A square without background color represents an untouched token/point. A green square means a consumed token and an allowed action. A red square means a rejected action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffn3oa90tya0v0rejrtm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffn3oa90tya0v0rejrtm9.png" alt="Token Bucket with capacity 5 and fill rate 2 tokens per second."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Chart 1: Token Bucket with capacity 5 and fill rate 2 tokens per second.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fa6svgplug6wnjxrfmsg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fa6svgplug6wnjxrfmsg5.png" alt="BurstyRateLimiter with 2 points per second and burst allowance 3 requests per 5 seconds."&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Chart 2: BurstyRateLimiter with 2 points per second and burst allowance 3 requests per 5 seconds.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You may notice that the shape of allowed requests is not the same, but quite similar. Both allowed traffic bursts.&lt;br&gt;
You may also notice that BurstyRateLimiter allowed 5 actions in the 5–6 time window, while Token Bucket allowed only 2. Token Bucket didn't have enough tokens by that time. But it could have, if there had been no requests in the 4–5 time window.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmarks
&lt;/h2&gt;

&lt;p&gt;Two Node.js packages are benchmarked: &lt;a href="https://www.npmjs.com/package/hyacinth" rel="noopener noreferrer"&gt;hyacinth&lt;/a&gt;’s TokenBucket and rate-limiter-flexible’s BurstyRateLimiter.&lt;/p&gt;

&lt;p&gt;The aim of this benchmark is to compare the two algorithms, not to provide absolute truth about either of them. Results may differ between environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxpct22bw8kbq1jii5q7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxpct22bw8kbq1jii5q7y.png" alt="1000 requests per second from 1000 concurrent clients during 30 seconds"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Chart 3: 1000 requests per second from 1000 concurrent clients during 30 seconds.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fud463tbu4bin36qwlvs5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fud463tbu4bin36qwlvs5.png" alt="2000 requests per second from 1000 concurrent clients during 30 seconds"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Chart 4: 2000 requests per second from 1000 concurrent clients during 30 seconds.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Note that the Bursty Rate Limiter from rate-limiter-flexible applies the &lt;a href="https://github.com/animir/node-rate-limiter-flexible/wiki/Options#inmemoryblockonconsumed" rel="noopener noreferrer"&gt;inmemoryBlockOnConsumed&lt;/a&gt; option. It stores blocked users in process memory until the end of their time window, which speeds up processing requests. This is not only a performance boost, but also good protection against massive DDoS attacks, if your application doesn't have any yet.&lt;/p&gt;

&lt;p&gt;A Bursty Rate Limiter can be slower, since it has to make 2 requests to a store under certain conditions. It all depends on the number of unique users, the project, and the environment. But it is still fast enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Bursty Rate Limiter’s advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is more flexible in terms of the central store. A fixed window limiter implementation is relatively similar across different stores.
Packages like rate-limiter-flexible or express-rate-limit provide the ability
to choose from several stores, which simplifies changes in your application.&lt;/li&gt;
&lt;li&gt;It sets an exact traffic burst allowance. No surprises.&lt;/li&gt;
&lt;li&gt;It is easier to cache the known time window end to avoid extra requests to the store. Not a big advantage for all applications, but it could still be useful for some.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Bursty Rate Limiter’s disadvantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It can be slower on high traffic, like 5000 requests per second and more, as it sometimes makes 2 requests to a store. Especially if the in-memory block technique is not applied.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can test it yourself and compare using this &lt;a href="https://gist.github.com/animir/f84c7566784a0505ebca617e7c760adf" rel="noopener noreferrer"&gt;gist&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>ratelimit</category>
      <category>throttle</category>
      <category>tokenbucket</category>
      <category>comparison</category>
    </item>
    <item>
      <title>How to upgrade Node.js and dependencies. Results.</title>
      <dc:creator>Roman Voloboev</dc:creator>
      <pubDate>Thu, 09 Jan 2020 12:47:39 +0000</pubDate>
      <link>https://forem.com/animir/how-to-upgrade-node-js-and-dependencies-results-2i2c</link>
      <guid>https://forem.com/animir/how-to-upgrade-node-js-and-dependencies-results-2i2c</guid>
      <description>&lt;p&gt;This is a how-to article reflecting back on our upgrade process from Node.js 8 to Node.js 12 for the &lt;a href="https://snuggpro.com/?utm_source=devto" rel="noopener noreferrer"&gt;Snugg Pro&lt;/a&gt; web application. Described upgrade process is fair for any Node.js version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; We upgraded from Node.js 8 to Node.js 12 and decreased the average response time of Snugg Pro (a web application) by 40%.&lt;/p&gt;

&lt;p&gt;Node.js version 8's &lt;a href="https://nodejs.org/en/about/releases/" rel="noopener noreferrer"&gt;end-of-life was at the end of 2019&lt;/a&gt;. This was (and still is) a good moment to migrate to the latest version 12 LTS. Here at Snugg Pro we had prepared the migration in the middle of November 2019. We had tested it on staging 3 weeks before upgrading our production servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  How-to
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Check your dependencies
&lt;/h3&gt;

&lt;p&gt;There is a lot to upgrade in a mature JavaScript application. You should be smart about what is going to be upgraded and what is not.&lt;/p&gt;

&lt;h4&gt;
  
  
  Remove unused dependencies
&lt;/h4&gt;

&lt;p&gt;First of all, remove all unused dependencies. You can use a package like &lt;a href="https://www.npmjs.com/package/depcheck" rel="noopener noreferrer"&gt;depcheck&lt;/a&gt; or you could do it manually.&lt;/p&gt;

&lt;h4&gt;
  
  
  Update dependencies for your Node.js version
&lt;/h4&gt;

&lt;p&gt;The ideal case is when you only need to upgrade the packages that are incompatible with the new Node.js version.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In package.json, change the Node.js version in the engines section. It will stop installation with the wrong Node.js version.&lt;/li&gt;
&lt;li&gt;Update the Node.js version in any appropriate way. I use nvm: &lt;code&gt;nvm install 12.14.0&lt;/code&gt; and &lt;code&gt;nvm alias default 12.14.0&lt;/code&gt;. You can reinstall global packages with &lt;code&gt;--reinstall-packages-from=&amp;lt;old-node-version&amp;gt;&lt;/code&gt;. Read more about &lt;a href="https://github.com/nvm-sh/nvm" rel="noopener noreferrer"&gt;nvm&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Try to install dependencies. &lt;/li&gt;
&lt;li&gt;Fix all errors step by step. Decide for yourself whether to upgrade to the latest package version. Usually there are release notes, so you can pick the exact version that is the most suitable and not broken. It is fine to go on with not the freshest version. I upgraded babel to &lt;code&gt;6.26.0&lt;/code&gt; instead of &lt;code&gt;7.7.0&lt;/code&gt;, because the latter had conflicts with other dependencies.&lt;/li&gt;
&lt;/ol&gt;
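&lt;p&gt;For step 1, the engines section could look like the fragment below. Note that npm enforces it only when &lt;code&gt;engine-strict=true&lt;/code&gt; is set in .npmrc, while yarn checks it by default.&lt;/p&gt;

```json
{
  "engines": {
    "node": "12.14.0"
  }
}
```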

&lt;h4&gt;
  
  
  Update vulnerable dependencies
&lt;/h4&gt;

&lt;p&gt;Use &lt;code&gt;npm audit&lt;/code&gt; or &lt;code&gt;yarn audit&lt;/code&gt; to find vulnerable packages. It is strongly recommended.&lt;/p&gt;

&lt;h4&gt;
  
  
  Update dependencies to the latest version
&lt;/h4&gt;

&lt;p&gt;You may want to take this opportunity to upgrade some packages to the latest major version along the way. This may require some refactoring. For example, the &lt;code&gt;joi&lt;/code&gt; package was moved to &lt;code&gt;@hapi/joi&lt;/code&gt;. This required us to change all import statements for this package but was relatively straightforward. I removed the deprecated &lt;code&gt;bcrypt-nodejs&lt;/code&gt; package in favor of the &lt;code&gt;bcrypt&lt;/code&gt; package. It affects authorization and authentication. The stakes are higher with such an upgrade, but security is critical, so it is worth the extra hassle.&lt;/p&gt;

&lt;h4&gt;
  
  
  Make some strategic choices
&lt;/h4&gt;

&lt;p&gt;Sometimes, you may need to force an unnatural version of application dependencies. This should be done sparingly, but it is useful if you want to patch a security issue. For such cases, the &lt;code&gt;resolutions&lt;/code&gt; section of package.json helps. Read more about the resolutions feature for &lt;a href="https://yarnpkg.com/lang/en/docs/selective-version-resolutions/" rel="noopener noreferrer"&gt;yarn&lt;/a&gt; or for &lt;a href="https://github.com/rogeriochaves/npm-force-resolutions#readme" rel="noopener noreferrer"&gt;npm&lt;/a&gt;.&lt;/p&gt;
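&lt;p&gt;A resolutions entry pins a transitive dependency to a patched version across the whole tree. The package and version below are just an example; with npm, the npm-force-resolutions package applies this field via a preinstall script.&lt;/p&gt;

```json
{
  "resolutions": {
    "minimist": "^1.2.5"
  }
}
```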

&lt;h3&gt;
  
  
  Give it time
&lt;/h3&gt;

&lt;p&gt;Once all the dependencies are ready, it is time to deploy your changes to staging. No matter how sure you are or how complete your test coverage is, you should stage it and forget it for a while. The longer you can wait and test the Node.js version upgrade on staging, the better your chances of catching unexpected issues. We tested it for 3 weeks and still missed a minor bug related to error logging in one of our queue workers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing the performance of Node.js 8 and Node.js 12
&lt;/h2&gt;

&lt;p&gt;All charts are provided by New Relic.&lt;br&gt;
Let's start with the weekly service level agreement (SLA) report.&lt;/p&gt;

&lt;h3&gt;
  
  
  Weekly SLA
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz28uu0a853iob0pmrcxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz28uu0a853iob0pmrcxz.png" alt="Snugg Pro Weekly SLA" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last two columns/weeks reflect the changes after the upgrade to Node.js 12. It is easy to see that all metrics improved significantly. Apdex reaches 0.95.&lt;/p&gt;

&lt;p&gt;There will be more charts with metrics next. You may want to read more about garbage collection in Node.js &lt;a href="https://strongloop.com/strongblog/node-js-performance-garbage-collection/" rel="noopener noreferrer"&gt;here&lt;/a&gt; or in the &lt;a href="https://blog.risingstack.com/node-js-at-scale-node-js-garbage-collection/" rel="noopener noreferrer"&gt;extended version here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  GC (Garbage collector) pause time
&lt;/h3&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4ge4iofaskiaue7ek0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4ge4iofaskiaue7ek0u.png" alt="GC pause time v8" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3yyu36ujwntr1m61dq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3yyu36ujwntr1m61dq6.png" alt="GC pause time v12" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are more spikes on Node.js 8, and some of them take more than 2 seconds per minute. Node.js 12 takes more milliseconds per minute on average, but there is only one spike of more than 1 second per minute. Node.js 12 is more balanced by default.&lt;/p&gt;

&lt;h3&gt;
  
  
  GC pause frequency
&lt;/h3&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cdw8okqahl55i6r5cbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cdw8okqahl55i6r5cbj.png" alt="GC pause frequency v8" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjl8v21u6zs02217alglt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjl8v21u6zs02217alglt.png" alt="GC pause frequency v12" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Node 12 makes 2 to 3 times more garbage collection pauses. The idea here is to continue to serve clients by making more frequent but much shorter pauses, instead of stopping everything for 1 second once. &lt;/p&gt;

&lt;h3&gt;
  
  
  Memory usage
&lt;/h3&gt;

&lt;p&gt;You may already have a sense of the memory usage from the metrics above. Since Node.js 12 collects garbage more frequently by default, it uses noticeably less memory on average.&lt;/p&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxkxv15lfbajk0sndzoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxkxv15lfbajk0sndzoh.png" alt="Memory usage v8" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz2454mgf1w1l63psj8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz2454mgf1w1l63psj8o.png" alt="Memory usage v12" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Node.js 12 rarely consumes more than 220 MB, but Node.js 8 reaches 400 MB at peaks. Node.js 12 is smarter with memory by default.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maximum CPU time per tick
&lt;/h3&gt;

&lt;p&gt;If you don't know what a &lt;code&gt;tick&lt;/code&gt; is in Node.js, you can read about the &lt;a href="https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/" rel="noopener noreferrer"&gt;event loop and ticks here&lt;/a&gt;.&lt;/p&gt;
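&lt;p&gt;As a tiny illustration of tick ordering (a sketch; the labels are only for demonstration): &lt;code&gt;process.nextTick&lt;/code&gt; callbacks run right after the current operation, before timers and &lt;code&gt;setImmediate&lt;/code&gt; callbacks queued in the same pass:&lt;/p&gt;

```javascript
// Demonstrates callback ordering across one turn of the Node.js event loop.
const order = [];

setImmediate(() => order.push('immediate'));    // check phase, next loop turn
process.nextTick(() => order.push('nextTick')); // right after the current operation
order.push('sync');                             // runs first, synchronously

// Registered after the first setImmediate, so it fires once 'immediate' is pushed.
setImmediate(() => console.log(order)); // [ 'sync', 'nextTick', 'immediate' ]
```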

&lt;p&gt;With Node.js 8, we got pauses upwards of 30 seconds. This was partly due to setting &lt;code&gt;max-old-space-size&lt;/code&gt; to 440 MB for the V8 engine. Node.js would stop serving clients if the old space size reached the preset value. You can read about &lt;a href="https://strongloop.com/strongblog/node-js-performance-garbage-collection/" rel="noopener noreferrer"&gt;old space garbage collection here&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The V8 engine settings in Node.js 12 are better balanced by default. In addition, Node.js 12 brings a fresh version of the V8 engine, which results in big performance improvements. You can read the V8 engine &lt;a href="https://v8.dev/blog" rel="noopener noreferrer"&gt;release notes here&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;Moreover, Node 12 makes it easier to eliminate &lt;code&gt;babel&lt;/code&gt; on the server, since Node.js 12 supports a lot of ES2016/ES2017/ES2018/ES2019 features out of the box.&lt;/p&gt;
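&lt;p&gt;For instance, all of the following runs natively on Node 12 with no transpilation (the feature origins are noted in the comments):&lt;/p&gt;

```javascript
// ES2016: exponentiation operator
const kb = 2 ** 10; // 1024

// ES2018: object rest/spread properties
const { id, ...rest } = { id: 1, name: 'a', active: true };

// ES2019: Array.prototype.flat
const flat = [[1, 2], [3]].flat(); // [1, 2, 3]

// ES2017: async/await; ES2019: optional catch binding
async function parseSafely(text) {
  try {
    return JSON.parse(text);
  } catch { // no (err) parameter needed
    return null;
  }
}
```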

&lt;p&gt;At the risk of stating the obvious, upgrading to Node 12 will also ensure that you have access to all the features and security updates that come from running the latest LTS version of Node.js.&lt;/p&gt;

&lt;p&gt;This concludes our run-through of the Node 8 to Node 12 upgrade.&lt;/p&gt;

&lt;p&gt;Thank you for reading.&lt;br&gt;
Bye, folks.&lt;/p&gt;

&lt;p&gt;PS: Many thanks to &lt;a href="https://www.linkedin.com/in/benjamin-mailian-68160a20/" rel="noopener noreferrer"&gt;Benjamin Mailian&lt;/a&gt; – Snugg Pro Co-Founder / Head of Product for help with this article.&lt;/p&gt;

</description>
      <category>node</category>
      <category>howto</category>
      <category>upgrade</category>
    </item>
    <item>
      <title>Non-atomic increments in NodeJS or how I found a vulnerability in express-brute package.</title>
      <dc:creator>Roman Voloboev</dc:creator>
      <pubDate>Thu, 18 Apr 2019 07:05:50 +0000</pubDate>
      <link>https://forem.com/animir/non-atomic-increments-in-nodejs-or-how-i-found-a-vulnerability-in-express-brute-package-1ncj</link>
      <guid>https://forem.com/animir/non-atomic-increments-in-nodejs-or-how-i-found-a-vulnerability-in-express-brute-package-1ncj</guid>
      <description>&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Use &lt;a href="https://github.com/animir/node-rate-limiter-flexible/wiki/ExpressBrute-migration" rel="noopener noreferrer"&gt;ExpressBruteFlexible&lt;/a&gt; to migrate from vulnerable express-brute package.&lt;/p&gt;

&lt;p&gt;My aim is to provide a unified package, &lt;a href="https://github.com/animir/node-rate-limiter-flexible" rel="noopener noreferrer"&gt;rate-limiter-flexible&lt;/a&gt;, to manage expiring increments with flexible options and an API, so any task related to counting events with expiration can be done with one tool.&lt;/p&gt;

&lt;p&gt;I was looking for useful features across GitHub several months ago. There are some good packages with a similar purpose, and I went through their features and issues. Sometimes open and even closed issues contain interesting ideas. &lt;a href="https://github.com/AdamPflug/express-brute" rel="noopener noreferrer"&gt;express-brute&lt;/a&gt; has several open issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Check twice. And then once again.
&lt;/h2&gt;

&lt;p&gt;An orange warning light with its distinctive sound switched on when I read the ticket title &lt;a href="https://github.com/AdamPflug/express-brute/issues/46" rel="noopener noreferrer"&gt;global bruteforce count is not updating on more than 1000 concurrent requests&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Once all the 1000 requests are completed, the count in the express brute store doesn’t gets increased to more than 150 ever.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I checked the number of downloads of express-brute on npm. The number was not small: more than 20k downloads per week. The issue had been created more than 2 years earlier. "Ok, I trust those users," I thought, and closed the browser tab. I opened that ticket again a few days later and decided to test it on my own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Increment atomically. Especially in asynchronous environment.
&lt;/h2&gt;

&lt;p&gt;Let me explain a bit more about the express-brute package. It counts the number of requests and then, depending on its options, either allows a request or blocks it for some number of seconds. The most important option is &lt;code&gt;freeRetries&lt;/code&gt;; it limits the number of allowed requests. If a developer sets it to 5, express-brute should count 5 requests, then allow the 6th and stop the 7th, 8th, and so on during some time window. It counts requests by user name, or by a user name and IP pair. This way it protects against brute-forcing passwords.&lt;/p&gt;

&lt;p&gt;You should also know that express-brute implements a get/set approach to counting events. It can store data in several well-known databases. Here is the process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get counter data from a store on request.&lt;/li&gt;
&lt;li&gt;Check some logic, check limits, compare expiration and current dates, etc.&lt;/li&gt;
&lt;li&gt;Set new counter data depending on results from the second step.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You probably see it already. If our application processes 1000 concurrent requests, some requests are not counted, because one Set operation overwrites previous Sets. That makes it clear why somebody sees 150 instead of 1000 in a store! The slower the database, the more requests slip through uncounted. The more threads or processes in an application, the more Set queries get overwritten.&lt;/p&gt;

&lt;p&gt;But that is not all. The Node.js &lt;a href="https://nodejs.org/es/docs/guides/event-loop-timers-and-nexttick/" rel="noopener noreferrer"&gt;event loop&lt;/a&gt; makes it even more vulnerable. Let's see what happens with one Node.js process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A Get query is sent to a store, but the result is not received yet. The I/O callback is queued at the event-loop level. It may sit in that queue for more than one event-loop tick, waiting for a result from the store. There may be more requests that Get data from the store during that time. Those I/O callbacks are queued too.&lt;/li&gt;
&lt;li&gt;Let's say the first Get takes 10ms. Now our Node.js process is ready to do the math with the result. But it also receives nine other Get results for requests made during that 10ms window. And all these Get results carry the same counter value, ready to be incremented and Set.&lt;/li&gt;
&lt;li&gt;The math is done. It is brilliant. The counter is incremented. Set queries are sent to the store. The same value is set 10 times in a row. 1 is counted instead of 10.&lt;/li&gt;
&lt;/ol&gt;
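&lt;p&gt;The lost-update race above can be sketched with a simulated store (the 10 ms delay is an assumption, standing in for database latency; the function names are illustrative):&lt;/p&gt;

```javascript
// Simulated remote store: each operation takes ~10 ms, like a database round trip.
const store = { counter: 0 };
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const getCounter = async () => { await delay(10); return store.counter; };
const setCounter = async (value) => { await delay(10); store.counter = value; };

// Non-atomic get/increment/set, the way express-brute counts events.
async function consumeNonAtomic() {
  const current = await getCounter(); // every concurrent call reads the same value
  await setCounter(current + 1);      // later Sets overwrite earlier ones
}

// Atomic alternative: the store performs the increment itself in one round trip,
// the way Redis INCR (and rate-limiter-flexible) does.
async function consumeAtomic() {
  await delay(10);
  store.counter += 1;
}

const run = (async () => {
  await Promise.all(Array.from({ length: 10 }, () => consumeNonAtomic()));
  const lost = store.counter; // 1 — nine of ten attempts were never counted

  store.counter = 0;
  await Promise.all(Array.from({ length: 10 }, () => consumeAtomic()));
  return [lost, store.counter]; // [1, 10]
})();

run.then(([nonAtomic, atomic]) =>
  console.log(`non-atomic counted ${nonAtomic}, atomic counted ${atomic}`));
```

The second, atomic variant is the shape of the fix described later in the article.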

&lt;p&gt;Interested in the consequences?&lt;/p&gt;

&lt;h2&gt;
  
  
  Stop theory, give us real numbers.
&lt;/h2&gt;

&lt;p&gt;First of all, I reproduced it locally. But local tests are not amazing: they don't reflect the real asynchronous web world. "Ok, let's try something interesting and real," I thought. And I discovered that the &lt;a href="https://ghost.org" rel="noopener noreferrer"&gt;Ghost open-source project&lt;/a&gt; uses express-brute. I was excited to experiment on their services. No harm, honestly.&lt;/p&gt;

&lt;p&gt;The recipe is quite simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Load the event loop with a stream of requests. It should be slow enough to build long I/O queues. I launched a small tool that makes 1000 requests per second.&lt;/li&gt;
&lt;li&gt;Instantly try 1000 passwords.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I was using mobile internet from another continent and a laptop with eight CPU cores. I was able to make 14 password tries instead of 5. (&lt;strong&gt;Edit:&lt;/strong&gt; I was actually able to make 216 tries instead of 5 later.) "Phew, it is nothing, Roman," you may think. But it allows about 5 more tries every 10 minutes, then again 5 in 10 minutes, then 5 in 20 minutes, etc., with default Ghost settings. That is about 60 tries during the first day from one laptop over mobile internet with huge latency. 1000 computers would make 60,000 password tries per day.&lt;/p&gt;

&lt;p&gt;10 minutes is the default minimum delay in the Ghost project. The default minimum delay set by express-brute is 500 milliseconds, with a maximum delay of 15 minutes and 2 free tries. I didn't test it, but that would allow about 500 password tries per day from one computer. It is not safe! Especially if this attack is part of a bigger plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  It is important not only for banks
&lt;/h2&gt;

&lt;p&gt;Users tend to reuse the same password across several services. If you think your application is not interesting to hackers, you may be wrong. Hackers can use the weak security of one service to increase the probability of a successful attack on another service.&lt;/p&gt;

&lt;h2&gt;
  
  
  We do not have free time to fix it!
&lt;/h2&gt;

&lt;p&gt;I made it possible to migrate in a couple of minutes. There is the &lt;a href="https://github.com/animir/node-rate-limiter-flexible/wiki/ExpressBrute-migration" rel="noopener noreferrer"&gt;ExpressBruteFlexible&lt;/a&gt; middleware. It has the same logic, options, and methods, but it works with atomic increments built on top of the &lt;a href="https://github.com/animir/node-rate-limiter-flexible" rel="noopener noreferrer"&gt;rate-limiter-flexible&lt;/a&gt; package.&lt;/p&gt;

&lt;p&gt;It is simple to migrate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqa5twwk5p6a5rloedmvn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqa5twwk5p6a5rloedmvn.png" alt="Example of migration" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions or stories to tell, I'd be glad to chat or listen!&lt;/p&gt;

</description>
      <category>node</category>
      <category>security</category>
      <category>authorisation</category>
      <category>bruteforce</category>
    </item>
    <item>
<title>Safer web: why is brute-force protection of login endpoints so important?</title>
      <dc:creator>Roman Voloboev</dc:creator>
      <pubDate>Sat, 02 Feb 2019 12:20:26 +0000</pubDate>
      <link>https://forem.com/animir/safer-web-why-does-brute-force-protection-of-login-endpoints-so-important-4jdn</link>
      <guid>https://forem.com/animir/safer-web-why-does-brute-force-protection-of-login-endpoints-so-important-4jdn</guid>
      <description>&lt;p&gt;We all know why. Because it saves private data and money. But that is not all. The most important, that it makes the internet safer place over all, so users can get better experience and be happier with web services.&lt;/p&gt;

&lt;p&gt;Some time ago I created a Node.js package, &lt;a href="https://github.com/animir/node-rate-limiter-flexible"&gt;rate-limiter-flexible&lt;/a&gt;, which provides tools against DoS and brute-force attacks with many features. I dived into this topic and discovered that some JavaScript open-source projects don't care much about security. I am not sure about projects in other languages, but I guess it is the same. There are many e-commerce projects that don't care much either.&lt;/p&gt;

&lt;p&gt;I recently posted an article about brute-force protection with analysis and examples. You can read the full version &lt;a href="https://medium.com/@animirr/secure-web-applications-against-brute-force-b910263de2ab"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here is one example, first of all as a reminder that we (developers, PMs, CEOs, etc.) should take care of it. No time to write extra code? No worries, it is easy.&lt;/p&gt;

&lt;p&gt;The main idea of the protection is risk minimisation. The login endpoint limits the number of allowed requests and blocks the extra ones. &lt;br&gt;
We should create 2 different limiters:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first counts the number of consecutive failed attempts and allows a maximum of 10 per Username+IP pair. &lt;/li&gt;
&lt;li&gt;The second blocks an IP for 1 day after 100 failed attempts in one day.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;redis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RateLimiterRedis&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate-limiter-flexible&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;redisClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;enable_offline_queue&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;maxWrongAttemptsByIPperDay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;maxConsecutiveFailsByUsernameAndIP&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;limiterSlowBruteByIP&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;RateLimiterRedis&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redisClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;keyPrefix&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;login_fail_ip_per_day&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;points&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;maxWrongAttemptsByIPperDay&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;blockDuration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Block for 1 day, if 100 wrong attempts per day&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;limiterConsecutiveFailsByUsernameAndIP&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;RateLimiterRedis&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redisClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;keyPrefix&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;login_fail_consecutive_username_and_ip&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;points&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;maxConsecutiveFailsByUsernameAndIP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Store number for 90 days since first fail&lt;/span&gt;
  &lt;span class="na"&gt;blockDuration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;365&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Block for infinity after consecutive fails&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getUsernameIPkey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;loginRoute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ipAddr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;remoteAddress&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;usernameIPkey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getUsernameIPkey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ipAddr&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;resUsernameAndIP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resSlowByIP&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="nx"&gt;limiterConsecutiveFailsByUsernameAndIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;usernameIPkey&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;limiterSlowBruteByIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ipAddr&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;retrySecs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// Check if IP or Username + IP is already blocked&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resSlowByIP&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;resSlowByIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;remainingPoints&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;retrySecs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resSlowByIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;msBeforeNext&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resUsernameAndIP&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;resUsernameAndIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;remainingPoints&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;retrySecs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resUsernameAndIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;msBeforeNext&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;retrySecs&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Retry-After&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;retrySecs&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;429&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Too Many Requests&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;authorise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isLoggedIn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Consume 1 point from limiters on wrong attempt and block if limits reached&lt;/span&gt;
      &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;promises&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;limiterSlowBruteByIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;consume&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ipAddr&lt;/span&gt;&lt;span class="p"&gt;)];&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="c1"&gt;// Count failed attempts by Username + IP only for registered users&lt;/span&gt;
          &lt;span class="nx"&gt;promises&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;limiterConsecutiveFailsByUsernameAndIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;consume&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;usernameIPkey&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;promises&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;email or password is wrong&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rlRejected&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rlRejected&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;rlRejected&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Retry-After&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rlRejected&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;msBeforeNext&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;429&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Too Many Requests&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isLoggedIn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resUsernameAndIP&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;resUsernameAndIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;consumedPoints&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Reset on successful authorisation&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;limiterConsecutiveFailsByUsernameAndIP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;usernameIPkey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;authorized&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/login&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;loginRoute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;How and when to unblock a user is up to you: the &lt;code&gt;delete(key)&lt;/code&gt; method removes a key's counter, effectively lifting the block.&lt;/p&gt;
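&lt;p&gt;As a minimal sketch, a hypothetical admin-only route (the route path and the &lt;code&gt;limiter&lt;/code&gt; instance below are illustrative, not part of the library) could lift a block on demand:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const { RateLimiterMemory } = require('rate-limiter-flexible');

// Assumed limiter: 5 points per 10 minutes (adjust to your setup)
const limiter = new RateLimiterMemory({ points: 5, duration: 600 });

// Hypothetical admin endpoint that unblocks a key
// (e.g. a username_IP pair) by deleting its counter.
// delete(key) resolves to true if the key existed.
app.post('/admin/unblock', async (req, res) =&gt; {
  const removed = await limiter.delete(req.body.key);
  res.end(removed ? 'unblocked' : 'key not found');
});
&lt;/code&gt;&lt;/pre&gt;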

&lt;p&gt;More examples are available in &lt;a href="https://medium.com/@animirr/brute-force-protection-node-js-examples-cd58e8bd9b8d"&gt;this article&lt;/a&gt; and in the &lt;a href="https://github.com/animir/node-rate-limiter-flexible/wiki"&gt;official docs&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>security</category>
      <category>bruteforce</category>
      <category>node</category>
      <category>javascript</category>
    </item>
    <item>
      <title>GraphQL - postponed technical debt or silver bullet?</title>
      <dc:creator>Roman Voloboev</dc:creator>
      <pubDate>Fri, 01 Feb 2019 05:09:34 +0000</pubDate>
      <link>https://forem.com/animir/graphql---postponed-technical-debt-or-silver-bullet-501h</link>
      <guid>https://forem.com/animir/graphql---postponed-technical-debt-or-silver-bullet-501h</guid>
      <description>&lt;p&gt;I've never worked with GraphQL long enough, but I get kind of feeling, it gives advantages on the first year of developing, but it takes much more than custom RESTful API later.&lt;/p&gt;

&lt;p&gt;I'm curious whether anybody has statistics or observations from projects aged 3-5 years. &lt;br&gt;
Are there any problems with refactoring? &lt;br&gt;
Does the code become too complex, making new features take much longer to implement?&lt;/p&gt;

</description>
      <category>question</category>
      <category>graphql</category>
      <category>technicaldebt</category>
    </item>
  </channel>
</rss>
