<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ez Pz Developement</title>
    <description>The latest articles on Forem by Ez Pz Developement (@ezpzdevelopement).</description>
    <link>https://forem.com/ezpzdevelopement</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F712068%2Fbb8288a5-959e-45c7-b225-13bc616393c7.png</url>
      <title>Forem: Ez Pz Developement</title>
      <link>https://forem.com/ezpzdevelopement</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ezpzdevelopement"/>
    <language>en</language>
    <item>
      <title>Notes about Chapter 02 of Web Scalability For Startup Engineers</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Fri, 09 Aug 2024 21:54:16 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/notes-about-chapter-02-of-web-scalability-for-startup-engineers-579o</link>
      <guid>https://forem.com/ezpzdevelopement/notes-about-chapter-02-of-web-scalability-for-startup-engineers-579o</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AQJGvobBeTTsrMDBLt2DCaA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AQJGvobBeTTsrMDBLt2DCaA.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on my understanding of this chapter, the author is expressing the idea that sometimes the need to scale forces us to break good design principles. However, it’s important to first understand these principles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Some Good Software Design Principles:
&lt;/h3&gt;

&lt;p&gt;This is a list of good design principles that a software engineer should understand:&lt;/p&gt;

&lt;h4&gt;
  
  
  Simplicity:
&lt;/h4&gt;

&lt;p&gt;Keep it simple, but not too simple. How simple should we make things? We need to consider for whom we are designing and the delivery deadline.&lt;/p&gt;

&lt;p&gt;Simplicity is not about how quickly we can implement a solution, but about how easy it is for another software engineer to use our solution and to understand the system as it grows larger and more complex.&lt;/p&gt;

&lt;p&gt;To apply this, a software engineer needs experience with different tools and languages; he also needs to revisit his old solutions, review them, and try to fix them. Finding a mentor or working with people who value these principles will make us progress faster.&lt;/p&gt;

&lt;p&gt;Below are the steps to make our solutions simpler:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hide complexity and build abstractions:&lt;/strong&gt; we should achieve local simplicity: a single class should be easy to understand on its own, and the same should hold as we zoom out to modules and then to the entire app. Complexity is about how many dependencies a single component (class, module, …) has on other components. Once we start seeing the bigger picture, we need to worry about how our components interact rather than how they fulfill their duties; in larger systems we can add services, where each one is responsible for a specific set of functionalities and exposes a higher level of abstraction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoid over-engineering:&lt;/strong&gt; do not spend time building overcomplicated, imaginary designs that no one will use; we should care about simplicity and the most common scenarios, think about tradeoffs, and ask whether we will really need this.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Try TDD:&lt;/strong&gt; it allows us to reduce the amount of useless functionality, and the tests act as documentation for our code by showing its expected results and behavior. TDD also causes a mental shift: we start thinking about how a component is going to be used by other components before implementing its internal logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learn from models of simplicity:&lt;/strong&gt; study other good software and learn from its design.&lt;/li&gt;
&lt;/ul&gt;
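&lt;p&gt;The TDD bullet above can be illustrated with a minimal sketch (the &lt;code&gt;slugify&lt;/code&gt; function and its behavior are hypothetical, not from the book): the test is written first, so it documents how other components are expected to use the code.&lt;/p&gt;

```python
# Test-first sketch: this test was written before the implementation,
# so it doubles as documentation of the expected behavior.
def test_slugify_joins_words_with_hyphens():
    assert slugify("Web Scalability For Startup Engineers") == \
        "web-scalability-for-startup-engineers"

# The implementation follows, shaped by how the test wants to call it.
def slugify(title):
    # Lowercase the title and join its words with hyphens.
    return "-".join(title.lower().split())

test_slugify_joins_words_with_hyphens()
```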

&lt;h4&gt;
  
  
  Loose Coupling:
&lt;/h4&gt;

&lt;p&gt;We should keep coupling between components as low as possible. Coupling is how much two components depend on each other: the less coupled the components, the less they know about and depend on each other; no coupling means that two components do not know about each other at all.&lt;/p&gt;

&lt;p&gt;Keeping coupling low is important for our ability to scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The higher the coupling, the higher the number of changes/bugs we might introduce into other components that depend on the internal implementation of a specific component.&lt;/li&gt;
&lt;li&gt;We will be able to hire more engineers as they don’t have to know the full details to work on specific parts of the system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Promoting loose coupling: we should carefully manage dependencies between modules, classes, and applications.&lt;/p&gt;

&lt;p&gt;Applications are the highest-level building blocks; you might use an application for accounting, asset management, or file storage.&lt;/p&gt;

&lt;p&gt;Each application contains a set of modules (for example: PDF rendering, credit card processing, a document portal) that other team members can work on independently; if we cannot do this, the application probably has tight coupling problems.&lt;/p&gt;

&lt;p&gt;Each module consists of classes; a class is the smallest unit of abstraction. We should keep our functions private or protected as much as possible: the less other classes know about our class, the less they are aware of how it does its job. Private functions can be refactored easily because they are called only within the class; for protected and public functions we should search the code before refactoring them.&lt;/p&gt;

&lt;p&gt;We should share only the minimum information and functionality that satisfies the client's needs; sharing too much too early increases coupling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to avoid unnecessary coupling:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hide as much as we can and avoid using getters and setters without needing them, as these are often introduced just to provide good IDE support.&lt;/p&gt;

&lt;p&gt;Avoid designs where a client needs to call methods of a class in a certain order for the work to be done.&lt;/p&gt;

&lt;p&gt;Do not allow circular dependencies between layers, classes, and modules; in diagrams, the relations between our components should form a directed graph without cycles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Models of loose coupling:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A good example of loose coupling is UNIX command-line programs, where commands like grep, sort, and awk can be combined to perform more complex tasks.&lt;/p&gt;

&lt;p&gt;Simple Logging Facade for Java (SLF4J) is another: it acts as a layer that hides the complexity of the underlying logging framework from its users.&lt;/p&gt;

&lt;p&gt;Read books that discuss this subject.&lt;/p&gt;

&lt;h4&gt;
  
  
  DRY:
&lt;/h4&gt;

&lt;p&gt;Things we should avoid to ensure we are applying this principle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Following inefficient processes:&lt;/strong&gt; we should always try to get feedback, apply continuous improvement and incremental change, and repeat; we should not have the mentality of “we always did it this way” or “this is how we do it”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of automation:&lt;/strong&gt; wasting time deploying manually, configuring servers, writing documentation, and testing by hand; these tasks can be simple at the start, but they become hard and time consuming as the software gets more complex.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not invented here&lt;/strong&gt; (reinventing the wheel): building things that already exist, which wastes our time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copy-paste programming:&lt;/strong&gt; when code that does the same thing is needed in another part of the system, we just copy it; we then face problems like the same bug occurring in multiple places. We can adopt a rule that we never copy-paste code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;“I won't need it again,&lt;/strong&gt; so let's just hack it quickly”: we might work on some code quickly, thinking we will never need to come back to it, but then a problem occurs and we have to return to it, finding messy, inefficient, untested, and unmaintainable code waiting for us. We should practice refactoring, inheritance, composition, and design patterns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A NASA white paper shows that 10 to 25 percent of large systems' code is the result of copy-paste programming. At a higher level of abstraction, we can create a common service that can be used by other services across the system.&lt;/p&gt;

&lt;p&gt;If a library or component is easy to use, everyone will use it; if not, they won't, and we might end up with duplication or hacks.&lt;/p&gt;

&lt;h4&gt;
  
  
  Coding To Contract
&lt;/h4&gt;

&lt;p&gt;Coding to contract (or to interface) means deciding what the client is allowed to see and exposing only what the client needs.&lt;/p&gt;

&lt;p&gt;A contract is a set of rules that the provider of a functionality agrees to fulfill and that the client will depend on, without knowing how the functionality is implemented. As long as we keep the contract intact, clients and providers can be modified independently.&lt;/p&gt;

&lt;p&gt;When designing code we should create explicit contracts and depend on the contract whenever possible instead of on implementation details.&lt;/p&gt;

&lt;p&gt;We should think of the contract as a legal document, and in a legal document we should be detail-oriented: if our contract covers more than we intended (in software, if our contract exposes too many details), we need to renegotiate every change with our clients.&lt;/p&gt;

&lt;p&gt;When we start building systems, we should first define what features our client needs and then expose the minimum details required to achieve them.&lt;/p&gt;

&lt;p&gt;HTTP is a good example of coding to contract because it gives different applications the possibility to communicate through a specific interface (decoupling): web browsers, cache servers (Varnish), and web servers (nginx, Apache) can communicate with each other while depending on the same contract (an example of this can be found in figure 2.5).&lt;/p&gt;
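&lt;p&gt;A small sketch of coding to contract (the &lt;code&gt;EmailSender&lt;/code&gt; names here are made up for illustration): the client depends only on the interface, so the provider can change its implementation freely.&lt;/p&gt;

```python
from abc import ABC, abstractmethod

# The contract: a set of rules the provider agrees to fulfill.
class EmailSender(ABC):
    @abstractmethod
    def send(self, to, subject, body):
        """Deliver a message; how it is done is up to the implementation."""

# One possible provider; the client never sees these details.
class SmtpEmailSender(EmailSender):
    def send(self, to, subject, body):
        return f"smtp: sent {subject!r} to {to}"

# Client code depends on the contract, not on SmtpEmailSender.
def notify(sender, user_email):
    return sender.send(user_email, "Welcome", "Hello!")
```

&lt;p&gt;As long as the contract stays intact, &lt;code&gt;notify&lt;/code&gt; and the senders can evolve independently.&lt;/p&gt;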

&lt;h4&gt;
  
  
  Draw Diagrams
&lt;/h4&gt;

&lt;p&gt;Diagrams are worth a thousand words; even when we don't have much time, we should take the time to design the architecture.&lt;/p&gt;

&lt;p&gt;If it is difficult for us to draw diagrams, we can follow this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Draw diagrams of what we have already built, until we get comfortable with diagrams&lt;/li&gt;
&lt;li&gt;Then start drawing diagrams while coding and working on certain features&lt;/li&gt;
&lt;li&gt;Then start trying to do up-front design (design first, code last)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Suppose we want to design a circuit breaker component: a design pattern that prevents our system from failing by first checking whether a certain system is available before performing an action.&lt;/p&gt;

&lt;p&gt;So we can do the following to design it:&lt;/p&gt;

&lt;p&gt;1- Create a draft of the interface (Listing 2.1).&lt;/p&gt;

&lt;p&gt;2- Draft the client code; it can be a unit test or just some client code that does not have to compile (Listing 2.2).&lt;/p&gt;

&lt;p&gt;3- Create a draft of the sequence diagram.&lt;/p&gt;

&lt;p&gt;4- Create a draft of the class diagram.&lt;/p&gt;

&lt;p&gt;With this approach we can see the design from different angles and avoid producing an unrealistic design.&lt;/p&gt;
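&lt;p&gt;As a hedged sketch of steps 1 and 2 above (the book's Listings 2.1 and 2.2 are its own drafts; the method names and thresholds below are assumptions):&lt;/p&gt;

```python
import time

# Draft of the circuit breaker interface (step 1).
class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def is_available(self):
        if self.opened_at is None:
            return True
        # Allow a retry once the reset timeout has elapsed.
        return time.monotonic() - self.opened_at >= self.reset_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()

# Draft client code (step 2): check availability before acting.
breaker = CircuitBreaker()
if breaker.is_available():
    try:
        response = None  # the call to the remote service would go here
        breaker.record_success()
    except OSError:
        breaker.record_failure()
```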

&lt;p&gt;There are three very important diagrams: the use case diagram, the class diagram, and the module diagram.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use case diagrams:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use case diagrams show the users of the system and the operations they perform; they capture the business requirements. They can also show interactions with other systems, such as APIs or a task scheduler.&lt;/p&gt;

&lt;p&gt;We should keep them simple so we can maintain readability and maintainability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Class diagram:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Class diagrams are the best way to show coupling between classes: simply look at how many dependencies a node has. They also show a module's structure and the relations between its classes and interfaces.&lt;/p&gt;

&lt;p&gt;Interfaces should always depend on interfaces, never on concrete classes.&lt;/p&gt;

&lt;p&gt;Classes on the other hand should depend on interfaces as much as possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Module diagram:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A module diagram is a zoomed-out class diagram; it shows the interactions between modules. A module can be a package or any logical part responsible for a certain functionality.&lt;/p&gt;

&lt;p&gt;A module diagram focuses on the parts that are relevant to the functionality we want to document; when a system grows larger, it is better to create a few separate diagrams to keep each one simple, easy to remember, and easy to recreate.&lt;/p&gt;

&lt;h4&gt;
  
  
  Single Responsibility
&lt;/h4&gt;

&lt;p&gt;Single responsibility reduces complexity and keeps things simple. Some guidelines to promote it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep classes below two to four screens of code.&lt;/li&gt;
&lt;li&gt;Ensure that a class does not depend on more than five classes or interfaces.&lt;/li&gt;
&lt;li&gt;Ensure that a class has a specific goal and purpose.&lt;/li&gt;
&lt;li&gt;Summarize the responsibility of the class and put it at the top of the class; if you find it hard to summarize, you are probably breaking the rule.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An example: if we are adding an email verification feature to our software, we could add the verification to the code that creates the user, but that makes the code more complex and we cannot reuse the verification elsewhere; separating the validation logic into its own class solves this.&lt;br&gt;&lt;br&gt;
A good way to learn more about this is to explore design patterns (strategy, iterator, proxy, and adapter) and to learn more about domain-driven design.&lt;/p&gt;
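&lt;p&gt;A minimal sketch of that separation (the class names and the regex are illustrative, not from the book): the validation logic lives in its own class and is simply handed to the user-creation code.&lt;/p&gt;

```python
import re

# The email check has one responsibility and can be reused anywhere.
class EmailValidator:
    PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def is_valid(self, address):
        return bool(self.PATTERN.match(address))

# User creation stays simple; it only asks the validator a yes/no question.
class UserService:
    def __init__(self, validator):
        self.validator = validator

    def create_user(self, email):
        if not self.validator.is_valid(email):
            raise ValueError("invalid email address")
        return {"email": email}
```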

&lt;h4&gt;
  
  
  Open Closed Principle:
&lt;/h4&gt;

&lt;p&gt;Every time we write code with the intent to extend it rather than modify it later, we are using this principle: classes should be open for extension and closed for modification.&lt;/p&gt;

&lt;p&gt;The main reason is to increase flexibility and make future changes cheaper.&lt;/p&gt;

&lt;p&gt;An example: we have to implement sorting, with a feature to sort employees, and we put the solution inside a class called SortingEmployees with a sort method. This causes problems when we want to do the same thing for cities: we are left with two dirty solutions, either extending SortingEmployees (but sorting cities should not have to know about SortingEmployees) or copying code out of the employees class and pasting it into SortingCities.&lt;/p&gt;

&lt;p&gt;A solution is to break the problem into smaller ones by creating Sorter and Comparator interfaces; a new EmployeeComparator class then implements the Comparator interface, and the Sorter uses whatever Comparator instance it is given.&lt;/p&gt;
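&lt;p&gt;A sketch of that split, assuming hypothetical &lt;code&gt;Sorter&lt;/code&gt; and &lt;code&gt;Comparator&lt;/code&gt; interfaces: the Sorter stays closed for modification, and sorting cities later only requires writing a new Comparator.&lt;/p&gt;

```python
from abc import ABC, abstractmethod
from functools import cmp_to_key

# New orderings are added by writing new Comparators, not by touching Sorter.
class Comparator(ABC):
    @abstractmethod
    def compare(self, a, b):
        """Return a negative, zero, or positive number."""

class EmployeeNameComparator(Comparator):
    def compare(self, a, b):
        # Orders employees by name; Sorter knows nothing about employees.
        return (a["name"] > b["name"]) - (b["name"] > a["name"])

class Sorter:
    def sort(self, items, comparator):
        return sorted(items, key=cmp_to_key(comparator.compare))
```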

&lt;p&gt;MVC frameworks, especially the Spring framework, are a good example of this: if a framework is well designed, you don't have to update the framework code to implement features; you just extend a component or create a new one based on it. In Spring, most classes do not even have to know about the existence of the Spring MVC framework.&lt;/p&gt;

&lt;h4&gt;
  
  
  Dependency injection
&lt;/h4&gt;

&lt;p&gt;Dependency injection reduces coupling and promotes the open-closed principle.&lt;/p&gt;

&lt;p&gt;A class references the objects it depends on, but it is not allowed to know the referenced objects' implementation details or how they are assembled.&lt;/p&gt;

&lt;p&gt;Dependency injection switches from letting a class create or inherit its dependencies (a pull approach) to handing the objects directly to the class (a push approach), decoupling the class from its dependencies and making it easier to test.&lt;/p&gt;

&lt;p&gt;To understand this better, an example of a CD reader and a CD can be used; figures 2.13 and 2.14 show examples of this.&lt;/p&gt;

&lt;p&gt;Dependency injection gives the class fewer responsibilities, making it dumber and simpler.&lt;/p&gt;

&lt;p&gt;Needing to know only the contract of the injected object, the class can focus on its own responsibility.&lt;/p&gt;
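&lt;p&gt;A tiny sketch of the push approach (the player/disc names are illustrative): the dependency is handed in from outside, so tests can substitute a fake without touching the class.&lt;/p&gt;

```python
class CompactDisc:
    def read(self):
        return "track data"

class Player:
    def __init__(self, disc):
        self.disc = disc  # pushed in from outside, not created here

    def play(self):
        return self.disc.read()

class FakeDisc:
    def read(self):
        return "silence"

# Production wiring vs. test wiring, with no change to Player itself.
assert Player(CompactDisc()).play() == "track data"
assert Player(FakeDisc()).play() == "silence"
```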

&lt;h4&gt;
  
  
  Inversion of control
&lt;/h4&gt;

&lt;p&gt;Dependency injection is included here; inversion of control is a larger principle that can be applied everywhere, at all levels of abstraction.&lt;/p&gt;

&lt;p&gt;IoC means removing some responsibilities from a class to make it simpler and less coupled to other parts of the system.&lt;/p&gt;

&lt;p&gt;You don’t have to know who will use or create your objects, how or when.&lt;/p&gt;

&lt;p&gt;It is used in a lot of frameworks: the IoC framework looks at requests and figures out which classes should be instantiated and which services and components they depend on (requests contain data like the URL, headers, and cookies that the IoC framework will use).&lt;/p&gt;

&lt;p&gt;It can also be called the “we call you, you don't call us” principle: the classes do not know who is using them, when their instances are created, or how their dependencies are put together; the classes become like plugins.&lt;/p&gt;

&lt;p&gt;Using frameworks will reduce the local complexity of our app.&lt;/p&gt;

&lt;p&gt;The factors of an IoC framework (figure 2.16):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can create plugins for your framework&lt;/li&gt;
&lt;li&gt;Each plugin is independent and can be removed or added at anytime&lt;/li&gt;
&lt;li&gt;The framework can auto-detect these plugins, or there is a way to configure which plugins should be used&lt;/li&gt;
&lt;li&gt;The framework defines the interfaces for each plugin and should not be coupled to the plugins themselves.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An IoC framework is like having fish in a tank: you decide how many fish you want in it and when to feed them, so the fish are the plugins and you are the IoC framework.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design for scale
&lt;/h3&gt;

&lt;p&gt;This is a difficult thing to master; we should be careful to balance designing for scale against overengineering.&lt;/p&gt;

&lt;p&gt;Most startups (90%) fail and never need to scale; another 9% will never really need horizontal scalability; only 1% will.&lt;/p&gt;

&lt;p&gt;Similar to the coupling and complexity principles discussed above, ways of scaling can be categorized into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adding more clones: adding indistinguishable components&lt;/li&gt;
&lt;li&gt;Functional partitioning: Dividing the system into smaller subsystems based on functionality&lt;/li&gt;
&lt;li&gt;Data partitioning: keeping a subset of the data in each machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Adding more clones:&lt;/strong&gt; while building a system, the easiest way to scale it is to design it so that we can add more clones; a clone is an exact copy of a current server or component, and sending a random request to any clone server should return the same result (figures 2.17 and 2.18).&lt;/p&gt;

&lt;p&gt;We need to pay attention to where we keep state and sync it between the servers.&lt;/p&gt;

&lt;p&gt;Scaling with clones works best for stateless servers or services (services that do not keep any local state).&lt;/p&gt;

&lt;p&gt;The problem with this scaling technique is syncing data between stateful services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Functional partitioning:&lt;/strong&gt; The main idea is to look at the parts that work on the same functionality and create a separate subsystem of them.&lt;/p&gt;

&lt;p&gt;In terms of infrastructure, it is the separation of our data center into multiple server types, for example: message queue servers, cache servers, web servers, load balancers…&lt;/p&gt;

&lt;p&gt;It means dividing the system into independent services. It helps us apply the coding-to-contract principle, it is often used in the web services layer, and it is one of the foundations of service-oriented architecture; this strategy also allows us to analyze the needs of each service independently and scale them separately.&lt;/p&gt;

&lt;p&gt;It is also common to break the app into a database service layer and web service layer.&lt;/p&gt;

&lt;p&gt;It is common in large companies to separate the app into smaller independent services, where each team can work on a service separately, analyze it, and try to scale it.&lt;/p&gt;

&lt;p&gt;The drawbacks of this approach are that it requires more management and effort to start with, and we cannot keep rewriting our system and dividing it endlessly; it also might not solve our scalability problem, since there may be other issues, like architecture or optimization problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data partitioning:&lt;/strong&gt; a manifestation of the share-nothing principle, where each service has its own subset of the data; it has complete control over its internal state without needing to sync state with others, and there is no need for locking (locking is required when multiple servers try to access and modify the same resources; we must handle concurrency to avoid data corruption).&lt;/p&gt;

&lt;h4&gt;
  
  
  Design for self healing
&lt;/h4&gt;

&lt;p&gt;Because our systems might fail at any moment, we should design them with high availability and self-healing in mind.&lt;/p&gt;

&lt;p&gt;We want to make our system always available for our users, even when it is experiencing partial failure or during maintenance (a system is considered available as long as it performs its functions as expected from the client's perspective).&lt;/p&gt;

&lt;p&gt;There is no single measurement of availability, but it can be expressed as a number of nines: if a system is available 99% of the time, it will be down for 365 days * 0.01 = 3.65 days per year; at 99.999% availability it is down only about 5 minutes per year.&lt;/p&gt;
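&lt;p&gt;The arithmetic can be checked directly:&lt;/p&gt;

```python
# Verifying the "number of nines" downtime figures from the text.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability):
    # Fraction of the year the system is unavailable, in minutes.
    return MINUTES_PER_YEAR * (1 - availability)

print(downtime_minutes_per_year(0.99) / (24 * 60))  # about 3.65 days at 99%
print(downtime_minutes_per_year(0.99999))           # about 5.3 minutes at 99.999%
```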

&lt;p&gt;The larger the system gets, the higher the chance of failure, because we communicate with more services, data stores, and components as the system grows; failure needs to be treated as the norm during design, not as a special condition.&lt;/p&gt;

&lt;p&gt;Netflix uses a system called Chaos Monkey that causes the failure of some components, so the team can test the availability of their software.&lt;/p&gt;

&lt;p&gt;Crash-only means that whenever a system fails and then starts operating again, it must be able to detect the failure and fix any broken data.&lt;/p&gt;

&lt;p&gt;To ensure high availability, we should remove single points of failure and ensure graceful failure, which means our system should switch to a backup/clone component without impacting the user or causing loss or corruption of data.&lt;/p&gt;

&lt;p&gt;We can draw a diagram of all our system components and ask ourselves what would happen if we shut down one service; we can then discuss the possibility of adding redundancy and whether it would be cheap, and we should also prepare a disaster recovery plan.&lt;/p&gt;

&lt;p&gt;Once we have achieved a high level of availability and graceful failure handling, we can start thinking about designing a self-healing system, one that can fix its own issues without human intervention; this is hard and expensive to build.&lt;/p&gt;

&lt;p&gt;An example of a self-healing system is Cassandra: when a node fails, the cluster stops routing requests to that node (this is the only stage where users might experience some failures or downtime); once the node is detected as failed, clients continue reading data from other nodes in the cluster that provide redundancy for the failed node; when the failed node comes back, the system automatically provides it with the missing data.&lt;/p&gt;

&lt;p&gt;Mean time to recovery measures how fast we can detect, repair, and recover from a failure; the lower it is, the higher the availability of our system. Availability can be measured with the equation: mean time to failure / (mean time to failure + mean time to recovery).&lt;/p&gt;
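&lt;p&gt;The equation with some hypothetical numbers:&lt;/p&gt;

```python
# The availability equation from the text: MTTF / (MTTF + MTTR).
def availability(mttf_hours, mttr_hours):
    return mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical figures: a component that runs 999 hours between failures
# and takes 1 hour to recover is available 99.9% of the time.
print(availability(999, 1))  # 0.999
```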

&lt;h3&gt;
  
  
  Summary:
&lt;/h3&gt;

&lt;p&gt;The cleanest solution is not always the best solution for the business if it costs more time and money and needs more management; we need to think and make the best possible decisions for the business, making tradeoffs in terms of scalability, flexibility, high availability, cost, and time to market.&lt;/p&gt;

&lt;p&gt;Don’t hesitate to challenge the rules, but it’s essential to understand the tools, basics, and principles of our craft first. This way, we can make informed decisions and balance tradeoffs effectively.&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>systemdesignintervie</category>
      <category>programming</category>
    </item>
    <item>
      <title>Notes about The Chapter 01 of Web Scalability For Startup Engineers</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Fri, 14 Jun 2024 22:46:44 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/notes-about-the-chapter-01-of-web-scalability-for-startup-engineers-28oc</link>
      <guid>https://forem.com/ezpzdevelopement/notes-about-the-chapter-01-of-web-scalability-for-startup-engineers-28oc</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AOy5mflj3DmODSQIEEfv7wA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AOy5mflj3DmODSQIEEfv7wA.jpeg" alt="Notes about The Chapter 01 of Web Scalability For Startup Engineers"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notes about The Chapter 01 of Web Scalability For Startup Engineers (Image by Herbert Aust from Pixabay)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Last week, I wanted to learn more about system design and scalability. So, I started by watching YouTube videos and reading LinkedIn posts to get a general idea. I also looked into book reviews, and it seems like “Designing Data-Intensive Applications” is a popular choice among many people.&lt;/p&gt;

&lt;p&gt;I wanted to begin with something simpler since “Designing Data-Intensive Applications” covers a lot of detailed topics. After talking to a friend and hearing recommendations from others in our field, I found out about “Web Scalability for Startup Engineers.” This week, I started with the first chapter, which gave an overview of the topics the book will cover.&lt;/p&gt;

&lt;h3&gt;
  
  
  Notes:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What is Scalability:
&lt;/h4&gt;

&lt;p&gt;Scalability is the ability of our system to handle more data, requests, users, and transactions; we must be able to scale up and down in a cheap and quick way.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scalability Dimensions:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Handling more data:&lt;/strong&gt; storing more content; with the popularity of data analytics and big data, this plays an important role.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handling higher concurrency levels:&lt;/strong&gt; how many users (open connections, active threads, messages being processed at the same time) can our system serve at once; how do we work around servers having few processing units, and how do we execute code in parallel while ensuring data consistency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handling higher interaction rates:&lt;/strong&gt; how often clients exchange messages with the server; the system must respond quicker, with faster reads and writes and higher concurrency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Scalability vs Performance:
&lt;/h4&gt;

&lt;p&gt;Scalability is related to performance: scalability determines the capacity to handle more users, while performance refers to how swiftly the system handles requests under load, such as the speed at which it can respond to 100 user requests every 5 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability also applies to team members: the more people on a team, the harder the communication.&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  DNS:
&lt;/h4&gt;

&lt;p&gt;DNS is usually hosted on a separate server; the customer connects to it, the DNS server returns the IP address of the domain, and the customer then starts requesting content from our server.&lt;/p&gt;

&lt;h4&gt;
  
  
  VPS:
&lt;/h4&gt;

&lt;p&gt;A VPS is a virtual machine for rent, hosted alongside other virtual machines on one physical machine; this approach is not good.&lt;/p&gt;

&lt;h4&gt;
  
  
  Single Server configuration:
&lt;/h4&gt;

&lt;p&gt;This option is better, but we might switch away from it if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our user base grows, and it takes more CPU and I/O to serve our users&lt;/li&gt;
&lt;li&gt;The database grows as we add a lot of data; queries take more time to execute, so we need more CPU and I/O power.&lt;/li&gt;
&lt;li&gt;We add new features that make users interact more with the system, which requires more resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How Can We Scale Vertically:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1- Adding more I/O capacity by adding more hard drives in RAID arrays:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A RAID array is a set of hard drives or solid-state drives linked together to form one logical storage unit, protecting data in case of failures. Here are some popular RAID configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RAID 0 (no drive can fail): the data is split evenly between two drives&lt;/li&gt;
&lt;li&gt;RAID 1 (1 drive can fail): mirroring; at least two drives hold an exact copy of the data, so if one disk fails the others keep working&lt;/li&gt;
&lt;li&gt;RAID 5 (1 drive can fail): striping with parity; requires at least 3 drives, splitting the data across multiple drives but also distributing parity across them&lt;/li&gt;
&lt;li&gt;RAID 6 (2 drives can fail): striping with double parity; similar to RAID 5, but the parity is written twice&lt;/li&gt;
&lt;li&gt;RAID 10 (up to one drive in each mirrored pair can fail): combines RAID 1 and RAID 0; mirrors all data on secondary drives and uses striping across each set of drives to speed up data transfers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Parity is a value, calculated from the other drives in the array and stored on a drive, used to reconstruct the data if one of the drives fails.&lt;/p&gt;
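&lt;p&gt;A toy illustration of parity as used in RAID 5-style striping (single bytes stand in for whole blocks): parity is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors.&lt;/p&gt;

```python
# Two data blocks and their parity, as a RAID controller would compute it.
drive_a = 0b10110100
drive_b = 0b01101001
parity = drive_a ^ drive_b          # stored on a third drive

# Drive B fails; XOR the surviving blocks to reconstruct its contents.
rebuilt_b = drive_a ^ parity
assert rebuilt_b == drive_b
```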

&lt;p&gt;&lt;strong&gt;2- Improve I/O access time by switching to solide state drives (SSD):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An SSD is faster than an HDD, but for databases the gain is smaller, because many databases like MySQL are optimized for sequential reads, and databases like Cassandra go further and rely almost entirely on sequential I/O.&lt;/p&gt;

&lt;p&gt;Sequential disk operations are like reading a book page by page, while random disk I/O is like picking a random page each time; an SSD is much faster than an HDD for random access, and also for sequential access, because it has no physical head.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3- Reducing I/O by increasing RAM:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;More caching space and more memory for the app to work in; this is especially good for databases, because they cache frequently accessed data in RAM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4- Improve network throughput:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By upgrading network interfaces, network adapters, or providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5- Switching to a more powerful server:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A server with more virtual cores, say 12 or 24 threads (virtual cores), means processes do not have to share a CPU, and the CPU performs fewer context switches.&lt;/p&gt;

&lt;p&gt;Vertical scalability is a simple approach because we don’t have to rearchitect anything; we just need to upgrade our hardware, and this comes at a cost.&lt;/p&gt;

&lt;p&gt;The OS may also prevent us from scaling vertically; in some databases, adding CPUs won’t bring any improvement because of increased lock contention.&lt;/p&gt;

&lt;p&gt;Locks: used to synchronize access between threads to specific resources like memory or files. Lock contention happens when a single lock guards a big resource with many operations; to solve this, fine-grained locks must be introduced, which create much more specific locks for each task in the resource and allow threads to access it more efficiently. Therefore, adding more CPUs when lock contention happens does not have any significant impact.&lt;/p&gt;
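
&lt;p&gt;The difference between a single coarse lock and fine-grained locks can be sketched like this (a minimal illustration with names of my own, not from the book):&lt;/p&gt;

```python
import threading
from collections import defaultdict

store = {}

# Coarse-grained: one lock guards the whole store, so every thread
# contends for it even when they touch different keys.
global_lock = threading.Lock()

def put_coarse(key, value):
    with global_lock:
        store[key] = value

# Fine-grained: one lock per key, so threads working on different
# keys no longer block each other.
key_locks = defaultdict(threading.Lock)

def put_fine(key, value):
    with key_locks[key]:
        store[key] = value
```

With the fine-grained version, extra cores actually help, because threads writing different keys can proceed in parallel instead of queueing on one global lock.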

&lt;p&gt;We should design the app with high concurrency in mind, so that adding more cores is not a waste.&lt;/p&gt;

&lt;h3&gt;
  
  
  Isolation Of Services:
&lt;/h3&gt;

&lt;p&gt;This is the separation of services (DB, FTP, DNS, cache, web server) onto multiple physical servers; once every service runs on its own machine, this technique leaves no further room to grow.&lt;/p&gt;

&lt;p&gt;Dividing a system based on functionality in order to scale is called functional partitioning, for example separating the admin service and the client service onto different physical servers.&lt;/p&gt;

&lt;p&gt;CDN: a content delivery network is a hosted service that takes care of the global distribution of static content like JS, CSS, images, and videos. It works as an HTTP proxy: if a client needs to download static content, the CDN checks whether it already has it; if not, it requests it from the server and caches it, and other clients are then served from the CDN without even contacting the server.&lt;/p&gt;
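
&lt;p&gt;The check-then-fetch behavior described above boils down to this kind of logic (a toy sketch with my own names, not a real CDN implementation):&lt;/p&gt;

```python
cache = {}
origin_hits = 0

def fetch_from_origin(url):
    """Stand-in for an HTTP request to the origin server."""
    global origin_hits
    origin_hits += 1
    return f"content of {url}"

def cdn_get(url):
    if url not in cache:           # cache miss: ask the origin once
        cache[url] = fetch_from_origin(url)
    return cache[url]              # later clients are served from the cache

cdn_get("/app.js")
cdn_get("/app.js")                 # second client: no origin contact
assert origin_hits == 1
```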

&lt;h3&gt;
  
  
  Horizontal Scalability
&lt;/h3&gt;

&lt;p&gt;It has to be considered before the application is built. Systems that are truly horizontally scalable do not need a powerful machine; they usually run on multiple cheap machines, and you can add as many servers as you want. You avoid the high price of top-tier hardware and the vertical scalability ceiling (there being no more powerful hardware to buy).&lt;/p&gt;

&lt;p&gt;Scaling with multiple data centers, in case of a global audience, is important, as it provides protection against rare outage events, and clients in other countries get responses faster.&lt;/p&gt;

&lt;p&gt;Scaling horizontally with web servers and caches is easier than scaling persistence stores and databases.&lt;/p&gt;

&lt;p&gt;Round-robin DNS is the choice if we use multiple web servers; it distributes traffic between them. What round-robin DNS does is take a domain name and allow us to map it to multiple IP addresses. When a client sends a request, round-robin DNS maps it to one of the servers, so two clients might connect to different servers without even realizing it.&lt;/p&gt;
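
&lt;p&gt;The rotation itself is simple; here is a toy sketch (hypothetical IPs, not a real DNS server) of how successive lookups rotate through the address list:&lt;/p&gt;

```python
from itertools import cycle

# Hypothetical A records registered for one domain name.
servers = cycle(["203.0.113.10", "203.0.113.11", "203.0.113.12"])

def resolve(domain):
    """Answer each lookup with the next IP in rotation."""
    return next(servers)

print(resolve("example.com"))  # first client gets 203.0.113.10
print(resolve("example.com"))  # second client gets 203.0.113.11
```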

&lt;p&gt;A data center infrastructure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontline: the first part that user devices interact with; it does not contain any business logic and can exist inside or outside of our data center.&lt;/li&gt;
&lt;li&gt;First, the client sends a request; geoDNS resolves the domain name and returns the closest load balancer’s IP address; the request is then distributed to a frontend cache server or directly to a frontend app server.&lt;/li&gt;
&lt;li&gt;CDNs, load balancers, and reverse proxies can be used and hosted by third parties.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Load balancer: hardware or software that allows servers to be added and removed dynamically; it also helps distribute traffic across multiple servers.&lt;/p&gt;

&lt;p&gt;Reverse proxy: an intermediary between client requests and the actual server. It can be used as a load balancer, or for web acceleration, where it compresses inbound and outbound data, caches content, and handles SSL encryption, which boosts the main server’s performance (the proxy takes care of side tasks for it). It can also preserve anonymity and hide how the real internal structure looks, because clients access our data center through a single record locator or URL.&lt;/p&gt;

&lt;p&gt;Edge cache server: an HTTP cache server located near customers. It can cache an entire HTTP response: it may cache a whole page, partially cache it and delegate the remaining parts to the server, or decide that the page is not cacheable and delegate the entire page to the server.&lt;/p&gt;

&lt;p&gt;A single data center can scale using edge caches and a CDN. It is not necessary to use a lot of components and technologies to scale; instead, we should use only what is necessary.&lt;/p&gt;

&lt;p&gt;The application architecture should not revolve around technologies (programming languages, databases); it should focus on the domain model to create a mental picture of the problem that we are trying to solve.&lt;/p&gt;

&lt;p&gt;The frontend must be kept as dumb as possible, while being allowed to use message queues and the cache backend; caching the HTML page along with the database query is more efficient than caching the database query alone. Web services are a critical part of the application, as they contain the most important parts of our business logic.&lt;/p&gt;

&lt;p&gt;Servers might have job-processing servers or jobs running on a schedule, with the goal of handling notifications, order fulfillment, or other high-latency tasks.&lt;/p&gt;

&lt;p&gt;SOA: service-oriented architecture, focused on solving business needs, where each service has a very clear contract and uses the same communication protocols.&lt;/p&gt;

&lt;p&gt;SOA has some alternatives: layered architecture, hexagonal architecture, and event-driven architecture.&lt;/p&gt;

&lt;p&gt;Multilayer architecture is a way to represent functionality in the form of different layers: components in a lower layer expose functionality to the upper layer, and lower layers can never depend on the functionality of a higher layer.&lt;/p&gt;
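
&lt;p&gt;As a minimal sketch of that dependency rule (illustrative names of my own, not from the book), each layer only calls the layer below it:&lt;/p&gt;

```python
def data_layer_get_user(user_id):          # lowest layer: storage access
    return {"id": user_id, "name": "Ada"}

def service_layer_greet(user_id):          # middle layer: business logic
    user = data_layer_get_user(user_id)    # depends only on the layer below
    return f"Hello, {user['name']}"

def web_layer_handle(request):             # top layer: request handling
    return service_layer_greet(request["user_id"])

print(web_layer_handle({"user_id": 7}))
```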

&lt;p&gt;In a layered architecture, having richer features in a specific layer usually leads to a more stable API. Conversely, simpler features may result in a less stable API, as changes to lower layers’ APIs can be costly due to dependencies from many other APIs.&lt;/p&gt;

&lt;p&gt;Hexagonal architecture assumes that the business logic is the center of the app. There is a contract between the business logic and the non-business-logic components, but no layers; the main reason for this is that we can replace any non-business-logic component at any time without affecting our core app.&lt;/p&gt;

&lt;p&gt;Event-Driven Architecture (EDA) shifts the focus from responding to requests to handling actions. It works by creating event handlers that wait for an action to occur and then react to it.&lt;/p&gt;
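
&lt;p&gt;A tiny sketch of that handler-registration idea (the event names and payloads here are hypothetical):&lt;/p&gt;

```python
from collections import defaultdict

handlers = defaultdict(list)

def on(event, handler):
    """Register a handler that waits for an event."""
    handlers[event].append(handler)

def emit(event, payload):
    """React to an action by invoking every registered handler."""
    for handler in handlers[event]:
        handler(payload)

shipped = []
on("order_placed", lambda order: shipped.append(order["id"]))
emit("order_placed", {"id": 42})
assert shipped == [42]
```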

&lt;p&gt;In all these architectures, dividing the app into smaller units that function independently shows performance benefits. We can think of these web services as autonomous applications, where each one becomes a separate app that hides its implementation details and presents a high-level API.&lt;/p&gt;

&lt;p&gt;The message queue, app cache, main datastore, and so on should be thought of as plugins that we can replace at any time with another technology.&lt;/p&gt;

&lt;p&gt;Isolating third-party services is good for us, as we don’t know whether they are scalable or whether we have full control over them, so isolating them and making it possible to replace them later is beneficial.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>architecture</category>
      <category>softwaredevelopment</category>
      <category>systemdesignintervie</category>
    </item>
    <item>
      <title>Create A Vim Plugin For Your Next Programming Language, Indentation and Autocomplete</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Fri, 14 Jun 2024 22:45:59 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/create-a-vim-plugin-for-your-next-programming-language-indentation-and-autocomplete-4314</link>
      <guid>https://forem.com/ezpzdevelopement/create-a-vim-plugin-for-your-next-programming-language-indentation-and-autocomplete-4314</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F640%2F1%2A-Tnknln64FH-mN1QWd2Tkg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F640%2F1%2A-Tnknln64FH-mN1QWd2Tkg.gif" alt="Create A Vim Plugin For Your Next Programming Language, Indentation and Autocomplete"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/abderahmanemustapha/create-a-vim-plugin-for-your-next-programming-language-structure-and-syntax-highlight-1dog-temp-slug-8281004"&gt;previous post&lt;/a&gt;, I discussed how to structure our Vim plugin and how we can add the beautiful syntax highlight feature.&lt;/p&gt;

&lt;p&gt;In this short post, I will explain how to add simple auto-completion in addition to indentation. If you want to see a full example of a working Vim extension for a new programming language, please check this &lt;a href="https://github.com/abderrahmaneMustapha/vim-iop" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Indentation
&lt;/h3&gt;

&lt;p&gt;First, we need to create a new file in the ftplugin directory; if you want to know more about autoload and the other Vim extension folders, make sure to check the first part of this blog post &lt;a href="https://dev.to/abderahmanemustapha/create-a-vim-plugin-for-your-next-programming-language-structure-and-syntax-highlight-1dog-temp-slug-8281004"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The name of the new file is iop.vim. We will show a short example of how we handled indentation in our extension; you can check the full file here&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:inoremap { {&amp;lt;CR&amp;gt;&amp;lt;tab&amp;gt;&amp;lt;CR&amp;gt;&amp;lt;bs&amp;gt;&amp;lt;bs&amp;gt;&amp;lt;bs&amp;gt;&amp;lt;bs&amp;gt;}&amp;lt;up&amp;gt;&amp;lt;tab&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Essentially, we are telling Vim that if a user opens a bracket and then hits Enter, the cursor jumps to a new line; with the command above, the cursor will not begin at the start of that line but will leave some space behind. The space here is represented by the &amp;lt;tab&amp;gt;.&lt;/p&gt;

&lt;p&gt;In our case, we defined the width of that space in the fifth line of our file, as you can see below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set tabstop=4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Autocomplete
&lt;/h3&gt;

&lt;p&gt;It is time to discuss autocomplete, so we can end this blog post.&lt;/p&gt;

&lt;p&gt;You will need to create two new folders, text and &lt;a href="https://github.com/abderrahmaneMustapha/vim-iop/tree/main/autoload" rel="noopener noreferrer"&gt;autoload&lt;/a&gt;. In the text folder, we will put all the possible words in our new programming language; in our case we created 4 files: one for decorators, another for types, one for values, and the iop.text file for identifiers.&lt;/p&gt;

&lt;p&gt;These four files are imported in the autoload/iopcomplete.vim file, which contains the autocomplete logic; you can check the full file at this &lt;a href="https://github.com/abderrahmaneMustapha/vim-iop/blob/main/autoload/iopcomplete.vim" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
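
&lt;p&gt;For a rough idea of what such completion logic looks like (a simplified sketch, not the exact contents of autoload/iopcomplete.vim, and with a hardcoded keyword list standing in for the words loaded from the text/ files), a Vim completion function first locates the start of the word before the cursor and then returns the matching candidates:&lt;/p&gt;

```vim
" Simplified completion sketch; 'keywords' stands in for the
" words loaded from the files in the text/ folder.
function! IopComplete(findstart, base)
  if a:findstart
    " First call: find where the word before the cursor starts.
    let line = getline('.')
    let start = col('.') - 1
    while start > 0
      if line[start - 1] !~ '\w'
        break
      endif
      let start -= 1
    endwhile
    return start
  else
    " Second call: return the candidates matching the typed prefix.
    let keywords = ['struct', 'class', 'union', 'enum', 'int', 'uint']
    return filter(copy(keywords), 'v:val =~ "^" . a:base')
  endif
endfunction
setlocal completefunc=IopComplete
```

With this set, typing a prefix and pressing Ctrl-X Ctrl-U in insert mode pops up the matching keywords.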

&lt;p&gt;This was just a general description of what we did to create a simple new Vim extension for this new programming language; you can understand more by reading the code in the &lt;a href="https://github.com/abderrahmaneMustapha/vim-iop" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>vim</category>
      <category>extension</category>
    </item>
    <item>
      <title>Create A Vim Plugin For Your Next Programming Language, Structure, and syntax highlight.</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Fri, 14 Jun 2024 22:09:33 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/create-a-vim-plugin-for-your-next-programming-language-structure-and-syntax-highlight-4gd1</link>
      <guid>https://forem.com/ezpzdevelopement/create-a-vim-plugin-for-your-next-programming-language-structure-and-syntax-highlight-4gd1</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AFzAafbTm2QR9L3Nw2yrG6Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AFzAafbTm2QR9L3Nw2yrG6Q.png" alt="Create A Vim Plugin For Your Next Programming Language, Structure, and syntax highlight."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Vim is an open-source, text-based editor and an improved version of the old UNIX vi. Vim has many features, including multi-level undo, syntax highlighting, command-line history, online help, spell checking, filename completion, block operations, a scripting language, and more.&lt;/p&gt;

&lt;p&gt;If we want to talk about compatibility, Vim runs under MS-Windows (XP, Vista, 7, 8, 10), macOS, Haiku, VMS, and almost every OS based on UNIX.&lt;/p&gt;

&lt;p&gt;In today's post, I would like to show you how to write your own vim extension for a new programming language, I wrote this plugin with the help of my two coworkers &lt;a href="https://github.com/imen-ben" rel="noopener noreferrer"&gt;Imen&lt;/a&gt; and &lt;a href="https://github.com/theLegend98" rel="noopener noreferrer"&gt;Djamel&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;First, let me introduce you to IOP (&lt;em&gt;Intersec Object Packer&lt;/em&gt;): it is a method to serialize data in different communication protocols, inspired by &lt;a href="https://developers.google.com/protocol-buffers/docs/overview" rel="noopener noreferrer"&gt;Google Protocol Buffers&lt;/a&gt;. IOP syntax looks like D language syntax, and all of this is according to the &lt;a href="https://intersec.github.io/lib-common/lib-common/iop/base.html" rel="noopener noreferrer"&gt;&lt;em&gt;IOP official documentation&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Structure of a Vim Plugin&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When we start working on our extension, we will create a root folder under the name &lt;a href="https://github.com/abderrahmaneMustapha/vim-iop" rel="noopener noreferrer"&gt;vim-iop&lt;/a&gt;, which is exactly what we picked as the name for our Vim extension.&lt;/p&gt;

&lt;p&gt;This directory will contain three other important folders which are :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;autoload: a technique for delaying the loading of your plugin’s code until it’s required; in our case, we will implement the autocomplete feature in this folder.&lt;/li&gt;
&lt;li&gt;ftdetect: or file type detection, has a clear purpose: figuring out what file type a given file is.&lt;/li&gt;
&lt;li&gt;ftplugin: contains scripts that run automatically when Vim detects a file opened or created by a user; in our case this file will contain the indentation logic.&lt;/li&gt;
&lt;li&gt;syntax: contains a script that implements syntax highlighting.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Detect File Type
&lt;/h3&gt;

&lt;p&gt;In this section, we add the code to set the file type for IOP files, but first, our root folder &lt;strong&gt;vim-iop&lt;/strong&gt; must look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim-iop
------- ftplugin
------- ftdetect
------- syntax
------- autoload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this part we need to create a new file, ftdetect/&lt;a href="https://github.com/abderrahmaneMustapha/vim-iop/blob/main/ftdetect/iop.vim" rel="noopener noreferrer"&gt;&lt;em&gt;iop.vim&lt;/em&gt;&lt;/a&gt;, and add the code below to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;" ftdetect/iop.vim
autocmd BufNewFile,BufRead *.iop setfiletype iop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Syntax Highlight
&lt;/h3&gt;

&lt;p&gt;In this section, we will write some vim script in addition to some regex, so we can add the syntax highlight feature to our Vim extension.&lt;/p&gt;

&lt;p&gt;Before we start coding, I want to mention that IOP has basic types, which are int, uint, long, ulong, byte, ubyte ... and more, plus four complex types: struct, class, union, and enum. If you want to learn more about these types, make sure to check this &lt;a href="https://intersec.github.io/lib-common/lib-common/iop/base.html" rel="noopener noreferrer"&gt;&lt;em&gt;link&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So in the code below, we add the logic to highlight the IOP types mentioned in this part of the &lt;a href="https://intersec.github.io/lib-common/lib-common/iop/base.html" rel="noopener noreferrer"&gt;&lt;em&gt;IOP documentation&lt;/em&gt;&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"syntax/iop.vim

syntax keyword iopComplexType class enum union struct module nextgroup=iopComlexTypeName skipwhite
syntax keyword iopBasicTypes int uint long ulong xml
syntax keyword iopBasicTypes byte ubyte short ushort void
syntax keyword iopBasicTypes bool double string bytes

" complex types name
syntax match iopComlexTypeName "\w\+" contained
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we have &lt;strong&gt;iopComplexType&lt;/strong&gt; and &lt;strong&gt;iopBasicTypes&lt;/strong&gt;; both of these groups contain the different complex and basic types of IOP. We also tell our extension that each complex type is followed by a name and that the white space in between should be skipped. After this, we need to tell our Vim extension to highlight these types by adding the code below at the bottom of syntax/iop.vim.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"syntax/iop.vim
highlight link iopComplexType Keyword
highlight link iopBasicTypes Type
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the end, after adding this extension to our Vim IDE, we will see something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AclJKJaBcuSz9fDn60PlOPQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AclJKJaBcuSz9fDn60PlOPQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;IOP syntax also contains decorators; we are going to write some regular expressions in order to highlight them, so just add the code below to our syntax/iop.vim file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"syntax/iop.vim

syntax match iopDecorator /^\s*@/ nextgroup=iopDecoratorFunction
syntax match iopDecoratorFunction contained /\h[a-zA-Z0-9_.]*/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the first line of the code above, we are telling our Vim extension that a decorator starts with zero or more white spaces followed by an “@” (/^&lt;strong&gt;\s*&lt;/strong&gt;@/), and the &lt;strong&gt;nextgroup&lt;/strong&gt; keyword means that after the “@” comes the name of the decorator. The decorator name can contain all &lt;em&gt;alphabet&lt;/em&gt; letters, whether upper or lower case, and it can also contain numbers and the two special characters “_” and “.”.&lt;/p&gt;

&lt;p&gt;After that, we tell our Vim extension to highlight the decorator names.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"syntax/iop.vim

highlight link iopDecoratorFunction Function
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an example of what we will see in our vim IDE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AabXRlK8jyTHfgncmEu23GQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AabXRlK8jyTHfgncmEu23GQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want the complete implementation of the vim-iop syntax highlighting, make sure to check this &lt;a href="https://github.com/abderrahmaneMustapha/vim-iop/tree/main/syntax" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That's it for now; in the next post, I will show you how to add autocomplete and indentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;References:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learnvimscriptthehardway.stevelosh.com/" rel="noopener noreferrer"&gt;Learn Vimscript the Hard Way&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-vundle-to-manage-vim-plugins-on-a-linux-vps" rel="noopener noreferrer"&gt;How To Use Vundle to Manage Vim Plugins on a Linux VPS | DigitalOcean&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://intersec.github.io/lib-common/lib-common/iop/base.html" rel="noopener noreferrer"&gt;IOP&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>communicationprotoco</category>
      <category>vim</category>
      <category>extension</category>
      <category>programminglanguages</category>
    </item>
    <item>
      <title>Ivy the Angular Compiler.</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Wed, 26 Jul 2023 22:42:19 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/ivy-the-angular-compiler-3d8o</link>
      <guid>https://forem.com/ezpzdevelopement/ivy-the-angular-compiler-3d8o</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F767%2F0%2AbVFLmVr_yeZrzJzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F767%2F0%2AbVFLmVr_yeZrzJzx.png" alt="Thumbnail for Ivy the Angular Compiler."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm writing this blog post to summarize what I have learned about Ivy, the Angular compiler, after reading some blog posts and watching talks by different software engineers and developers, some of whom are on the core team that works on Angular.&lt;/p&gt;

&lt;h3&gt;
  
  
  A compiler for a frontend framework why?
&lt;/h3&gt;

&lt;p&gt;When I first heard about the Ivy Angular compiler, it seemed strange and hard to understand; after doing some research, I found out that there are a lot of other compiled frontend frameworks, like Svelte and Solid.&lt;/p&gt;

&lt;p&gt;Based on what I have learned, we can filter these frameworks based on 3 factors which are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compiling approaches:&lt;/strong&gt; some frameworks &lt;strong&gt;compile everything&lt;/strong&gt;, while others require &lt;strong&gt;almost no compilation&lt;/strong&gt;; the latter approach makes the framework do most of the work at runtime, while some other frameworks use a hybrid of these two approaches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The language used in compilation logic:&lt;/strong&gt; some frameworks use JavaScript or TypeScript for their compilation logic, while others use JS alternatives like Elm or Mint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Templating languages:&lt;/strong&gt; some use an HTML-first approach, according to &lt;a href="https://ryansolid.medium.com/" rel="noopener noreferrer"&gt;Ryan Carniato&lt;/a&gt; the language treats the source file as an enhancement of HTML, there is also another type of templating language which is &lt;a href="https://facebook.github.io/jsx/" rel="noopener noreferrer"&gt;JSX&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another reason why it is a good idea to have a compiler for a frontend framework is the performance that comes with it: it produces more optimized and faster code, faster DOM manipulation and startup, and it also reduces the bundle size.&lt;/p&gt;

&lt;p&gt;In addition to the benefits mentioned above, a compiler optimizes the framework and reduces the work it has to do at runtime.&lt;/p&gt;

&lt;p&gt;No one can argue that, even with all of this optimization, a framework can be faster than vanilla JS; but a framework is less verbose, and it is easier and faster to write complex applications using one than using JS alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Ivy compiler as an example&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Ivy is another rewrite of the Angular compiler and runtime to achieve a better build time and bundle size, possibly to get rid of zone.js for change detection, and to further optimize the generated code by using principles like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tree shaking: the process of removing unused and unreferenced code using static analysis&lt;/li&gt;
&lt;li&gt;Locality: an approach to make build and compilation go faster by compiling each component independently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the main goal of Ivy is to take these templates, which use the declarative coding approach, and turn them into imperative JS/TS code; this includes generating code from the template documents, adding a change-detection mechanism, and applying the changes when they happen.&lt;/p&gt;

&lt;p&gt;Ivy can do its job using a JIT or AOT approach; both approaches have advantages and disadvantages. I recommend checking &lt;a href="https://www.freecodecamp.org/news/just-in-time-compilation-explained/" rel="noopener noreferrer"&gt;this post&lt;/a&gt; to understand more about JIT; for AOT I recommend &lt;a href="https://angular.io/guide/aot-compiler" rel="noopener noreferrer"&gt;this one&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Angular JIT compiles your application in the browser at runtime; it was the default approach until Angular 8. AOT runs &lt;em&gt;ngc&lt;/em&gt;, which compiles your application and libraries at build time; it is the default starting with Angular 9.&lt;/p&gt;

&lt;h4&gt;
  
  
  How this compiler work
&lt;/h4&gt;

&lt;p&gt;The Ivy compiler architecture was inspired by the TypeScript compiler architecture: the TypeScript compiler has 3 steps (program creation, type checking, and emit), and the Angular compiler adds two additional steps, analysis and resolve.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F960%2F1%2AaT2UA-CxhlK6WKAd9cs6Fg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F960%2F1%2AaT2UA-CxhlK6WKAd9cs6Fg.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Program creation:&lt;/strong&gt; in this step, Angular tries to discover all the file resources to understand the program (app or library), starting from tsconfig.json and then expanding to discover all the other files. In addition to what the TypeScript compiler does, the Angular team customized this step with more features and code to make it compatible with what an Angular dev needs; it is because of this customization that developers can use libraries like ngfactory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analysis:&lt;/strong&gt; the compiler goes through all the classes in the files, finds the ones decorated with an Angular decorator, and tries to understand each component and directive in isolation; in this step the compiler has no idea which module these components belong to.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resolve:&lt;/strong&gt; tries to understand everything about the library or app as a whole; in this step the compiler finds each component’s module, performs some optimizations, makes decisions, handles errors, and tries to understand the structure of the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type checking:&lt;/strong&gt; it’s TypeScript’s turn to check whether there are any errors in our code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Emit:&lt;/strong&gt; the most expensive step, where the Angular compiler generates JS code for each class that has an Angular decorator.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another cool feature that Ivy has is template type checking; what this feature does is throw an error and show you the exact place of the error in the template (by line and column). According to Alex Rickabaugh, this was something tricky to add.&lt;/p&gt;

&lt;p&gt;In the end, I recommend watching Alex Rickabaugh's amazing talk about the Ivy compiler to learn more about Ivy and to go deeper into how the Angular compiler works.&lt;/p&gt;

&lt;h3&gt;
  
  
  references :
&lt;/h3&gt;

&lt;p&gt;My blog post was inspired by the amazing references below, I did my best to summarize what I learned from these blog posts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://tomdale.net/2017/09/compilers-are-the-new-frameworks/" rel="noopener noreferrer"&gt;Compilers are the New Frameworks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/godspowercuche/a-look-at-compilation-in-javascript-frameworks-1pf6-temp-slug-64187"&gt;A Look at Compilation in JavaScript Frameworks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://javascript.plainenglish.io/a-case-for-compile-to-javascript-interface-frameworks-a684b361884f" rel="noopener noreferrer"&gt;A Case for Compile to JavaScript Interface Frameworks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.angularminds.com/blog/article/what-is-angular-ivy.html" rel="noopener noreferrer"&gt;All About Angular Engine Ivy in 5 mins | Angular Minds&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/anphffaCZrQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.ninja-squad.com/2019/05/07/what-is-angular-ivy/" rel="noopener noreferrer"&gt;https://blog.ninja-squad.com/2019/05/07/what-is-angular-ivy/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What I’ve Learned About WebAPIS, An Introduction</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Sun, 23 Jul 2023 23:05:18 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/what-ive-learned-about-webapis-an-introduction-41l4</link>
      <guid>https://forem.com/ezpzdevelopement/what-ive-learned-about-webapis-an-introduction-41l4</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f2HHotyI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ACg03eSKwXviDlOG7ue2gQA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f2HHotyI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ACg03eSKwXviDlOG7ue2gQA.png" alt="Thumbnail for what i have learned about web apis post" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, I’d like to discuss what I learned about web APIs while working on my master’s thesis. I wanted to write and summarize what I learned in my notebook or on Google Docs, but I decided that sharing it on the Internet would be more effective: I may receive additional suggestions or criticism, and it will give me an opportunity to learn new things.&lt;/p&gt;

&lt;p&gt;The following post serves as an introduction; I’ll discuss the most important things I learned and what I’ll cover in the following posts.&lt;/p&gt;

&lt;p&gt;As the majority of us (developers, software engineers, …) know, a web API, or application programming interface, is like middleware or a messenger that handles requests and guarantees the interaction between applications, devices, and systems.&lt;/p&gt;
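To make the “messenger” idea concrete, here is a toy sketch (the route and data are made up for illustration; this is not a real web framework): the client only knows a route, and the API layer is the only thing that touches the backend data.

```typescript
// Toy model of an API as a messenger: requests are routed to handlers
// that talk to the backend system on the client's behalf.
const backend = {
  users: [{ id: 1, name: "Sam" }], // hypothetical backend data store
};

// The "API": a map from route to handler.
const api: { [route: string]: () => any } = {
  "GET /users": () => backend.users,
};

// A client-side request only names a route, never the backend internals.
function request(route: string): { status: number; body: any } {
  const handler = api[route];
  if (!handler) return { status: 404, body: null };
  return { status: 200, body: handler() };
}

console.log(request("GET /users").status);   // a known route answers with 200
console.log(request("GET /missing").status); // an unknown route answers with 404
```

API integration, discussed next, is what happens when one such routing layer calls another app's routing layer instead of its own in-process data.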

&lt;p&gt;Another definition is that a web API is the online programming interface of an organization or an enterprise; it enables other applications to interact with its backend systems, as mentioned on the &lt;a href="https://www.hcltech.com/technology-qa/what-is-api-integration#:~:text=An%20application%20programming%20interface%20"&gt;HCLTech blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After covering the definition of APIs, another important term describes the connection created between two or more apps via their APIs, allowing those systems to share data: we are obviously talking about &lt;strong&gt;API integration.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As we said in the previous paragraph, “a connection between two or more apps via their APIs”, some of these APIs are public and nonprofit, while others are designed with a business goal and a need for external developers, who are a critical component in reaching the goal the API was designed and created for.&lt;/p&gt;

&lt;p&gt;According to a recent research paper that I read¹, an API with the aim of generating revenue usually follows one of three archetypes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Professional Services:&lt;/strong&gt; provide access to their APIs, software as a service, or data to a third-party consumer through a standardized interface and clear pricing plans, in order to generate direct income from this service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mediation Services:&lt;/strong&gt; bring their services to light by making them available to external or third-party developers; these developers bring in another party (for example, their own clients), complementing the services provided by the organization that adopts the mediation service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open Asset Services:&lt;/strong&gt; expose their services for free, putting access to their data (or maybe more) in the hands of third-party developers, in order to increase interaction and remove barriers with the developer community.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After reading and searching in the fields of monetization and planning, I started reading more about how to develop a web API. This is where I learned a lot about API no-code platforms, API integration platforms, OpenAPI, and many API development leaders like MuleSoft, WSO2, and Apigee, which provide a lot of documentation about API design, management, and more. Their documentation was very valuable for me; I will summarize what I learned from each resource and each API provider in the next posts.&lt;/p&gt;

&lt;p&gt;Another thing I’d like to mention is that we’ve all heard of code smells, but have you ever heard of test smells? I first heard about test smells five months ago while looking for a scientific paper discussing how to do unit tests². Test smells were not the only new topic I learned about; there are many others as well: for example, TDD, continuous delivery, continuous integration, the difference between CI and feature branching, and how to apply these two properly, from the book⁷, blog³, and videos⁴ of Dave Farley.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This post was just an introduction to a long series about what I learned about web APIs. As I mentioned at the beginning, I wanted to write and summarize what I learned in my notebook or Google Docs, but I thought that sharing it on the Internet would be better, as I might get suggestions or criticism that would push me to learn new things.&lt;/p&gt;

&lt;h3&gt;
  
  
  References:
&lt;/h3&gt;

&lt;p&gt;[1] Fostering Value Creation with Digital Platforms: A Unified Theory of the Application Programming Interface Design by Jochen Wulf &amp;amp; Ivo Blohm&lt;/p&gt;

&lt;p&gt;[2] tsDetect: an open source test smells detection tool by &lt;em&gt;Anthony Peruma et al&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;[3] &lt;a href="https://www.davefarley.net/"&gt;Dave Farley’s Weblog&lt;/a&gt; ,&lt;/p&gt;

&lt;p&gt;[4] &lt;a href="https://www.youtube.com/channel/UCCfqyGl3nq_V0bo64CjZh8g"&gt;https://www.youtube.com/channel/UCCfqyGl3nq_V0bo64CjZh8g&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[7] &lt;a href="https://www.amazon.com/Continuous-Delivery-Pipelines-Better-Software/dp/B096TTQHYM"&gt;Continuous Delivery Pipelines: How to Build Better Software Faster by Dave Farley&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Serverless Application with NestJS and the Serverless Framework: Authentication and a…</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Fri, 21 Jul 2023 19:09:15 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/building-a-serverless-application-with-nestjs-and-the-serverless-framework-authentication-and-a-586n</link>
      <guid>https://forem.com/ezpzdevelopement/building-a-serverless-application-with-nestjs-and-the-serverless-framework-authentication-and-a-586n</guid>
      <description>&lt;h3&gt;
  
  
  Building a Serverless Application with NestJS and the Serverless Framework: Authentication and a custom lambda Authorizer.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F641%2F1%2ATsyQCFhEPMAYtdYUd9RgWQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F641%2F1%2ATsyQCFhEPMAYtdYUd9RgWQ.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Image for the blog building a Serverless Application with NestJS and the Serverless Framework: Authentication and a custom lambda Authorizer&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Table of contents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Intro&lt;/li&gt;
&lt;li&gt;Handling authentication using lambda functions&lt;/li&gt;
&lt;li&gt;Enabling CORS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the previous post, I talked about the following :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to set up the app following a monorepo approach: &lt;a href="https://dev.to/ezpzdevelopement/building-a-serverless-application-with-nestjs-and-the-serverless-framework-a-monorepo-approach-5ap4"&gt;Building a Serverless Application with NestJS and the Serverless Framework: A Monorepo Approach&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Adding an API portal: &lt;a href="https://ezpzdev.medium.com/building-a-serverless-application-with-nestjs-and-the-serverless-framework-api-portal-76f6bee8a8e" rel="noopener noreferrer"&gt;Building a Serverless Application with NestJS and the Serverless Framework: API Portal&lt;/a&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After the last blog post, here is what our file structure looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apps
  users
    src
      app.controller.ts
      app.module.ts
      app.service.ts
      main.ts
      serverless.yaml
    tsconfig.app.json
  items
    src
      app.controller.ts
      app.module.ts
      app.service.ts
      main.ts
      serverless.yaml
    tsconfig.app.json
config
  serverless.yaml
nest-cli.json
serverless-compose.yaml
package.json
tsconfig.json
.eslintrc.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this blog post, I will write about how I handled JWT authentication using a Lambda function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Handling authentication with a lambda function
&lt;/h3&gt;

&lt;p&gt;The first thing I did was search for serverless framework authentication, to find out whether there is a faster way to handle authentication with the Serverless Framework. This led me to this blog post: &lt;a href="https://www.serverless.com/blog/strategies-implementing-user-authentication-serverless-applications/" rel="noopener noreferrer"&gt;https://www.serverless.com/blog/strategies-implementing-user-authentication-serverless-applications/&lt;/a&gt;, which contains details about how to implement authentication using a custom Lambda authorizer. So my plan was to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;check the &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html" rel="noopener noreferrer"&gt;AWS documentation about Lambda authorizers&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;write a lambda function that handles login and signup.&lt;/li&gt;
&lt;li&gt;write a lambda function that handles refreshing tokens.&lt;/li&gt;
&lt;li&gt;write a function that handles authorization, where I check whether a client request contains a valid token.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After checking the AWS documentation, I decided to implement a simple version of the authorizer function.&lt;/p&gt;

&lt;p&gt;The first thing I did was go inside my src folder, create a new folder named auth, and add the following lines to a new serverless.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: auth

plugins:
  - serverless-offline

provider:
  name: aws
  region: eu-west-3
  runtime: nodejs16.x
  stage: dev
  environment:
    DYNAMODB_TABLE: authors-${opt:stage, self:provider.stage}
    JWT_ACCESS_TOKEN_SECRET_KEY: __secret_key__
    JWT_REFRESH_TOKEN_SECRET_KEY: __refresh_token_secret_key__
    JWT_ACCESS_EXPIRES_IN_MINUTES: 30
    JWT_REFRESH_EXPIRES_IN_MINUTES: 10080
    PASSWORD_SALT_ROUNDS: 10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan  
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
      Resource: 
        Fn::Join:
          - ''
          - - "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/"
            - ${self:provider.environment.DYNAMODB_TABLE}
  apiGateway:
    restApiId:
      'Fn::ImportValue': MyApiGateway-restApiId
    restApiRootResourceId:
      'Fn::ImportValue': MyApiGateway-rootResourceId

custom:
    serverless-offline:
        httpPort: 3000
        websocketPort: 3001
        lambdaPort: 3002

functions:
  signin:
    handler: dist/main.signin
    events:
      - http:
          method: POST
          path: /signin

  signup:
    handler: dist/main.signup
    events:
      - http:
          method: POST
          path: /signup
  refreshToken: 
    handler: dist/main.refreshToken
    events:
      - http:
          method: POST
          path: /refresh-token
  authorizer:
    handler: dist/main.authorizer

resources:
  Resources:
    Authorizer:
      Type: AWS::ApiGateway::Authorizer
      Properties: 
        Name: ${self:provider.stage}-Authorizer
        RestApiId: 
          'Fn::ImportValue': MyApiGateway-restApiId
        Type: TOKEN
        IdentitySource: method.request.header.Authorization
        AuthorizerResultTtlInSeconds: 300
        AuthorizerUri:
          Fn::Join:
            - ''
            - 
              - 'arn:aws:apigateway:'
              - Ref: "AWS::Region"
              - ':lambda:path/2015-03-31/functions/'
              - Fn::GetAtt: "AuthorizerLambdaFunction.Arn"
              - "/invocations"
  Outputs:
   AuthorizerId:
     Value:
       Ref: Authorizer
     Export:
       Name: authorizerId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s explain this file in more detail. Our Serverless Framework configuration sets up an AWS Lambda service, called auth, intended to handle authentication in the application.&lt;/p&gt;

&lt;p&gt;The environment subsection includes environment variables such as the JWT secret keys for both access and refresh tokens, the duration of the tokens, and the DynamoDB table name, which is dynamic based on the stage (dev in this case).&lt;/p&gt;

&lt;p&gt;In the iamRoleStatements, it sets permissions for the Lambda functions to perform various actions on DynamoDB. The Resource subsection constructs the ARN (Amazon Resource Name) for the DynamoDB table.&lt;/p&gt;
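For concreteness, with the region (eu-west-3) and stage (dev) set in this file, the Fn::Join above concatenates into an ARN like the following (sketched here in TypeScript just to show the string it produces):

```typescript
// Mirrors the Fn::Join in the YAML: empty-string separator, two parts joined.
const region = "eu-west-3";  // from provider.region above
const table = "authors-dev"; // DYNAMODB_TABLE with stage dev
const arn = ["arn:aws:dynamodb:" + region + ":*:table/", table].join("");
console.log(arn); // arn:aws:dynamodb:eu-west-3:*:table/authors-dev
```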

&lt;p&gt;The functions section declares the AWS Lambda functions with their respective HTTP events: signin, signup, refreshToken, and authorizer.&lt;/p&gt;

&lt;p&gt;The authorizer function doesn't have an events section or a dedicated path because it's not intended to be accessed directly through an HTTP request. Instead, the authorizer function is used as a custom authorizer for your API Gateway.&lt;/p&gt;

&lt;p&gt;An authorizer is a Lambda function that performs authentication and authorization on requests before they reach the actual service endpoints.&lt;/p&gt;

&lt;p&gt;When an incoming request triggers an AWS API Gateway event, the authorizer function is invoked first. This function examines the authorization token included in the request’s Authorization header and determines whether the request is allowed.&lt;/p&gt;

&lt;p&gt;The resources section of the YAML file is where you set up the Authorizer as a custom authorizer for the API Gateway. It doesn't have an HTTP endpoint because it's not meant to be invoked directly by HTTP requests but rather as an intermediate layer by the API Gateway. The AuthorizerUri property specifies the Lambda function that API Gateway calls for the custom authorization.&lt;/p&gt;

&lt;p&gt;The authorizerId output can be imported into the other Serverless services (users and items) as the identifier of this authorizer.&lt;/p&gt;
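To give a feel for what the authorizer hands back to API Gateway, here is a minimal sketch of the helper many custom authorizers use to build their response. The shape follows the AWS Lambda TOKEN authorizer output format; the principal and resource values are purely illustrative.

```typescript
// Builds the response a TOKEN authorizer returns to API Gateway:
// a principal identifier plus an IAM policy allowing or denying the call.
function generatePolicy(principalId: string, effect: string, resource: string) {
  return {
    principalId,
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        {
          Action: "execute-api:Invoke",
          Effect: effect,
          Resource: resource,
        },
      ],
    },
  };
}

// e.g. allow the caller identified by the token to invoke the requested method
// (the ARN here is an illustrative placeholder, not a real resource)
const policy = generatePolicy(
  "user-123",
  "Allow",
  "arn:aws:execute-api:eu-west-3:000000000000:restApiId/dev/GET/items",
);
console.log(policy.policyDocument.Statement[0].Effect); // Allow
```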

&lt;p&gt;Now it is time to write the code for our functions: signin, signup, refreshToken, and authorizer.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Signup:&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#apps/auth/src/main.ts
export const signup = async (
  event: any,
  _context: Context,
  _callback: Callback,
) =&amp;gt; {
  const appContext = await NestFactory.createApplicationContext(AppModule);
  const appService = appContext.get(AppService);

  const { email, password, firstName, lastName } = JSON.parse(event.body);

  try {
    const result = await appService.signup(
      email,
      password,
      firstName,
      lastName,
    );
    return {
      statusCode: 201,
      body: JSON.stringify({
        success: true,
        data: result,
      }),
    };
  } catch (error) {
    console.log(error);
    return {
      statusCode: 500,
      body: JSON.stringify(error.response ?? error.message),
    };
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function receives an event, context, and callback, and extracts the firstName and lastName fields from the request body. These are used along with the email and password to call the signup method of appService. If the sign-up process is successful, it returns a 201 status code along with the result of the operation. If an error occurs, it responds with a 500 status code and an error message.&lt;/p&gt;

&lt;p&gt;Now let’s take a look at our signup service, which handles the signup logic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  async signup(
    email: string,
    password: string,
    firstName: string,
    lastName: string,
  ) {
    let user = await this.getUserByEmail(email);

    if (user) {
      throw new BadRequestException('Email already in use');
    }

    const hash = await this.hash(password);
    user = await this.createUser({
      email,
      password: hash,
      firstName,
      lastName,
    });
    console.log(user);
    if (!user) {
      throw new InternalServerErrorException();
    }

    const payload = {
      email: user.email,
      firstName: user.firstName,
      lastName: user.lastName,
      sub: user.id,
    };

    const accessToken = await this.jwtService.signAsync(payload);
    const refreshToken = await this.jwtService.signAsync(payload, {
      secret: jwtConstants.refreshTokenSecret,
      expiresIn: `${jwtConstants.refreshExpiresIn} min`,
    });

    this.updateRefreshToken(user.id, refreshToken);
    return {
      access_token: accessToken,
      refresh_token: refreshToken,
    };
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;signup():&lt;/strong&gt; This method is used to create a new user account.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It first checks whether a user with the given email already exists by calling getUserByEmail() (the email must be unique for each user).&lt;/li&gt;
&lt;li&gt;If the user already exists, it throws an error. If not, it hashes the provided password using hash(), creates a new user with the hashed password and provided details using createUser(), and creates an access token and a refresh token for this user using jwtService.signAsync().&lt;/li&gt;
&lt;li&gt;It then associates the refresh token with the user by calling updateRefreshToken(). Finally, it returns the access token and refresh token.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is the code and the explanation for the four helper methods used in the signup method. In these methods, &lt;a href="https://www.npmjs.com/package/@nestjs/jwt" rel="noopener noreferrer"&gt;JWT (JSON Web Token) from NestJS&lt;/a&gt; is used to generate tokens, &lt;a href="https://www.npmjs.com/package/bcrypt" rel="noopener noreferrer"&gt;bcrypt&lt;/a&gt; is used for hashing passwords, and &lt;a href="https://www.npmjs.com/package/aws-sdk" rel="noopener noreferrer"&gt;AWS DynamoDB from aws-sdk&lt;/a&gt; is used for storing user information.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  private async updateRefreshToken(id: string, refreshToken: string) {
    const params = {
      TableName: process.env.DYNAMODB_TABLE,
      Key: { id },
      UpdateExpression: 'set refreshToken = :refreshToken',
      ExpressionAttributeValues: {
        ':refreshToken': refreshToken,
      },
    };
    const res = await this.db.update(params).promise();
    if (res.$response.error) {
      throw new InternalServerErrorException(res.$response.error.message);
    }
  }

  private async hash(password: string) {
    const salt = await bcrypt.genSalt(jwtConstants.saltRounds);
    return await bcrypt.hash(password, salt);
  }
  private async getUserByEmail(email: string) {
    const params = {
      TableName: process.env.DYNAMODB_TABLE,
      FilterExpression: 'email = :email',
      ExpressionAttributeValues: {
        ':email': email,
      },
      ProjectionExpression: 'id, email, firstName, lastName, password',
    };

    const res = await this.db.scan(params).promise();
    if (res.$response.error) {
      throw new InternalServerErrorException(res.$response.error);
    }
    console.log(res.Items[0]);
    return res.Items[0];
  }

  private async createUser(user: any) {
    const { email, firstName, lastName, password } = user;
    const id = crypto.randomUUID();
    const data = {
      TableName: process.env.DYNAMODB_TABLE,
      Item: {
        id: id,
        email,
        firstName,
        lastName,
        password,
      },
    };

    const res = await this.db
      .put({
        ...data,
      })
      .promise();
    if (res.$response.error) {
      throw new InternalServerErrorException(res.$response.error.message);
    }

    return { id, email, firstName, lastName };
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;getUserByEmail(): This private method queries the DynamoDB table to find a user with the given email. It returns the first user that matches the email, if any. If there is an error with the scan operation, it throws an error.&lt;/li&gt;
&lt;li&gt;createUser(): This private method adds a new user to the DynamoDB table. The user’s id is randomly generated, and the provided email, first name, last name, and password are stored in the table. If there is an error with the put operation, it throws an error.&lt;/li&gt;
&lt;li&gt;updateRefreshToken(): This private method updates a user’s refresh token in the DynamoDB table. If there is an error with the update operation, it throws an error.&lt;/li&gt;
&lt;li&gt;hash(): This private method hashes a password using bcrypt. It first generates a salt using bcrypt.genSalt() with the number of salt rounds defined in jwtConstants, then hashes the password with this salt using bcrypt.hash().&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Signin:
&lt;/h4&gt;

&lt;p&gt;Now that we are done with the signup method and all its helper methods, it is time to move on to signin.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#apps/auth/src/main.ts

export const signin = async (
  event: any,
  _context: Context,
  _callback: Callback,
) =&amp;gt; {
  const appContext = await NestFactory.createApplicationContext(AppModule);
  const appService = appContext.get(AppService);

  const { email, password } = JSON.parse(event.body);

  try {
    const result = await appService.signIn(email, password);
    return {
      statusCode: 200,
      body: JSON.stringify({
        success: true,
        data: result,
      }),
    };
  } catch (error) {
    console.log(error);
    return {
      statusCode: 401,
      body: JSON.stringify(error.response ?? error.message),
    };
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This signin function accepts an event, context, and callback. It extracts the email and password from the request body. These are utilized when invoking the signIn method of appService. If the sign-in procedure is successful, a 200 status code and the result of the operation are returned. Should an error occur, it responds with a 401 status code and an error message.&lt;/p&gt;

&lt;p&gt;Now let’s take a look at our signin service, which handles the signin logic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  async signIn(email: string, pass: string) {
    const user = await this.getUserByEmail(email);

    if (!user) {
      throw new BadRequestException('Wrong credentials');
    }

    const match = await bcrypt.compare(pass, user?.password);

    if (!match) {
      throw new BadRequestException('Wrong credentials');
    }

    const payload = {
      email: user.email,
      firstName: user.firstName,
      lastName: user.lastName,
      sub: user.id,
    };

    const accessToken = await this.jwtService.signAsync(payload);
    const refreshToken = await this.jwtService.signAsync(payload, {
      secret: jwtConstants.refreshTokenSecret,
      expiresIn: `${jwtConstants.refreshExpiresIn} min`,
    });

    this.updateRefreshToken(user.id, refreshToken);
    return {
      access_token: accessToken,
      refresh_token: refreshToken,
    };
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function receives the email and password (pass) as parameters and does the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It retrieves the user’s details from the database using the getUserByEmail method.&lt;/li&gt;
&lt;li&gt;If there’s no user found with the provided email, it throws a BadRequestException error with the message 'Wrong credentials'.&lt;/li&gt;
&lt;li&gt;Next, it checks whether the provided password matches the user’s password stored in the database. The bcrypt.compare method is used to compare the hashed version of the input password with the hashed version stored in the database.&lt;/li&gt;
&lt;li&gt;If the passwords don’t match, it throws a BadRequestException error with the message 'Wrong credentials'.&lt;/li&gt;
&lt;li&gt;If the user is found and the passwords match, it prepares a payload with the user’s details.&lt;/li&gt;
&lt;li&gt;The method then creates a JWT access token and a refresh token using the jwtService.signAsync method. The payload is used as the data for these tokens. The refresh token has a different secret and expires after a defined amount of time.&lt;/li&gt;
&lt;li&gt;The updateRefreshToken method is called to associate the new refresh token with the user in the database.&lt;/li&gt;
&lt;li&gt;Finally, the method returns both the access token and the refresh token. These can be used by the client application to make authenticated requests and renew the access token when it expires, respectively.&lt;/li&gt;
&lt;/ol&gt;
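Since both tokens carry the same payload, it can help to see that a JWT is just three base64url segments, with the claims readable in the middle one. Below is a small inspection-only sketch (the fake token is built by hand just for the demo; signature verification is what jwtService.verifyAsync is for, and a payload read this way must never be trusted):

```typescript
// Decode the middle segment of a JWT to read its claims.
// This does NOT verify the signature.
function decodeJwtPayload(token: string): any {
  const middle = token.split(".")[1];
  const json = Buffer.from(middle, "base64url").toString("utf8");
  return JSON.parse(json);
}

// Build a fake token just to demonstrate the decoding
// (the header and signature segments are dummies).
const claims = { email: "user@example.com", sub: "42" };
const fakeToken = [
  "header",
  Buffer.from(JSON.stringify(claims)).toString("base64url"),
  "signature",
].join(".");

console.log(decodeJwtPayload(fakeToken).sub); // 42
```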

&lt;p&gt;&lt;strong&gt;Refresh Token:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let’s write the Lambda function that takes care of refreshing the token when it expires.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const refreshToken = async (
  event: any,
  _context: Context,
  _callback: Callback,
) =&amp;gt; {
  const appContext = await NestFactory.createApplicationContext(AppModule);
  const appService = appContext.get(AppService);

  const { refreshToken } = JSON.parse(event.body);

  try {
    const result = await appService.refreshToken(refreshToken);
    return {
      statusCode: 200,
      body: JSON.stringify({
        success: true,
        data: result,
      }),
    };
  } catch (error) {
    console.log(error);
    return {
      statusCode: 403,
      body: JSON.stringify(error.response ?? error.message),
    };
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This refreshToken function takes an event, context, and callback as inputs. From the request body, it extracts the refreshToken. This is then used when calling the refreshToken method of appService. If the refresh operation is successful, a 200 status code and the result of the process are returned. This would typically be a new access token. However, if an error occurs, the function responds with a 403 status code and the respective error message. The error could occur for various reasons, such as the provided refreshToken is invalid or expired.&lt;/p&gt;

&lt;p&gt;Now let’s take a look at our refreshToken service, which handles the logic that refreshes the token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  async refreshToken(refreshToken: string) {
    // Claims decoded from the token; the database user is fetched separately.
    let tokenPayload: any;
    try {
      tokenPayload = await this.jwtService.verifyAsync(refreshToken, {
        secret: jwtConstants.refreshTokenSecret,
      });
    } catch (e) {
      console.log(e);
      throw new InternalServerErrorException(
        'Error while validating refresh token',
      );
    }

    const user = await this.getUserByEmail(tokenPayload.email);
    if (!user) {
      throw new BadRequestException('Invalid refresh token');
    }

    if (user.refreshToken &amp;amp;&amp;amp; user.refreshToken !== refreshToken) {
      console.log('refresh token does not match.');
      throw new BadRequestException('Invalid refresh token');
    }

    const payload = {
      email: user.email,
      firstName: user.firstName,
      lastName: user.lastName,
      sub: user.id,
    };

    return {
      access_token: await this.jwtService.signAsync(payload),
    };
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The refreshToken function accepts the refreshToken as a parameter and proceeds as follows:&lt;/p&gt;

&lt;p&gt;1- It attempts to verify the refreshToken using the jwtService.verifyAsync method. If there's an issue with the verification, an error is logged, and it throws an InternalServerErrorException with the message 'Error while validating refresh token'.&lt;/p&gt;

&lt;p&gt;2- Once the refreshToken is verified, the payload (which contains the user's information) is extracted from the refreshToken and used to retrieve the user's data from the database using the getUserByEmail method.&lt;/p&gt;

&lt;p&gt;3- If no user is found, or the refreshToken stored in the database for the user does not match the provided refreshToken, it throws a BadRequestException with the message 'Invalid refresh token'.&lt;/p&gt;

&lt;p&gt;4- If the user exists and the refreshToken is valid, it prepares a new payload with the user's details. The payload includes the user's email, first name, last name, and id.&lt;/p&gt;

&lt;p&gt;5- Finally, it generates a new JWT access token using the jwtService.signAsync method, with the payload as the data for this token and returns this new access token. The client application can use this new access token for further authenticated requests.&lt;/p&gt;

&lt;p&gt;This process helps ensure that the user is still valid and has the right to access the resources, even when the access token is expired but the refresh token is still valid. This reduces the need for the user to provide their credentials again, improving the user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authorizer:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we arrive at the main subject of this post: writing an authorizer that is going to be used by the other services and by our API Gateway to check whether a request is valid or not.&lt;/p&gt;

&lt;p&gt;As a reminder, we already created the required configuration in our serverless.yaml file. Below are the most important parts your file must have to make the authorizer work with the API Gateway and with the services and Lambda functions in the other serverless.yaml files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: auth

provider:
  ...
  apiGateway:
    restApiId:
      'Fn::ImportValue': MyApiGateway-restApiId
    restApiRootResourceId:
      'Fn::ImportValue': MyApiGateway-rootResourceId

...

functions:
  ...
  authorizer:
    handler: dist/main.authorizer

resources:
  Resources:
    Authorizer:
      Type: AWS::ApiGateway::Authorizer
      Properties: 
        Name: ${self:provider.stage}-Authorizer
        RestApiId: 
          'Fn::ImportValue': MyApiGateway-restApiId
        Type: TOKEN
        IdentitySource: method.request.header.Authorization
        AuthorizerResultTtlInSeconds: 300
        AuthorizerUri:
          Fn::Join:
            - ''
            - 
              - 'arn:aws:apigateway:'
              - Ref: "AWS::Region"
              - ':lambda:path/2015-03-31/functions/'
              - Fn::GetAtt: "AuthorizerLambdaFunction.Arn"
              - "/invocations"
  Outputs:
   AuthorizerId:
     Value:
       Ref: Authorizer
     Export:
       Name: authorizerId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now go back to our auth/main.ts and write some code for our authorizer. The provided code is an AWS Lambda function that serves as an “authorizer” in the context of AWS API Gateway: it is invoked before your actual business-logic function to verify that the incoming request has the necessary permissions to perform the intended action. Let’s walk through the function:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The function retrieves the accessToken from the event.authorizationToken property by removing the 'Bearer ' prefix from it.&lt;/li&gt;
&lt;li&gt;It also extracts the methodArn from the event.methodArn. methodArn is the Amazon Resource Name (ARN) of the incoming request. This is an identifier that AWS uses to identify individual resources. It represents the requested resource (like an API method) to be accessed.&lt;/li&gt;
&lt;li&gt;It then calls the authorizer method from AppService, passing accessToken and methodArn. This method should return a policy document that tells API Gateway which resources this token is allowed to access.&lt;/li&gt;
&lt;li&gt;If the authorizer function succeeds and the user is authorized, it returns a policy document by calling the callback function with null as the first parameter and the authorization response as the second parameter.&lt;/li&gt;
&lt;li&gt;If an error occurs, it calls the callback function with null as the first parameter and an error (deny) response as the second.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now we can explain why we use methodArn and callback in the authorizer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The reason for extracting the methodArn is to generate a policy document specific to the API method being accessed. A user may have different permissions for different API methods. The methodArn helps us identify which API method the user is trying to access so that we can generate the correct policy document.&lt;/li&gt;
&lt;li&gt;The callback function is part of the AWS Lambda handler. In an AWS Lambda function, the callback function is used to signal the end of the function’s execution and return a response to the service that invoked the Lambda function. It’s essential to call the callback function once you’re done with your processing, or AWS Lambda will continue to wait until the function execution times out.&lt;/li&gt;
&lt;/ul&gt;
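&lt;p&gt;Putting the steps above together, the handler in auth/main.ts looks roughly like the sketch below. The NestJS bootstrapping is omitted and appService is stubbed inline so the example is self-contained; in the real code the service is resolved from the application context, and the stub's token value and principal id are made up for illustration.&lt;/p&gt;

```typescript
// Sketch of the Lambda authorizer handler described above, with AppService stubbed.
type Callback = (error: any, result?: any) => void;

const appService = {
  // Returns a policy document telling API Gateway what the token may access.
  // The 'valid-token' check stands in for real JWT verification.
  async authorizer(accessToken: string, methodArn: string) {
    if (accessToken !== 'valid-token') throw new Error('Invalid access token');
    return {
      principalId: 'user-1',
      policyDocument: {
        Version: '2012-10-17',
        Statement: [{ Action: 'execute-api:Invoke', Effect: 'Allow', Resource: methodArn }],
      },
    };
  },
};

export const authorizer = async (event: any, _context: any, callback: Callback) => {
  // Step 1: strip the 'Bearer ' prefix to get the raw access token.
  const accessToken = (event.authorizationToken ?? '').replace('Bearer ', '');
  // Step 2: the ARN of the API method the caller is trying to invoke.
  const methodArn = event.methodArn;

  try {
    // Steps 3-4: delegate to the service and hand the policy back to API Gateway.
    const policy = await appService.authorizer(accessToken, methodArn);
    callback(null, policy);
  } catch (error) {
    // Step 5: signal the failure through the callback with a deny-style response.
    callback(null, { principalId: 'None', policyDocument: null });
  }
};
```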

&lt;p&gt;Now let’s dive deeper into our authorizer service and explain how its logic works. Below is the authorizer service along with the two helper functions it uses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  async authorizer(accessToken: string, methodArn: any) {
    if (!accessToken || !methodArn)
      return this.generateAuthResponse('None', 'Deny', 'None');

    // verifies token
    const decoded = await this.verifyToken(accessToken);
    if (decoded &amp;amp;&amp;amp; decoded.sub) {
      return this.generateAuthResponse(decoded.sub, 'Allow', methodArn);
    } else {
      return this.generateAuthResponse('None', 'Deny', methodArn);
    }
  }

  async verifyToken(accessToken: string) {
    console.log('verify token', accessToken);
    try {
      return await this.jwtService.verifyAsync(accessToken);
    } catch (e) {
      throw new BadRequestException('Invalid access token');
    }
  }

  private generateAuthResponse(principalId, effect, methodArn) {
    const policyDocument = this.generatePolicyDocument(effect, methodArn);

    return {
      principalId,
      policyDocument,
    };
  }

  private generatePolicyDocument(effect, methodArn) {
    if (!effect || !methodArn) return null;

    const policyDocument = {
      Version: '2012-10-17',
      Statement: [
        {
          Action: 'execute-api:Invoke',
          Effect: effect,
          Resource: methodArn,
        },
      ],
    };

    return policyDocument;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;authorizer(accessToken: string, methodArn: any): This function is the authorizer for the API. It checks if an accessToken and methodArn (a unique identifier for the method to be authorized) are provided. If either is missing, it generates an authorization response denying access. If both are provided, it verifies the token. If the token is valid and contains a sub (subject) field, it generates an authorization response allowing access to the methodArn. If the token is not valid, it generates a response denying access.&lt;/li&gt;
&lt;li&gt;generateAuthResponse(principalId, effect, methodArn): This function generates an authorization response. It creates a policy document (using generatePolicyDocument(effect, methodArn)) and combines it with the principalId to form the response. The principalId is the identifier of the user or role for which the policy is being created.&lt;/li&gt;
&lt;li&gt;generatePolicyDocument(effect, methodArn): This function generates a policy document. A policy document is a structured policy that AWS uses to evaluate whether to allow or deny access to a specific AWS resource. This function accepts an effect (either 'Allow' or 'Deny') and a methodArn and constructs a policy document from them.&lt;/li&gt;
&lt;li&gt;verifyToken(accessToken: string): This function verifies if the provided accessToken is valid. It uses the jwtService.verifyAsync method to decode the token and confirm its legitimacy. If the token is invalid, it throws a BadRequestException error with the message 'Invalid access token'.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For more details about method arn, callback, and generating policy documents check the links below :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/arn-format-reference.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/apigateway/latest/developerguide/arn-format-reference.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Enabling CORS
&lt;/h3&gt;

&lt;p&gt;In order for our authorizer to work, we need to import it in our &lt;code&gt;serverless.yaml&lt;/code&gt; file and enable CORS for the functions that accept PUT and POST requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const updateItem: Handler = async (
  event: any,
  _context: Context,
  _callback: Callback,
) =&amp;gt; {
  const appContext = await NestFactory.createApplicationContext(ItemsModule);
  const appService = appContext.get(ItemsService);
  const token = appService.extractTokenFromHeader(event);
  const { id } = event.pathParameters;
  const { name, description } = JSON.parse(event.body);

  try {
    const res = await appService.updateItem(
      id,
      {
        name,
        description,
      },
      token,
    );
    return {
      statusCode: HttpStatus.OK,
      body: JSON.stringify(res),
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': '*',
        'Access-Control-Allow-Credentials': true,
      },
    };
  } catch (error) {
    console.log(error);
    return {
      statusCode: HttpStatus.BAD_REQUEST,
      body: JSON.stringify(error.response ?? error.message),
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': '*',
        'Access-Control-Allow-Credentials': true,
      },
    };
  }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One key thing to note in this function is the headers object in the response. This is related to the concept of Cross-Origin Resource Sharing (CORS). CORS is a mechanism that uses additional HTTP headers to tell browsers to give a web application running at one origin, access to selected resources from a different origin.&lt;/p&gt;

&lt;p&gt;In this code, the headers 'Access-Control-Allow-Origin': '*' and 'Access-Control-Allow-Methods': '*' are set to '*', which means all origins and all methods are allowed. In a production environment, it's typically recommended for security reasons to restrict these to only the origins and methods that are actually needed. For example, you could restrict the origins to '&lt;a href="https://yourwebsite.com" rel="noopener noreferrer"&gt;https://yourwebsite.com&lt;/a&gt;' and the methods to 'GET, POST' as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'Access-Control-Allow-Origin': 'https://yourwebsite.com',
'Access-Control-Allow-Methods': 'GET, POST'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s a quick overview of the code. As always, keep in mind to secure your applications by not exposing sensitive data in error messages and by implementing proper authentication and authorization mechanisms.&lt;/p&gt;

&lt;p&gt;Now if you run npm run deploy and then check your AWS API Gateway console, you will find this authorizer. Trying to access the updateItem function will return an error until you provide a valid access token.&lt;/p&gt;

&lt;p&gt;That’s all for this post! I’ve shared my journey on learning to build a basic API using NestJS, AWS, and the Serverless Framework. I plan to write more about my experiences with these tools in future posts. I hope you find this helpful, and maybe even learn something new!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Serverless Application with NestJS and the Serverless Framework: API portal</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Sat, 15 Jul 2023 23:15:21 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/building-a-serverless-application-with-nestjs-and-the-serverless-framework-api-portal-2ff</link>
      <guid>https://forem.com/ezpzdevelopement/building-a-serverless-application-with-nestjs-and-the-serverless-framework-api-portal-2ff</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XmV-gZTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/641/0%2AuKoMv7tpa_aBD_m_.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XmV-gZTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/641/0%2AuKoMv7tpa_aBD_m_.png" alt="Header image for Building a Serverless Application with NestJS and the Serverless Framework: API portal " width="641" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Table of Contents:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Using a Single API Portal&lt;/li&gt;
&lt;li&gt;The end&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Introduction:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In my last blog post, we walked through the process of building a straightforward application with the Serverless Framework and NestJS. Today, we’re going to take a step further. Our primary focus will be on having all apps under a single AWS API Gateway, as opposed to each having its separate gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
&lt;strong&gt;A Single API Portal:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As we said, each of our three apps had its own API portal. The first thing I did was go to the Serverless documentation, where I found this &lt;a href="https://www.serverless.com/framework/docs/providers/aws/events/apigateway#easiest-and-cicd-friendly-example-of-using-shared-api-gateway-and-api-resources"&gt;post&lt;/a&gt; explaining how to create a configuration for an API portal and expose it as an output for the other apps and serverless files to use.&lt;/p&gt;

&lt;p&gt;So here is what I did: first I created a configuration folder named config and added a new serverless file to it, so my file structure now looks like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apps
  users
    src
      app.controller.ts
      app.module.ts
      app.service.ts
      main.ts
      serverless.yaml
    tsconfig.app.json
  items
    src
      app.controller.ts
      app.module.ts
      app.service.ts
      main.ts
      serverless.yaml
    tsconfig.app.json
config
  serverless.yaml
nest-cli.json
serverless-compose.yaml
package.json
tsconfig.json
.eslintrc.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also added this config folder to the serverless-compose.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  users:
    path: apps/users
    dependsOn: config
  items:
    path: apps/items
    dependsOn: config
  config:
    path: config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After finishing the file configuration part, I moved back to my config/serverless.yaml file and added the following code to it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: config

provider:
  name: aws
  runtime: nodejs16.x
  stage: dev
  region: eu-west-3

resources:
  Resources:
    MyApiGW:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: MyApiGW
  Outputs:
    apiGatewayRestApiId:
      Value:
        Ref: MyApiGW
      Export:
        Name: MyApiGateway-restApiId

    apiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - MyApiGW
          - RootResourceId
      Export:
        Name: MyApiGateway-rootResourceId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s break down this file and explain each line of it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;service: config: Here, we're simply naming our service "config".&lt;/li&gt;
&lt;li&gt;provider:: This section gives the details about our cloud service provider. We specify we're using Amazon Web Services (name: aws), with Node.js 16.x as the runtime environment (runtime: nodejs16.x). We also denote we're working in the development stage (stage: dev) in the region 'eu-west-3' (region: eu-west-3).&lt;/li&gt;
&lt;li&gt;resources:: In this block, we define the AWS resources used in our service.&lt;/li&gt;
&lt;li&gt;MyApiGW:: We name our API Gateway "MyApiGW".&lt;/li&gt;
&lt;li&gt;Type: AWS::ApiGateway::RestApi: We specify we're creating a RESTful API using AWS's API Gateway.&lt;/li&gt;
&lt;li&gt;Properties: Name: MyApiGW: We set the name of the API Gateway under properties.&lt;/li&gt;
&lt;li&gt;Outputs:: This part is where we define the output values from our serverless service. These are useful for cross-stack resource sharing or for sharing references with other services.&lt;/li&gt;
&lt;li&gt;apiGatewayRestApiId:: We're exporting the ID of the API Gateway here. The value is a reference to the API Gateway we defined earlier (Ref: MyApiGW). The exported output is named "MyApiGateway-restApiId" (Export: Name: MyApiGateway-restApiId).&lt;/li&gt;
&lt;li&gt;apiGatewayRestApiRootResourceId:: Similarly, we're exporting the Root Resource ID of the API Gateway. The value is the "RootResourceId" attribute from the API Gateway (Fn::GetAtt: - MyApiGW - RootResourceId). The exported output is named "MyApiGateway-rootResourceId" (Export: Name: MyApiGateway-rootResourceId).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more details about this make sure to check :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/framework/docs/providers/aws/events/apigateway"&gt;Serverless API Gateway V1 documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;There is also another way to achieve the same results by following &lt;a href="https://www.serverless.com/framework/docs/providers/aws/events/http-api"&gt;Serverless API Gateway V2 documentation&lt;/a&gt;, API Gateway V2 is faster and cheaper than V1 according to the documentation.&lt;/li&gt;
&lt;li&gt;Also, check &lt;a href="https://aws.amazon.com/api-gateway/"&gt;this link&lt;/a&gt; for a simple explanation.&lt;/li&gt;
&lt;li&gt;And &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html"&gt;this one&lt;/a&gt; contains better documentation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We have now successfully set up our configuration file. The next step is to instruct our two applications, users and items, to use the API Gateway that we’ve established in our configuration folder. Let’s proceed with that.&lt;/p&gt;

&lt;p&gt;Now that we’re all set with the configuration, let’s navigate to users/serverless.yaml and items/serverless.yaml Within the provider section of these files, we’ll add the following lines of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# users/serverless.yaml and items/serverless.yaml
provider:
   ....
  apiGateway:
      restApiId:
        'Fn::ImportValue': MyApiGateway-restApiId
      restApiRootResourceId:
        'Fn::ImportValue': MyApiGateway-rootResourceId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s explain the simple changes that we added to the two files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;provider.apiGateway.restApiId - This tells Serverless Framework to use an existing AWS API Gateway, instead of creating a new one for this service. The existing API Gateway's ID is provided by importing the value from CloudFormation's output named "MyApiGateway-restApiId". This is useful when you want to manage the API Gateway separately from the Serverless application or share it among multiple Serverless applications.&lt;/li&gt;
&lt;li&gt;provider.apiGateway.restApiRootResourceId - Similarly, this tells Serverless Framework to use an existing root resource in the AWS API Gateway, instead of creating a new one. The existing root resource's ID is provided by importing the value from CloudFormation's output named "MyApiGateway-rootResourceId". This is again useful when the root resource is managed separately or shared among multiple Serverless applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Fn::ImportValue is a CloudFormation intrinsic function used to import values exported by other stacks in the same AWS account and region. This allows for cross-stack references, sharing resources and values among different CloudFormation stacks, which can be useful for managing larger architectures or when it is beneficial to have separate stacks for different concerns.&lt;/p&gt;

&lt;p&gt;For a more comprehensive understanding of CloudFormation’s intrinsic functions and stacks, I highly recommend diving into the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html"&gt;AWS CloudFormation stack updates&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html"&gt;Intrinsic function reference&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we simply need to run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script was already added to our package.json file in the root directory of the app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"scripts": {
  "build:users": "nest build --tsc users",
  "build:items": "nest build --tsc items",
  "build:all": "npm run build:users &amp;amp;&amp;amp; npm run build:items",
  ...
  "deploy": "npm install &amp;amp;&amp;amp; npm run build:all &amp;amp;&amp;amp; npx serverless deploy",
  ...
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Following these steps, you can navigate to your AWS console and locate API Gateway. There, you will discover a single, centralized API Gateway managing all of your application links. Its interface should resemble the screenshot provided below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F-OIAx2W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AqG4b92y_D8kp-mm2q_tWKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F-OIAx2W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AqG4b92y_D8kp-mm2q_tWKw.png" alt="an Image that show how the api portal will look like after folloàwing the required steps" width="800" height="359"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;An illustrative screenshot showcasing a custom API Gateway, constructed with the method followed in this blog post, utilizing NestJS and the Serverless Framework.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So instead of seeing separate links in the dashboard, you will see the items and books links under a single gateway.&lt;/p&gt;

&lt;p&gt;In summary, here’s a concise roadmap to consolidate our multiple apps under a single API gateway:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Construct a configuration file containing the YAML setup for the API gateway, and ensure it exports the &lt;strong&gt;apiGatewayRestApiId&lt;/strong&gt; and &lt;strong&gt;apiGatewayRestApiRootResourceId&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Incorporate these two exported values into the provider section of all our application-specific &lt;strong&gt;serverless.yaml&lt;/strong&gt; files.&lt;/li&gt;
&lt;li&gt;Deploy the app and validate our unified API gateway via the AWS console.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The end
&lt;/h3&gt;

&lt;p&gt;I hope this post has offered some insight into my learning journey and the steps I’ve taken to create an app using NestJS and Serverless Framework. If you’ve made it this far, I want to extend my sincere gratitude for your time and interest.&lt;/p&gt;

&lt;p&gt;Your feedback means a lot to me. If you have any suggestions or simply want to share your own experiences, I invite you to leave a comment or get in touch.&lt;/p&gt;

&lt;p&gt;Stay tuned for my upcoming post, where I will explore adding authentication to the application. Thank you.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Concise Guide to Utilizing HashiCorp Vault in Production</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Mon, 26 Jun 2023 22:53:28 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/a-concise-guide-to-utilizing-hashicorp-vault-in-production-1l44</link>
      <guid>https://forem.com/ezpzdevelopement/a-concise-guide-to-utilizing-hashicorp-vault-in-production-1l44</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ArTyEDKqJn7hh6Czh4FpTEQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ArTyEDKqJn7hh6Czh4FpTEQ.png" alt="thumbnail for the post A Concise Guide to Utilizing HashiCorp Vault in Production"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In today’s digital landscape, the need to protect sensitive information has become paramount. Secrets management is the practice of securely storing, managing and distributing critical data such as passwords, API keys, database credentials, and encryption keys. By implementing robust practices and utilizing specialized tools, secrets management aims to safeguard these secrets from unauthorized access and misuse.&lt;/p&gt;

&lt;h4&gt;
  
  
  HashiCorp Vault
&lt;/h4&gt;

&lt;p&gt;HashiCorp Vault is a popular open-source tool designed for secure secrets management and data protection. It offers a comprehensive solution for storing, accessing, and managing sensitive information, such as passwords, API keys, certificates, and encryption keys.&lt;/p&gt;

&lt;h4&gt;
  
  
  Key Features and Capabilities
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F402%2F1%2AuGoNfUyVFb9K0mEDF3RPdw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F402%2F1%2AuGoNfUyVFb9K0mEDF3RPdw.gif" alt="Key features capabilities welcome image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Centralized Secrets Management&lt;/strong&gt; : Vault provides a central repository for securely storing secrets, eliminating the need to store sensitive information in configuration files or source code. This centralized approach improves security by reducing the risk of accidental exposure or unauthorized access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Storage:&lt;/strong&gt; Vault encrypts secrets at rest using industry-standard encryption algorithms. It ensures that sensitive information remains encrypted when stored in backend storage systems, adding an extra layer of protection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Secrets:&lt;/strong&gt; Vault can generate dynamic secrets on-demand for various resources such as databases, cloud providers, and more. These dynamic secrets have short lifetimes and are automatically revoked after a certain period or when no longer needed. This approach minimizes the risk of credentials being compromised or misused.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditing and Logging&lt;/strong&gt; : Vault keeps detailed logs of all operations, including access attempts, secret retrieval, and modification. These audit logs enable organizations to track and monitor secret usage, aiding in compliance efforts and troubleshooting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration and Extensibility&lt;/strong&gt; : Vault integrates with various authentication systems, identity providers, and cloud platforms. It provides a flexible API and SDKs for developers to build custom integrations and automate secrets management processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encryption as a Service&lt;/strong&gt; : Vault provides encryption as a service, allowing users to encrypt and decrypt data using various encryption algorithms. This feature ensures the confidentiality and integrity of secrets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Secrets Revocation&lt;/strong&gt; : Vault supports automatic revocation of dynamic secrets after a predefined period or when no longer needed. This helps mitigate the risk of unauthorized access to secrets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Encryption and Decryption&lt;/strong&gt; : Vault allows users to encrypt and decrypt secrets using encryption keys managed within the system. This feature ensures that secrets are protected and can be securely transmitted across different systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Common Use Cases For Vault
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Database Credential Management&lt;/strong&gt; : Vault can securely store and manage credentials for databases, such as usernames, passwords, and connection strings. Applications can retrieve these credentials dynamically from Vault, reducing the risk of hardcoded credentials and improving security&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Key Management:&lt;/strong&gt; Vault can be used to store and distribute API keys securely. By centralizing API key management in Vault, organizations can enforce access control, monitor usage, and rotate keys when needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encryption Key Management&lt;/strong&gt; : Vault can generate, store, and manage encryption keys used for data encryption. This ensures that encryption keys are securely stored and protected, reducing the risk of unauthorized access to sensitive data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token Management:&lt;/strong&gt; Vault can generate and manage tokens used for authentication and authorization. It provides fine-grained control over token creation, revocation, and expiration, allowing organizations to enforce security policies and manage access to resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH Key Management&lt;/strong&gt; : Vault can act as an SSH key manager, providing a secure repository for storing and distributing SSH keys. This centralizes the management of SSH keys, simplifying access control and auditing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Vault Installation
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Note : Please note that the installation instructions provided below are specific to Linux. If you are using a different operating system, we recommend referring to the official &lt;a href="https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install" rel="noopener noreferrer"&gt;&lt;strong&gt;documentation&lt;/strong&gt;&lt;/a&gt; for the appropriate installation steps&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;HashiCorp provides a variety of &lt;a href="https://www.hashicorp.com/official-release-channels" rel="noopener noreferrer"&gt;Official Release Channels&lt;/a&gt;, including the Debian package repository used below. See more &lt;a href="https://www.hashicorp.com/official-packaging-guide?ajs_aid=aab3ac1b-49be-4d65-9839-38c081e8d3e0&amp;amp;product_intent=vault" rel="noopener noreferrer"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GPG is required to verify the package signing key&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download the signing key to a new keyring&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the key’s fingerprint, add the HashiCorp repository, then update the package index and install Vault&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update

sudo apt install vault

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that Vault was installed correctly by running the vault command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault

Usage: vault &amp;lt;command&amp;gt; [args]

Common commands:
    read         Read data and retrieves secrets
    write        Write data, configuration, and secrets
    delete       Delete secrets and configuration
    list         List data or secrets
    login        Authenticate locally
    agent        Start a Vault agent
    server       Start a Vault server
    status       Print seal and HA status
    unwrap       Unwrap a wrapped secret

Other commands:
    audit        Interact with audit devices
    auth         Interact with auth methods
    debug        Runs the debug command
    kv           Interact with Vault's Key-Value storage
    lease        Interact with leases
    monitor      Stream log messages from a Vault server
    namespace    Interact with namespaces
    operator     Perform operator-specific tasks
    path-help    Retrieve API help for paths
    plugin       Interact with Vault plugins and catalog
    policy       Interact with policies
    print        Prints runtime configurations
    secrets      Interact with secrets engines
    ssh          Initiate an SSH session
    token        Interact with tokens
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Vault installed, the next step is to start a Vault server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Starting the Server
&lt;/h3&gt;

&lt;p&gt;You can experiment with Vault using the “dev” server, which automatically unseals Vault, sets up in-memory storage, and more. The dev server is not covered here; see the &lt;a href="https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-dev-server#starting-the-dev-server" rel="noopener noreferrer"&gt;&lt;strong&gt;doc&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Deploying Vault in a production environment&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The /var/lib/vault/data directory that Vault uses for storage must exist (you can choose a different location)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /var/lib/vault/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Vault is configured using &lt;a href="https://github.com/hashicorp/hcl" rel="noopener noreferrer"&gt;HCL&lt;/a&gt; files&lt;/p&gt;

&lt;p&gt;Create the Vault configuration in the file config.hcl&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;listener "tcp" {
  address = "127.0.0.1:8200"
  tls_disable = true
}

api_addr = "http://127.0.0.1:8200"

storage "file" {
  path = "/var/lib/vault/data"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down the configuration file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;listener&lt;/strong&gt; block defines the network interface and port where Vault will listen for API requests. In this example, it's set to &lt;strong&gt;127.0.0.1:8200&lt;/strong&gt; , which means Vault will listen on the loopback address on port 8200.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tls_disable&lt;/strong&gt; is set to &lt;strong&gt;true&lt;/strong&gt; to disable TLS (Transport Layer Security) encryption for simplicity. In a production environment, it's recommended to use TLS.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;api_addr&lt;/strong&gt; configuration specifies the base URL for accessing the Vault API. In this case, it's set to &lt;strong&gt;&lt;a href="http://127.0.0.1:8200" rel="noopener noreferrer"&gt;http://127.0.0.1:8200&lt;/a&gt;&lt;/strong&gt; to match the listener address.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;storage&lt;/strong&gt; block configures the storage backend for Vault. In this example, it's set to a file storage backend with the path &lt;strong&gt;/var/lib/vault/data&lt;/strong&gt;. You can modify this path based on your needs or use a different storage backend like "consul" or "etcd" for production environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start the server, setting the -config flag to the path where you saved the configuration above&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault server -config=/etc/vault/config.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Initializing the Vault
&lt;/h4&gt;

&lt;p&gt;Initialization is the process of configuring Vault.&lt;/p&gt;

&lt;p&gt;This only happens once when the server is started against a new backend that has never been used with Vault before.&lt;/p&gt;

&lt;p&gt;Launch a new terminal session, and set VAULT_ADDR environment variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export VAULT_ADDR='http://127.0.0.1:8200'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To initialize Vault use &lt;strong&gt;vault operator init&lt;/strong&gt;. This is an &lt;em&gt;unauthenticated&lt;/em&gt; request, but it only works on brand new Vaults without existing data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault operator init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Unseal Key 1: LhMnWcFuP/alHmtIlWTf8cp7ONbz0WFLiy28FG1IcQBK
Unseal Key 2: VU2lbINIzkovgTmHihSiTgxXUGsHnrJwLb/Q2V9FJ1Vc
Unseal Key 3: vo2lwvK8AqUHq87ASUDYoIpjO7jJnI2QJrOrQnRO9FGJ
Unseal Key 4: XMHENMUX8RPoI8AF2dIPx7/pJS+EeTywmJSHF/kt970B
Unseal Key 5: lE8hetFDRFoQ61NKJ3nNLRH/QKhD+sno5zlnKU/+z5sv

Initial Root Token: hvs.57tWqWNKEVgbyF1zPcETh4Qt

Vault initialized with 5 key shares and a key threshold of 3. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.

Vault does not store the generated root key. Without at least 3 keys to
reconstruct the root key, Vault will remain permanently sealed!

It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialization outputs two incredibly important pieces of information: the &lt;em&gt;unseal keys&lt;/em&gt; and the &lt;em&gt;initial root token&lt;/em&gt;. This is the only time ever that all of this data is known by Vault, and also the only time that the unseal keys should ever be so close together.&lt;/p&gt;

&lt;p&gt;For the purpose of this tutorial, save all of these keys somewhere, and continue. In a real deployment scenario, you would never save these keys together. Instead, you would likely use Vault’s PGP and &lt;a href="http://keybase.io/" rel="noopener noreferrer"&gt;Keybase.io&lt;/a&gt; support to encrypt each of these keys with the users’ PGP keys. This prevents one single person from having all the unseal keys. Please see the documentation &lt;a href="https://developer.hashicorp.com/vault/docs/concepts/pgp-gpg-keybase" rel="noopener noreferrer"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Seal/Unseal
&lt;/h4&gt;

&lt;p&gt;Every initialized Vault server starts in the &lt;em&gt;sealed&lt;/em&gt; state. From the configuration, Vault can access the physical storage, but it can’t read any of it because it doesn’t know how to decrypt it. The process of teaching Vault how to decrypt the data is known as &lt;em&gt;unsealing&lt;/em&gt; the Vault.&lt;/p&gt;

&lt;p&gt;Unsealing has to happen every time Vault starts. It can be done via the API and via the command line. To unseal the Vault, you must have the &lt;em&gt;threshold&lt;/em&gt; number of unseal keys. In the output above, notice that the “key threshold” is 3. This means that to unseal the Vault, you need 3 of the 5 keys that were generated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault operator unseal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To unseal the Vault you must provide three &lt;em&gt;different&lt;/em&gt; unseal keys; repeating the same key will not work.&lt;/p&gt;

&lt;p&gt;Run the command once per key. After you supply the threshold number of correct keys, you should see output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            5
Threshold               3
Version                 1.7.0
Storage Type            raft
Cluster Name            vault-cluster-0ba62cae
Cluster ID              7d49e5fd-a1a4-c1d1-55e2-7962e43006a1
HA Enabled              true
HA Cluster              n/a
HA Mode                 standby
Active Node Address     &amp;lt;none&amp;gt;
Raft Committed Index    24
Raft Applied Index      24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the value for Sealed changes to false, the Vault is unsealed.&lt;/p&gt;

&lt;h4&gt;
  
  
  Login
&lt;/h4&gt;

&lt;p&gt;Now authenticate as the initial root token (it was included in the output with the unseal keys).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While token-based authentication is enabled by default, Vault also supports various other authentication methods, such as username/password, LDAP, OIDC, GitHub, etc. These methods provide additional flexibility for authenticating users and applications with Vault.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the HTTP APIs with Authentication
&lt;/h3&gt;

&lt;p&gt;All of Vault’s capabilities are accessible via the HTTP API in addition to the CLI.&lt;/p&gt;

&lt;p&gt;Start a new Vault instance using the newly created configuration (stop the previous instance with Ctrl+C)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault server -config=/etc/vault/config.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launch a new terminal session, and use curl to initialize Vault with the API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl \
    --request POST \
    --data '{"secret_shares": 1, "secret_threshold": 1}' \
    http://127.0.0.1:8200/v1/sys/init | jq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "keys": [
    "ff27b63de46b77faabba1f4fa6ef44c948e4d6f2ea21f960d6aab0eb0f4e1391"
  ],
  "keys_base64": [
    "/ye2PeRrd/qruh9Ppu9EyUjk1vLqIflg1qqw6w9OE5E="
  ],
  "root_token": "s.Ga5jyNq6kNfRMVQk2LY1j9iu"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This response contains your initial root token and the unseal key. You can use the unseal key to unseal the Vault and the root token to perform other requests in Vault that require authentication.&lt;/p&gt;

&lt;p&gt;To make this tutorial easy to copy-and-paste, you will be using the environment variable $VAULT_TOKEN to store the root token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export VAULT_TOKEN="s.Ga5jyNq6kNfRMVQk2LY1j9iu"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the unseal key (not the root token) from above, you can unseal the Vault via the HTTP API.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl \
    --request POST \
    --data '{"key": "/ye2PeRrd/qruh9Ppu9EyUjk1vLqIflg1qqw6w9OE5E="}' \
    http://127.0.0.1:8200/v1/sys/unseal | jq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "type": "shamir",
  "initialized": true,
  "sealed": false,
  "t": 1,
  "n": 1,
  "progress": 0,
  "nonce": "",
  "version": "1.7.0",
  "migration": false,
  "cluster_name": "vault-cluster-1b34e68e",
  "cluster_id": "2cccf342-091a-b060-900b-04c29bb71ed4",
  "recovery_seal": false,
  "storage_type": "file"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can invoke the Vault API to validate the initialization status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://127.0.0.1:8200/v1/sys/init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "initialized": true }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Creating secrets
&lt;/h4&gt;

&lt;p&gt;Vault’s secret engines are components that enable the secure storage, generation, and management of secrets within Vault. They provide a way to interact with and manage different types of sensitive data, such as passwords, API keys, certificates, and more. Each secret engine has its own purpose and functionality.&lt;/p&gt;

&lt;p&gt;Some of the common secret engines in Vault:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key/Value (KV) Secret Engine&lt;/li&gt;
&lt;li&gt;Database Secret Engine&lt;/li&gt;
&lt;li&gt;AWS Secret Engine&lt;/li&gt;
&lt;li&gt;PKI (Public Key Infrastructure) Secret Engine&lt;/li&gt;
&lt;li&gt;Transit Secret Engine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the sake of simplicity, we will go with the Key/Value (KV) Secret Engine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The KV secret engine is the most basic and versatile secret engine in Vault.&lt;/li&gt;
&lt;li&gt;It allows you to store key-value pairs of arbitrary secrets within Vault.&lt;/li&gt;
&lt;li&gt;It provides a simple CRUD (Create, Read, Update, Delete) API to interact with secrets.&lt;/li&gt;
&lt;li&gt;It is often used for storing application configuration, database credentials, or other general-purpose secrets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check the list of enabled secret engines by running the following cURL command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --header "X-Vault-Token: $VAULT_TOKEN" http://127.0.0.1:8200/v1/sys/mounts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enable the KV secret engine and mount it at the secret path, use the following cURL command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --header "X-Vault-Token: $VAULT_TOKEN" --request POST --data '{"type": "kv-v2"}' http://127.0.0.1:8200/v1/sys/mounts/secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create your first secret:&lt;/strong&gt; create a simple secret named mysecret holding the object {"key": "value"}&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --header "X-Vault-Token: $VAULT_TOKEN" --request POST --data '{"data": {"key": "value"}}' http://127.0.0.1:8200/v1/secret/data/mysecret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"request_id":"ad28e5de-5eac-34bd-9252-fc9894da95cd","lease_id":"","renewable":false,"lease_duration":0,"data":{"created_time":"2023-06-23T00:27:21.7933063Z","custom_metadata":null,"deletion_time":"","destroyed":false,"version":1},"wrap_info":null,"warnings":null,"auth":null}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Retrieve the created secret&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --header "X-Vault-Token: $VAULT_TOKEN" http://127.0.0.1:8200/v1/secret/data/mysecret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"request_id":"f3ec354c-cf71-b499-a8f4-90cc30d0eab6","lease_id":"","renewable":false,"lease_duration":0,"data":{"data":{"key":"value"},"metadata":{"created_time":"2023-06-23T00:27:21.7933063Z","custom_metadata":null,"deletion_time":"","destroyed":false,"version":1}},"wrap_info":null,"warnings":null,"auth":null}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F480%2F1%2AcaXHLkExh2onxBFJaJcN9w.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F480%2F1%2AcaXHLkExh2onxBFJaJcN9w.gif" alt="conclusion for A Concise Guide to Utilizing HashiCorp Vault in Production"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, this tutorial provided a straightforward and concise guide on how to install, initialize, unseal, and access HashiCorp Vault using simple HTTP requests. By following these steps, you can easily get a Vault server running and start storing secrets. We hope you found this tutorial useful and that it assists you in your endeavors.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Things playing League of legends has taught me, a noob point of view</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Thu, 15 Jun 2023 22:00:32 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/things-playing-league-of-legends-has-taught-me-a-noob-point-of-view-4g1c</link>
      <guid>https://forem.com/ezpzdevelopement/things-playing-league-of-legends-has-taught-me-a-noob-point-of-view-4g1c</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DvXnYKuO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AXVzbt7JkFOdtOIvj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DvXnYKuO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AXVzbt7JkFOdtOIvj.jpg" alt="A thumbnail for Things playing League of legends has taught me, a noob point of view" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've been playing League of Legends since the end of Season 9, and guess what? I started playing &lt;a href="https://earlygame.com/lol/ranking-system"&gt;ranked&lt;/a&gt; once I reached level 30, and I got stomped, losing 70% to 75% of my games that season and ending up in Iron 3.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you want to know more about League of Legends, you can watch this &lt;a href="https://youtu.be/u9JdENUoGik"&gt;video&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After that, I started learning about &lt;a href="https://leagueoflegends.fandom.com/wiki/Farming#:~:text=Farming%20is%20an%20essential%20component,as%20Creep%20Score%20(CS)."&gt;farming&lt;/a&gt;, &lt;a href="https://mobalytics.gg/blog/wave-management/"&gt;wave management&lt;/a&gt;, and &lt;a href="https://dignitas.gg/articles/positioning-tips-for-each-role-in-league-of-legends"&gt;team fight positioning&lt;/a&gt;, and I managed to reach Bronze 2 after 250 games.&lt;/p&gt;

&lt;p&gt;Even after playing for three seasons, I'm still a hard-stuck noob in Bronze and Silver XD. I tried my best to learn the fundamentals of the game, but I feel like I still have a lot to learn in order to reach a better rank.&lt;/p&gt;

&lt;p&gt;But I'm not writing this post to tell my story with League of Legends. Instead, I want to write about how playing this game helped me approach life with a better view and understand life and people better. Below are some points:&lt;/p&gt;

&lt;p&gt;For me, League of Legends is like a real-life simulator; you will play with different types of people in different situations, and I can say that this game will make you become the worst version of yourself during the matches.&lt;/p&gt;

&lt;p&gt;A game like this will help you learn that winning or losing is about you and not other people. If you want to win more, then you should practice and do a better job.&lt;/p&gt;

&lt;p&gt;It also helped me realize that in a lot of cases, blaming other people when they make mistakes will only make things worse, especially if you are on the same team as those people. And by the same team I don't just mean League of Legends teams; I mean every other team in every other aspect of our lives.&lt;/p&gt;

&lt;p&gt;The idea that super-smart people get massive results with little or no work is a myth. To get good at anything, you must practice, focus, plan, and dedicate time to it.&lt;/p&gt;

&lt;p&gt;League of Legends also helped me learn more about my friends' personalities; how they think and play in the league mirrors how they think and live in real life. Here are the types of people I find both in League of Legends and in real life:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cool people who never get angry, always stay calm, and play for fun. Even on a 10-game losing streak, they will keep playing. These people also have no problem playing with lower-skilled players; they simply play alongside them, teach them about the game, and help them improve.&lt;/li&gt;
&lt;li&gt;The self-titled perfectionist, who gets angry and loses his mind if he or one of his teammates misplays: missing skill shots, dying in the early phases of the game, taking stupid fights with the enemy team, or missing a smite on a dragon or Baron. Once that happens, this type of player starts cursing, flaming everyone on the team, and asking them to forfeit because he doesn't want to play anymore. The problem with this mentality is expecting someone who just started playing, ranked Bronze or Silver, to know everything, make no mistakes, and play like a professional; to me, that is not realistic at all.&lt;/li&gt;
&lt;li&gt;Finally, there are delusional players who believe that losing or being stuck in a low rank is due to their teammates and the Riot ranking system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, I think that League and other competitive games are just like any other area of life: if you stick to a plan, practice, and dedicate a good amount of time to them, you will get better.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Data Transfer Objects (DTOs): A Comprehensive Guide</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Wed, 14 Jun 2023 23:46:26 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/data-transfer-objects-dtos-a-comprehensive-guide-366p</link>
      <guid>https://forem.com/ezpzdevelopement/data-transfer-objects-dtos-a-comprehensive-guide-366p</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F618%2F1%2AXJ2N8mjcQHVAaU3NCyrXSw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F618%2F1%2AXJ2N8mjcQHVAaU3NCyrXSw.png" alt="Thumbnail image for the post Data Transfer Objects (DTOs): A Comprehensive Guide"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Imagine 🤔✨ having a complex object or entity with countless properties and fields. But wait! In certain situations, not all of that data needs to be transferred or processed.&lt;/p&gt;

&lt;p&gt;Welcome ✨, brave adventurer, to the realm of Data Transfer Objects (DTOs) 📦! These magical constructs allow you to choose precisely the data you seek from the vast depths of complexity.&lt;/p&gt;

&lt;p&gt;Alright, let’s transition back to our world! So, what exactly are &lt;strong&gt;DTOs&lt;/strong&gt;? A DTO is a simple object that carries only the data needed for a particular exchange between layers or systems, with no business logic attached.&lt;/p&gt;

&lt;p&gt;For instance, let’s consider an e-commerce application. When displaying a list of products on a webpage, you may only need to transmit basic information like the product name, price, and image URL. In this scenario, instead of transferring the entire product object with all its associated data (e.g., reviews, descriptions, status), you can create a &lt;strong&gt;ProductDTO&lt;/strong&gt; that includes only the essential fields for displaying the product list efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Purpose of Data Transfer Objects (DTOs) in backend development&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;DTOs are used to encapsulate and transfer data between different layers of an application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For example, when data needs to be sent from the client (such as a web browser) to the server, or vice versa, DTOs provide a structured way to package and transfer that data. The client can create a DTO, populate it with the necessary data, and send it to the server. The server, in turn, can receive the DTO, extract the relevant information, and process it accordingly.&lt;/li&gt;
&lt;li&gt;By using DTOs, developers can define and control the data that is sent and received.&lt;/li&gt;
&lt;li&gt;Minimizing unnecessary data transfer (reducing overhead, minimizing bandwidth usage, and reducing latency, which leads to improved performance).&lt;/li&gt;
&lt;li&gt;Reducing the risk of exposing sensitive information.&lt;/li&gt;
&lt;li&gt;Enable the decoupling of the API contract from the internal representations, allowing for flexibility and evolution of the backend system without impacting external consumers.&lt;/li&gt;
&lt;/ul&gt;
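
&lt;p&gt;The client-to-server flow described above can be sketched in a few lines of JavaScript. This is a minimal illustration; the DTO and field names are invented for the example and not tied to any framework.&lt;/p&gt;

```javascript
// A hypothetical registration DTO: it exposes only the fields the server needs.
class RegistrationDTO {
  constructor(username, email) {
    this.username = username;
    this.email = email;
  }
}

// Client side: package the relevant data into a DTO and serialize it for transport.
function buildPayload(user) {
  const dto = new RegistrationDTO(user.username, user.email);
  return JSON.stringify(dto); // internal fields of `user` never enter the payload
}

// Server side: receive the payload and extract the relevant information.
function parsePayload(json) {
  const data = JSON.parse(json);
  return new RegistrationDTO(data.username, data.email);
}

const payload = buildPayload({ username: 'ada', email: 'ada@example.com', internalId: 42 });
const received = parsePayload(payload);
console.log(received.username); // 'ada'
```

&lt;p&gt;Note how internalId never crosses the boundary: the DTO defines the contract of exactly what is sent and received.&lt;/p&gt;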

&lt;h3&gt;
  
  
  &lt;strong&gt;Decoding DTOs: Unveiling Their Definition and Distinctions from Domain Entities and Database Models&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AQobRYcH3t2TY2hwDYuiZ_A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AQobRYcH3t2TY2hwDYuiZ_A.jpeg" alt="An image showing Decoding DTOs Unveiling Their Definition and Distinctions from Domain Entities and Database Models"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s break down the table:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Purpose&lt;/strong&gt; : DTOs serve the purpose of facilitating efficient data transfer and communication between different components or layers of an application. Data models represent the structure and relationships of the underlying database. Domain entities capture the business logic and behavior of the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Subset&lt;/strong&gt; : DTOs contain a subset of relevant data needed for specific use cases or communication scenarios. Data models encompass the complete set of data stored in the database. Domain entities also hold the complete set of data related to the business logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Abstraction&lt;/strong&gt; : DTOs provide a layer of abstraction, separating the internal data representations from external interactions. Data models and domain entities do not necessarily involve abstraction as they directly represent the underlying data or business logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immutability&lt;/strong&gt; : DTOs can be designed as immutable objects, ensuring that the transferred data remains consistent and unmodifiable. Data models and domain entities are typically mutable and subject to modifications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transformation&lt;/strong&gt; : DTOs involve data transformation or conversion to adapt the data from one format to another, facilitating interoperability between different components or systems. Data models and domain entities do not inherently involve transformation as they represent the original data structure and business logic.&lt;/li&gt;
&lt;/ol&gt;
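
&lt;p&gt;Point 4 above, immutability, can be sketched in JavaScript by freezing the DTO after construction. This is only an illustration of the idea; the class name is invented for the example.&lt;/p&gt;

```javascript
class OrderDTO {
  constructor(id, total) {
    this.id = id;
    this.total = total;
    Object.freeze(this); // make the transferred data unmodifiable
  }
}

const dto = new OrderDTO(7, 99.5);
try {
  dto.total = 0; // throws in strict mode, silently ignored otherwise
} catch (e) {
  // the write was rejected either way; the DTO is unchanged
}
console.log(Object.isFrozen(dto)); // true
console.log(dto.total); // 99.5
```

&lt;p&gt;Once frozen, later writes are rejected, so every consumer sees exactly the data the DTO carried at creation time.&lt;/p&gt;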

&lt;h3&gt;
  
  
  The Creation of DTOs
&lt;/h3&gt;

&lt;p&gt;The process of creating DTO classes or structures in a backend application involves the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Identify the Data Subset&lt;/strong&gt; : Determine the specific subset of data that needs to be transferred or communicated between different parts of the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design the DTO Class/Structure&lt;/strong&gt; : Create a DTO class or structure that represents the identified subset of data. The class/structure should contain properties that mirror the fields in the subset, with appropriate data types.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Map Data to DTO&lt;/strong&gt; : Implement a mapping mechanism to populate the DTO object with data from the source. This could involve manually assigning values from the source object or using mapping libraries/tools to automate the process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use DTO for Data Transfer&lt;/strong&gt; : Pass the DTO object between different components or layers of the application, transferring the necessary data in a standardized format.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is an example of code showing the utilization of DTOs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ProductDTO {
  constructor(id, name, price, description) {
    this.id = id;
    this.name = name;
    this.price = price;
    this.description = description;
  }

  getFormattedPrice() {
    return `$${this.price.toFixed(2)}`;
  }

  static fromProductEntity(productEntity) {
    return new ProductDTO(
      productEntity.id,
      productEntity.name,
      productEntity.price,
      productEntity.description );
  }
}

// Usage
const productEntity = {
  id: 1,
  name: 'iPhone ',
  price: 999,
  description: 'The latest iPhone model.',

};

const product = ProductDTO.fromProductEntity(productEntity);
console.log(product.getFormattedPrice());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the ProductDTO class represents a product with properties for &lt;strong&gt;id&lt;/strong&gt; , &lt;strong&gt;name&lt;/strong&gt; , &lt;strong&gt;price&lt;/strong&gt; , and &lt;strong&gt;description&lt;/strong&gt;. It has a constructor that takes these properties as arguments and assigns them to the corresponding class properties.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;getFormattedPrice()&lt;/strong&gt; method is defined within the class. It returns the formatted price of the product by using the &lt;strong&gt;toFixed()&lt;/strong&gt; method to round the &lt;strong&gt;price&lt;/strong&gt; property to 2 decimal places and prepending it with the dollar sign.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;fromProductEntity()&lt;/strong&gt; method is a static method that takes a &lt;strong&gt;productEntity&lt;/strong&gt; object as an argument. It creates a new instance of the &lt;strong&gt;ProductDTO&lt;/strong&gt; class and initializes its properties with the corresponding properties from the &lt;strong&gt;productEntity&lt;/strong&gt; object.&lt;/p&gt;

&lt;p&gt;In the usage example, a &lt;strong&gt;productEntity&lt;/strong&gt; object is created with sample values for &lt;strong&gt;id&lt;/strong&gt; , &lt;strong&gt;name&lt;/strong&gt; , &lt;strong&gt;price&lt;/strong&gt; , and &lt;strong&gt;description&lt;/strong&gt;. The &lt;strong&gt;fromProductEntity()&lt;/strong&gt; method is then called to create a new &lt;strong&gt;ProductDTO&lt;/strong&gt; instance based on the &lt;strong&gt;productEntity&lt;/strong&gt;. Finally, the &lt;strong&gt;getFormattedPrice()&lt;/strong&gt; method is called on the &lt;strong&gt;product&lt;/strong&gt; object to retrieve the formatted price and log it to the console.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Mapping Techniques for DTOs and Domain Entities/Models:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are several techniques for mapping; let’s explore some of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manual&lt;/strong&gt;  &lt;strong&gt;Mapping&lt;/strong&gt; : Writing custom code to map between DTOs and domain entities/models. Provides control and customization but can be time-consuming and error-prone for complex mappings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mapping Libraries like AutoMapper&lt;/strong&gt; : Utilizing libraries like AutoMapper for automated and convention-based mapping. Reduces manual mapping code, improves readability, and simplifies maintenance. Requires initial setup and configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Object-Relational Mapping (ORM) Frameworks&lt;/strong&gt;: Leveraging ORM frameworks such as Entity Framework (&lt;strong&gt;EF&lt;/strong&gt;) or &lt;em&gt;Hibernate&lt;/em&gt;. These frameworks handle mapping between domain entities and the database. They automate mapping, handle complex scenarios, and provide additional features like lazy loading and transaction management.&lt;/li&gt;
&lt;/ul&gt;
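&lt;p&gt;To make the first option concrete, here is a minimal sketch of manual mapping in plain JavaScript. The entity shape follows the ProductDTO example above; the toProductDTO helper and the internalCost field are invented for illustration:&lt;/p&gt;

```javascript
// Manual mapping: a hand-written function that copies (and derives)
// fields from a persistence-layer entity into a transfer-friendly object.
function toProductDTO(entity) {
  return {
    id: entity.id,
    name: entity.name,
    // Derived field: expose a display price instead of the raw number.
    displayPrice: '$' + entity.price.toFixed(2),
  };
}

const dto = toProductDTO({ id: 1, name: 'iPhone', price: 999, internalCost: 650 });
// Sensitive fields like internalCost are simply never copied over.
console.log(dto); // { id: 1, name: 'iPhone', displayPrice: '$999.00' }
```

&lt;p&gt;This is the trade-off described above: full control over every field, at the cost of writing and maintaining one such function per entity/DTO pair.&lt;/p&gt;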

&lt;h3&gt;
  
  
  Use Cases and Benefits (Node.js examples)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;User Registration&lt;/strong&gt; : When a user registers in your Node.js application, you can use a UserDTO (User Data Transfer Object) to encapsulate the user’s registration data, such as username, email, and password. The UserDTO allows you to validate the data, apply any necessary transformations or sanitization, and pass it to the appropriate services for further processing.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// UserDTO.js
class UserDTO {
  constructor(username, email, password) {
    this.username = username;
    this.email = email;
    this.password = password;
  }
}

// UserService.js
class UserService {
  registerUser(userDTO) {
    // Validate and process user registration data
    // Save user to the database
    // Perform any additional operations
  }
}

// Usage in Node.js app
const userDTO = new UserDTO('john', 'john@example.com', 'password123');
const userService = new UserService();
userService.registerUser(userDTO);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Responses:&lt;/strong&gt; When responding to API requests, you can use DTOs to structure and control the data sent back to the clients. For example, you may have a ProductDTO to represent the data returned when querying a product. The ProductDTO allows you to select specific fields, exclude sensitive information, and ensure consistent data formatting across different API endpoints.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ProductDTO.js
class ProductDTO {
  constructor(id, name, price) {
    this.id = id;
    this.name = name;
    this.price = price;
  }
}

// API Endpoint
app.get('/products/:id', (req, res) =&amp;gt; {
  // Fetch product data from the database
  const product = getProductById(req.params.id);

  // Map product data to ProductDTO
  const productDTO = new ProductDTO(product.id, product.name, product.price);

  // Send the productDTO as the API response
  res.json(productDTO);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Database Operations:&lt;/strong&gt; DTOs can be useful when interacting with the database in a Node.js app. For instance, you might use a DatabaseRecordDTO to represent a specific record retrieved from the database, allowing you to manipulate and transform the data before presenting it to the user or passing it to other parts of your application.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// DatabaseRecordDTO.js
class DatabaseRecordDTO {
  constructor(id, data) {
    this.id = id;
    this.data = data;
  }
}

// DatabaseService.js
class DatabaseService {
  fetchRecordById(id) {
    // Fetch record from the database
    const record = getRecordById(id);

    // Map and transform the record data to DatabaseRecordDTO
    const recordDTO = new DatabaseRecordDTO(record.id, record.data);

    // Perform further operations with the recordDTO
    // ...
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;DTO Best Practices&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In order to effectively work with DTOs and ensure their optimal usage, it is important to follow a set of best practices. These practices will help you leverage the full potential of DTOs and improve the overall quality of your code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplify DTOs:&lt;/strong&gt; Keep DTOs focused on their purpose and avoid unnecessary complexity. Limit the fields and business logic within DTOs to ensure they serve their primary role of transferring data between components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Clear Naming:&lt;/strong&gt; Choose descriptive names for DTOs and their properties to enhance code readability and understanding.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separate Validation:&lt;/strong&gt; Handle data validation separately from DTOs using validation libraries or dedicated validation classes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Purpose:&lt;/strong&gt; Provide documentation for DTOs, including their purpose, usage, and any validation rules or constraints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework Independence:&lt;/strong&gt; Ensure that DTOs remain independent of specific frameworks or libraries for improved reusability.&lt;/li&gt;
&lt;/ul&gt;
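&lt;p&gt;As an illustration of the “Separate Validation” practice, here is a minimal sketch in plain JavaScript: the UserDTO stays a plain data carrier, while a hypothetical validateUserDTO function (a name introduced here, not from the article’s code) holds the rules:&lt;/p&gt;

```javascript
// The DTO remains a plain data carrier with no business logic.
class UserDTO {
  constructor(username, email) {
    this.username = username;
    this.email = email;
  }
}

// Validation lives in a separate function (or a dedicated validator class),
// so the DTO itself stays simple and framework-independent.
// Conditions are written with `=== false` to keep them explicit.
function validateUserDTO(dto) {
  const errors = [];
  if (String(dto.username ?? '').length >= 3 === false) {
    errors.push('username must be at least 3 characters');
  }
  if (String(dto.email ?? '').includes('@') === false) {
    errors.push('email must be a valid address');
  }
  return errors;
}

console.log(validateUserDTO(new UserDTO('john', 'john@example.com'))); // []
console.log(validateUserDTO(new UserDTO('jo', 'not-an-email'))); // two error messages
```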

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, Data Transfer Objects (DTOs) play a crucial role in backend development by facilitating the transfer of data between different layers or components of an application. They allow for efficient and controlled data communication, validation, and transformation. By keeping DTOs focused, using meaningful names, and minimizing data, we can create simple and effective DTOs.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building a Serverless Application with NestJS and the Serverless Framework: A Monorepo Approach</title>
      <dc:creator>Ez Pz Developement</dc:creator>
      <pubDate>Sun, 11 Jun 2023 17:41:35 +0000</pubDate>
      <link>https://forem.com/ezpzdevelopement/building-a-serverless-application-with-nestjs-and-the-serverless-framework-a-monorepo-approach-5ap4</link>
      <guid>https://forem.com/ezpzdevelopement/building-a-serverless-application-with-nestjs-and-the-serverless-framework-a-monorepo-approach-5ap4</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F245j3xyegx1l10g7jwdy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F245j3xyegx1l10g7jwdy.png" alt="modern serverless application development environment. The image represents. It reflects two advanced technologies Nestjs and serverless framework."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Table of Contents:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Serverless Configuration and DynamoDB Setup for Two Services&lt;/li&gt;
&lt;li&gt;Deploying to AWS&lt;/li&gt;
&lt;li&gt;Running It Offline&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This blog post explores the utilization of NestJS features in a monorepo mode to build a serverless application using a combination of AWS and the Serverless Framework. The goal is to address the challenge of combining a serverless framework with a monorepo, specifically in the context of creating multiple NestJS apps within the same repository, where each app handles the logic for different endpoints and plays a controller-like role.&lt;/p&gt;

&lt;p&gt;To get started, make sure you have Node.js 16.x installed (you can use nvm for easy version management) along with the Serverless Framework and basic knowledge of how the Serverless Framework and AWS Lambda work, as well as familiarity with DynamoDB.&lt;/p&gt;

&lt;p&gt;To install the Serverless Framework globally, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g serverless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, install the NestJS CLI globally by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i -g @nestjs/cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more detailed information and a brief introduction on how to get started with these tools, refer to the following links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Serverless Framework: &lt;a href="https://www.serverless.com/framework/docs/getting-started" rel="noopener noreferrer"&gt;https://www.serverless.com/framework/docs/getting-started&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;NestJS: &lt;a href="https://docs.nestjs.com/first-steps" rel="noopener noreferrer"&gt;https://docs.nestjs.com/first-steps&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let’s dive into the coding part. Start by creating a standard NestJS application structure using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nest new users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will generate a folder structure similar to the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
src
  app.controller.ts
  app.module.ts
  app.service.ts
  main.ts
nest-cli.json
package.json
tsconfig.json
.eslintrc.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To convert our project into a monorepo structure, use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nest generate app items
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will convert the project structure into a monorepo structure that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apps
  users
    src
      app.controller.ts
      app.module.ts
      app.service.ts
      main.ts
    tsconfig.app.json
  items
    src
      app.controller.ts
      app.module.ts
      app.service.ts
      main.ts
    tsconfig.app.json
nest-cli.json
package.json
tsconfig.json
.eslintrc.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, we have set up the necessary NestJS structure for our project. Now, let’s introduce the powerful Serverless Compose file, which was recently introduced in version 3.15.0 of the Serverless Framework. The documentation &lt;a href="https://www.serverless.com/framework/docs/guides/compose" rel="noopener noreferrer"&gt;here&lt;/a&gt; provides more insights into how to compose Serverless Framework services.&lt;/p&gt;

&lt;p&gt;In the root of our project, create a new file named serverless-compose.yaml with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  items:
    path: apps/items
  users:
    path: apps/users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To deploy our services to AWS, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx serverless deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run any plugin associated with the users or items service, use the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx serverless items:&amp;lt;plugin-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, you can utilize the power of NestJS and the Serverless Framework in a monorepo structure to build your serverless application on AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless Configuration and DynamoDB Setup for Users and Items Services
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Serverless Configuration
&lt;/h4&gt;

&lt;p&gt;Let’s now dive into the “apps/users” directory and embark on the journey of crafting the serverless configuration for our users service. Below, you’ll find an upgraded version of the configuration file, accompanied by a more detailed technical explanation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#apps/users/serverless.yaml
service: users

plugins:
  - serverless-offline

custom:
    serverless-offline:
        httpPort: 3003
        lambdaPort: 3005

functions:
  getUser:
    handler: dist/main.getUser
    events:
      - http:
          method: GET
          path: /users/{id}
          request: 
            parameters: 
              paths: 
                id: true
  getUsers:
    handler: dist/main.getUsers
    events:
      - http:
          method: GET
          path: /users

provider:
  name: aws
  region: eu-west-3
  runtime: nodejs16.x
  stage: dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is a comprehensive breakdown of each section in the serverless configuration file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;service: This denotes the unique identifier for the service that will be created. In this instance, the service is aptly named "users".&lt;/li&gt;
&lt;li&gt;plugins: This section enumerates the plugins employed within the service. In this case, the "serverless-offline" plugin is utilized. The "serverless-offline" plugin emulates the behavior of AWS Lambda and API Gateway on your local machine, significantly expediting the development process. It initializes an HTTP server that manages the lifecycle of requests and calls your handlers.&lt;/li&gt;
&lt;li&gt;custom: This section comprises customized configuration options for the service. Specifically, the custom configuration pertains to the "serverless-offline" plugin. The properties "httpPort" and "lambdaPort" allow for the configuration of ports utilized when running the service in offline mode.&lt;/li&gt;
&lt;li&gt;functions: This section catalogs the individual functions to be created within the service. In this scenario, there are two functions: "getUser" and "getUsers".&lt;/li&gt;
&lt;li&gt;handler: This property specifies the file housing the code for a given function. In this instance, the function "getUser" is defined in the file "dist/main.getUser", while "getUsers" resides in "dist/main.getUsers".&lt;/li&gt;
&lt;li&gt;events: This section enumerates the events that trigger the execution of functions. Both functions in this case are triggered by HTTP requests. The "getUser" function is triggered by HTTP requests to the path "/users/{id}", wherein "id" represents a path parameter. Similarly, the "getUsers" function is triggered by HTTP requests to the path "/users".&lt;/li&gt;
&lt;li&gt;provider: This section specifies the cloud provider to be utilized for deploying the service. In this case, the chosen provider is AWS (Amazon Web Services).&lt;/li&gt;
&lt;li&gt;name: This property designates the cloud provider’s name. In this case, the provider is "aws".&lt;/li&gt;
&lt;li&gt;region: This property signifies the AWS region where the service will be deployed. In this instance, the chosen AWS region is "eu-west-3".&lt;/li&gt;
&lt;li&gt;runtime: This property defines the runtime environment for the service. In this case, the service will be executed within the Node.js 16.x runtime environment.&lt;/li&gt;
&lt;li&gt;stage: This property denotes the stage of the service. In this instance, the stage is designated as "dev".&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Users Service Code Implementation
&lt;/h4&gt;

&lt;p&gt;Let’s examine the implementation of the Users service code. First, we have the main.ts file located in the apps/users directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// apps/users/main.ts
import { HttpStatus } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { Callback, Context, Handler } from 'aws-lambda';
import { UsersModule } from './users.module';
import { UsersService } from './users.service';

export const getUser: Handler = async (
  event: any,
  _context: Context,
  _callback: Callback,
) =&amp;gt; {
  const appContext = await NestFactory.createApplicationContext(UsersModule);
  const appService = appContext.get(UsersService);
  const { id } = event.pathParameters;
  try {
    const res = await appService.getUser(id);
    return {
      statusCode: HttpStatus.OK,
      body: JSON.stringify(res),
    };
  } catch (error) {
    console.log(error);
    return {
      statusCode: HttpStatus.BAD_REQUEST,
      body: JSON.stringify(error.response ?? error.message),
    };
  }
};

export const getUsers: Handler = async (
  _event: any,
  _context: Context,
  _callback: Callback,
) =&amp;gt; {
  const appContext = await NestFactory.createApplicationContext(UsersModule);
  const appService = appContext.get(UsersService);
  try {
    const res = await appService.getUsers();
    return {
      statusCode: HttpStatus.OK,
      body: JSON.stringify(res),
    };
  } catch (error) {
    console.log(error);
    return {
      statusCode: HttpStatus.BAD_REQUEST,
      body: JSON.stringify(error.response ?? error.message),
    };
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the main.ts file, we define two AWS Lambda handlers, getUser and getUsers, which are responsible for handling the corresponding API endpoints. Here's a breakdown of the code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;getUser handler: This function receives an HTTP event containing the id parameter in the path. It creates an application context using NestFactory.createApplicationContext with the UsersModule. Then, it retrieves an instance of the UsersService from the application context. The getUser method of the service is called with the id, and the result is returned as an HTTP response with a status code of 200 (OK).&lt;/li&gt;
&lt;li&gt;getUsers handler: This function doesn't require any parameters. It follows a similar process to the getUser handler. It creates an application context, retrieves the UsersService, and calls the getUsers method. The resulting array of users is returned as an HTTP response with a status code of 200 (OK).&lt;/li&gt;
&lt;/ul&gt;
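&lt;p&gt;Note that both handlers call NestFactory.createApplicationContext on every invocation. One common optimization (not used in the code above) is to cache the context across warm invocations of the same Lambda container. The idea can be sketched in plain JavaScript, with a hypothetical buildContext function standing in for the Nest factory call:&lt;/p&gt;

```javascript
// Memoize an async factory so the expensive bootstrap runs at most once
// per warm Lambda container; later invocations reuse the cached promise.
let contextPromise = null;

async function buildContext() {
  // Stand-in for NestFactory.createApplicationContext(UsersModule).
  return { createdAt: Date.now() };
}

function getAppContext() {
  if (!contextPromise) {
    contextPromise = buildContext();
  }
  return contextPromise;
}

// Both calls resolve to the exact same context object.
async function demo() {
  const a = await getAppContext();
  const b = await getAppContext();
  console.log(a === b); // true
}
demo();
```

&lt;p&gt;Module-scope state like contextPromise survives between invocations while the container stays warm, which is what makes this pattern effective for serverless NestJS apps.&lt;/p&gt;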

&lt;p&gt;Next, let’s take a look at the UsersService implementation in the users.service.ts file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// apps/users/users.service.ts
import { Injectable, InternalServerErrorException } from '@nestjs/common';
import { DynamoDB } from 'aws-sdk';

const db = new DynamoDB.DocumentClient({
  convertEmptyValues: true,
  paramValidation: true,
});

@Injectable()
export class UsersService {
  async getUser(id: string) {
    const res = await db
      .get({
        TableName: process.env.DYNAMODB_TABLE,
        Key: { id },
        AttributesToGet: ['id', 'email', 'firstName', 'lastName'],
      })
      .promise();
    if (res.$response.error || !res.Item) {
      throw new InternalServerErrorException(res.$response.error);
    }
    return res.Item;
  }
  async getUsers() {
    const res = await db
      .scan({
        TableName: process.env.DYNAMODB_TABLE,
        AttributesToGet: ['id', 'email', 'firstName', 'lastName'],
      })
      .promise();
    if (res.$response.error) {
      throw new InternalServerErrorException(res.$response.error.message);
    }
    return res.Items;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The UsersService class provides the business logic for handling user-related operations. Here's a breakdown of the code:&lt;/p&gt;

&lt;p&gt;getUser(id: string): This method retrieves a user from the DynamoDB table based on the provided id. It uses the get method of the DynamoDB.DocumentClient to fetch the user's data from the table. If there is an error or the item is not found, an InternalServerErrorException is thrown. Otherwise, the retrieved user is returned.&lt;/p&gt;

&lt;p&gt;getUsers(): This method retrieves all users from the DynamoDB table. It uses the scan method of the DynamoDB.DocumentClient to perform a scan operation on the table. If there is an error during the scan operation, an InternalServerErrorException is thrown. Otherwise, the array of retrieved users is returned.&lt;/p&gt;

&lt;p&gt;Finally, let’s look at the UsersModule defined in the users.module.ts file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// apps/users/users.module.ts
import { Module } from '@nestjs/common';
import { UsersService } from './users.service';
@Module({
  imports: [],
  providers: [UsersService],
})
export class UsersModule {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The UsersModule is a basic NestJS module that imports no other modules (imports: []) and provides the UsersService as a provider (providers: [UsersService]).&lt;/p&gt;

&lt;p&gt;In summary, the code provided demonstrates the implementation of the Users service in the NestJS monorepo. It includes Lambda handlers for handling API requests, a UsersService class for performing user-related operations using DynamoDB, and a UsersModule that defines the service as a provider.&lt;/p&gt;

&lt;p&gt;In addition to that, a TypeScript configuration file needs to be added that provides compiler options for the Users service. It extends the root tsconfig.json file located in the monorepo root directory and provides specific compiler options and file inclusion/exclusion rules for the Users service within the NestJS monorepo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//apps/users/tsconfig.app.json
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "declaration": false,
    "outDir": "./dist",
  },
  "include": ["src/**/*", "src/*"],
  "exclude": ["node_modules", "dist", ".build", "test", "**/*spec.ts"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  DynamoDB Setup
&lt;/h4&gt;

&lt;p&gt;As you can see, we are using DynamoDB as the main database for our application, and it needs some setup in the Serverless Framework configuration to work properly. Below is how our file looks once we add DynamoDB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#apps/users/serverless.yaml
service: users

plugins:
  - serverless-offline

custom:
    serverless-offline:
        httpPort: 3003
        lambdaPort: 3005

functions: 
  getUser:
    handler: dist/main.getUser
    events:
      - http:
          method: GET
          path: /users/{id}
          request: 
            parameters: 
              paths: 
                id: true
  getUsers:
    handler: dist/main.getUsers
    events:
      - http:
          method: GET
          path: /users

provider:
  name: aws
  region: eu-west-3
  runtime: nodejs16.x
  stage: dev
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan  
        - dynamodb:GetItem
        - dynamodb:UpdateItem
      Resource: 
        Fn::Join:
          - ''
          - - "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/"
            - ${self:provider.environment.DYNAMODB_TABLE}

resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DYNAMODB_TABLE}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So as we can see our serverless file has been updated to add the following things:&lt;/p&gt;

&lt;p&gt;IAM permissions: The IAM role for the users service has been updated to allow the following DynamoDB operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Query&lt;/li&gt;
&lt;li&gt;Scan&lt;/li&gt;
&lt;li&gt;GetItem&lt;/li&gt;
&lt;li&gt;UpdateItem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DynamoDB table: The DynamoDB table is used to store data for the users service. The table name is a concatenation of the service name and the stage. This allows you to have multiple tables for the same service, each with a different stage.&lt;/p&gt;

&lt;p&gt;Resources: A new DynamoDB table resource has been added to the users service. This resource defines the DynamoDB table that will be created for the service.&lt;/p&gt;
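&lt;p&gt;For example, with the service named users and the default dev stage, the ${self:service}-${opt:stage, self:provider.stage} expression resolves to "users-dev". The resolution rule can be sketched in JavaScript (resolveTableName is a hypothetical helper, not part of the Serverless Framework):&lt;/p&gt;

```javascript
// Sketch of how the Serverless variable for the table name resolves:
// ${self:service}-${opt:stage, self:provider.stage}
function resolveTableName(service, cliStage, providerStage) {
  // The --stage CLI option wins; otherwise fall back to provider.stage.
  return `${service}-${cliStage ?? providerStage}`;
}

console.log(resolveTableName('users', undefined, 'dev')); // 'users-dev'
console.log(resolveTableName('users', 'prod', 'dev'));    // 'users-prod'
```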

&lt;h4&gt;
  
  
  Items Service Implementation
&lt;/h4&gt;

&lt;p&gt;Now let’s focus on the implementation of the Items service. We’ll take a closer look at the files that describe the Items service, its model, main file, and serverless.yaml. We’ll also cover the setup of DynamoDB and the usage of the “serverless-offline” plugin.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#apps/items/serverless.yaml
service: items

plugins:
  - serverless-offline

custom:
    serverless-offline:
        httpPort: 3006
        lambdaPort: 3008

functions:
  createItem:
    handler: dist/main.createItem
    events:
      - http:
          method: POST
          path: /items
  getItem:
    handler: dist/main.getItem
    events:
      - http:
          method: GET
          path: /items/{id}
          request: 
            parameters: 
              paths: 
                id: true
  getItems:
    handler: dist/main.getItems
    events:
      - http:
          method: GET
          path: /items

provider:
  name: aws
  region: eu-west-3
  runtime: nodejs16.x
  stage: dev
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}

  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Scan  
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource: 
        Fn::Join:
          - ''
          - - "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/"
            - ${self:provider.environment.DYNAMODB_TABLE}

resources:
  Resources:
    ItemsTable: 
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DYNAMODB_TABLE}  

//apps/items/items.module.ts
import { Module } from '@nestjs/common';
import { ItemsService } from './items.service';

@Module({
  imports: [],
  providers: [ItemsService],
})
export class ItemsModule {}

//app/items/items.service.ts
import { Injectable, InternalServerErrorException } from '@nestjs/common';
import { v1 } from 'uuid';
import { DynamoDB } from 'aws-sdk';

const db = new DynamoDB.DocumentClient();

@Injectable()
export class ItemsService {
  async createItem(item: any) {
    const { title, description } = item;
    const createdOn = new Date().getTime();

    const data = {
      TableName: process.env.DYNAMODB_TABLE,
      Item: {
        id: v1(),
        title,
        description,
        createdOn,
      },
    };

    try {
      await db.put(data).promise();
      return item;
    } catch (error) {
      throw new InternalServerErrorException(error.message);
    }
  }

  async getItem(id: string) {
    const params = {
      TableName: process.env.DYNAMODB_TABLE,
      Key: { id },
    };

    try {
      const result = await db.get(params).promise();
      return result.Item;
    } catch (error) {
      throw new InternalServerErrorException(error.message);
    }
  }

  async getItems() {
    const params = {
      TableName: process.env.DYNAMODB_TABLE,
    };

    try {
      const result = await db.scan(params).promise();
      return result.Items;
    } catch (error) {
      throw new InternalServerErrorException(error.message);
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tsconfig.app.json file for the Items service is the same as the one used for the Users service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying to AWS
&lt;/h3&gt;

&lt;p&gt;To deploy our serverless application to AWS using the Serverless Framework, we can start by adding the following scripts to our package.json file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"scripts": {
  "build:items": "nest build --tsc items",
  "build:users": "nest build --tsc users",
  "build:all": "npm run build:users &amp;amp;&amp;amp; npm run build:items",
  "deploy": "npm install &amp;amp;&amp;amp; npm run build:all &amp;amp;&amp;amp; npx serverless deploy",
  ....
 },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These scripts provide a convenient way to build and deploy the application, allowing you to easily compile the TypeScript code and package it for deployment using the Serverless framework.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"build:items": "nest build --tsc items": This script builds the items module using the NestJS CLI. It compiles the TypeScript code in the items module and generates the corresponding JavaScript files.&lt;/li&gt;
&lt;li&gt;"build:users": "nest build --tsc users": This script builds the users module using the NestJS CLI. It compiles the TypeScript code in the users module and generates the corresponding JavaScript files.&lt;/li&gt;
&lt;li&gt;"build:all": "npm run build:users &amp;amp;&amp;amp; npm run build:items": This is a convenience script that runs the build scripts for both modules (users and items) sequentially. It ensures that all the modules are built before proceeding to the deployment.&lt;/li&gt;
&lt;li&gt;"deploy": "npm install &amp;amp;&amp;amp; npm run build:all &amp;amp;&amp;amp; npx serverless deploy": This script handles the deployment process. It first installs the required dependencies (npm install), builds all the modules (npm run build:all), and then deploys the application using the Serverless Framework (npx serverless deploy).&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Adding the necessary packages and deploying to AWS:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package:
  exclude:
    - ../../node_modules/**
    - ./src/**
    - ./test/**
  include:
    - '../../node_modules/@nestjs/common/**'
    - '../../node_modules/@nestjs/core/**'
    - '../../node_modules/@nestjs/schematics/**'
    - '../../node_modules/@nestjs/testing/**'
    - '../../node_modules/tslib/**'
    - '../../node_modules/reflect-metadata/**'
    - '../../node_modules/uid/**'
    - '../../node_modules/rxjs/**'
    - '../../node_modules/iterare/**'
    - '../../node_modules/@nuxtjs/**'
    - '../../node_modules/fast-safe-stringify/**'
    - '../../node_modules/path-to-regexp/**'
    - '../../node_modules/cache-manager/**'
    - '../../node_modules/class-transformer/**'
    - '../../node_modules/class-validator/**'
    - '../../node_modules/@angular-devkit/**'
    - '../../node_modules/jsonc-parser/**'
    - '../../node_modules/pluralize/**'
    - '../../node_modules/body-parser/**'
    - '../../node_modules/cors/**'
    - '../../node_modules/express/**'
    - '../../node_modules/multer/**'
    - '../../node_modules/@vendia/**'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the provided code snippet, the package key is added to the serverless.yml files of both the items and users services. This key configures how the Lambda functions are packaged, determining which files and directories are included in or excluded from the deployment package.&lt;/p&gt;

&lt;p&gt;The exclude property within the package configuration specifies patterns for files and directories to leave out of the deployment package. In this case, the patterns ../../node_modules/**, ./src/**, and ./test/** are used. The ../../node_modules/** pattern excludes everything within the node_modules directory, ensuring that dependencies installed via npm or yarn are not included wholesale in the deployment package. The ./src/** and ./test/** patterns exclude the TypeScript source code and test files, respectively, as they are not required for the deployed application to function properly.&lt;/p&gt;

&lt;p&gt;On the other hand, the include property lists specific packages that are necessary for the Lambda functions to execute correctly. The packages specified in the include property are included in the deployment package, ensuring that the functions have access to their required dependencies. In the provided code snippet, various NestJS packages (@nestjs/common, @nestjs/core, @nestjs/schematics, @nestjs/testing), as well as other essential packages (tslib, reflect-metadata, uid, rxjs, iterare, @nuxtjs, fast-safe-stringify, path-to-regexp, cache-manager, class-transformer, class-validator, @angular-devkit, jsonc-parser, pluralize, body-parser, cors, express, multer, @vendia) are included.&lt;/p&gt;

&lt;p&gt;By configuring the package property in this way, we optimize the deployment package size by excluding unnecessary files and only including the required packages. This results in faster deployment times and reduces the memory footprint of the Lambda functions.&lt;/p&gt;

&lt;p&gt;Once the package configuration is in place, running the command npm run deploy in the root directory triggers the deployment process. The Serverless framework packages the Lambda functions along with the specified packages and deploys them to AWS.&lt;/p&gt;

&lt;p&gt;Overall, the package configuration helps address two important issues: including the required packages in the deployment and excluding unnecessary files to optimize the package size.&lt;/p&gt;
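
&lt;p&gt;For orientation, the package key sits at the top level of each service's serverless.yml, alongside keys like provider and functions. The sketch below illustrates the placement only; the service name, runtime, handler path, and event are placeholders, not values taken from this project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: items
provider:
  name: aws
  runtime: nodejs14.x
package:
  exclude:
    - ../../node_modules/**
    - ./src/**
    - ./test/**
  include:
    - '../../node_modules/@nestjs/common/**'
    - '../../node_modules/@nestjs/core/**'
functions:
  main:
    handler: dist/main.handler
    events:
      - http: ANY /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that newer versions of the Serverless Framework deprecate the separate exclude and include lists in favor of a single package.patterns list, where exclusions are written with a leading !.&lt;/p&gt;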

&lt;h3&gt;
  
  
  Running it offline
&lt;/h3&gt;

&lt;p&gt;Before we dive into the technical details, let’s address a fundamental question: why should we run our serverless functions offline in the first place? By doing so, we unlock several advantages that significantly enhance our development process.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ease of Development:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Running serverless functions offline provides developers with the freedom to work on their code locally, without the need for a live serverless infrastructure, resulting in a faster and more efficient development workflow.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Cost Savings: Optimizing Your Budget&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Serverless functions are billed based on their usage, such as the number of invocations and execution duration. Deploying and testing functions in a live environment during development can lead to unexpected costs.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Rapid Iterations: Accelerating Your Progress&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Offline execution empowers developers to iterate rapidly and test their serverless functions without any deployment delays: they can make code changes, run functions locally, and observe immediate results.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Isolation and Debugging: Mastering the Art of Troubleshooting&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Running serverless functions offline provides a controlled and isolated environment for debugging. This isolation simplifies the troubleshooting process, enabling us to identify and fix issues without the complexities introduced by cloud-based environments.&lt;/p&gt;

&lt;p&gt;Now, let’s explore how we can implement offline execution for our serverless functions using the Serverless Framework. Going back to our package.json file, we can add the following scripts to enable offline execution for specific services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"scripts": {
  "start:dev:api": "npm run build:api &amp;amp;&amp;amp; serverless api:offline",
  "start:dev:items": "npm run build:items &amp;amp;&amp;amp; serverless items:offline",
  "start:dev:users": "npm run build:users &amp;amp;&amp;amp; serverless users:offline",
  "start:dev": "concurrently \"npm run start:dev:items\" \"npm run start:dev:users\"",
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The added scripts in the package.json file allow us to run the items and users services offline for development and testing purposes. Here's an explanation of each script:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"start:dev:items": "npm run build:items &amp;amp;&amp;amp; serverless items:offline" azeazthis script builds the items service by running npm run build:items, which compiles the TypeScript code into JavaScript, after the build process, it starts the items service in offline mode using the serverless items:offline command, running the items service offline allows us to test and work with it locally without the need for a live serverless infrastructure.&lt;/li&gt;
&lt;li&gt;"start:dev:users": "npm run build:users &amp;amp;&amp;amp; serverless users:offline" This script builds the users service by running npm run build:users, which compiles the TypeScript code into JavaScript, after the build process, it starts the users service in offline mode using the serverless users:offline command, running the users service offline enables us to test and interact with it locally without deploying it to a production environment.&lt;/li&gt;
&lt;li&gt;"start:dev": "concurrently \"npm run start:dev:items\" \"npm run start:dev:users\"" This script uses the concurrently package to run multiple scripts concurrently, It starts the items and users services simultaneously in offline mode running both services together allows us to test the integration between the items and users services locally, mimicking the behavior of a real production environment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By running these scripts, we can easily start the items and users services locally and test them in an offline mode. This setup enables faster development iterations and facilitates debugging without the need for a fully deployed infrastructure. It provides a convenient way to work on and test specific services independently or together as part of a larger system.&lt;/p&gt;
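
&lt;p&gt;One prerequisite worth mentioning: offline execution relies on the serverless-offline plugin being registered in each service's serverless.yml. A minimal fragment is shown below; the port value is an assumption (serverless-offline listens on port 3000 by default, so services run concurrently would each need a distinct port):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;plugins:
  - serverless-offline

custom:
  serverless-offline:
    httpPort: 3001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;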

&lt;p&gt;In this post, we have built our application using the Serverless Framework combined with NestJS. There are still plenty of features in both of these tools to discover in upcoming posts, where we will also fix some mistakes and improve our app by adding more features.&lt;/p&gt;

&lt;h3&gt;
  
  
  References:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.nestjs.com/first-steps" rel="noopener noreferrer"&gt;NestJS Documentation - First steps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nestjs.com/standalone-applications" rel="noopener noreferrer"&gt;NestJS Documentation - Standalone applications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nestjs.com/cli/monorepo" rel="noopener noreferrer"&gt;NestJS Documentation - CLI monorepo mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nestjs.com/faq/serverless" rel="noopener noreferrer"&gt;NestJS Documentation - Serverless&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/framework/docs/providers/aws/guide/packaging" rel="noopener noreferrer"&gt;Serverless Framework - Packaging&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/framework/docs/tutorial" rel="noopener noreferrer"&gt;Tutorial: Your First Serverless Framework Project&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/framework/docs/guides/compose" rel="noopener noreferrer"&gt;Serverless Framework - Composing services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/guides/dynamodb" rel="noopener noreferrer"&gt;Serverless Framework - DynamoDB guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.udemy.com/course/aws-lambda-serverless/" rel="noopener noreferrer"&gt;AWS Lambda &amp;amp; Serverless course on Udemy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
