<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Stanislav Ivanov</title>
    <description>The latest articles on Forem by Stanislav Ivanov (@stanivanov19).</description>
    <link>https://forem.com/stanivanov19</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F906280%2Fd3c42198-29b2-45ca-84ac-8fa10392e974.jpg</url>
      <title>Forem: Stanislav Ivanov</title>
      <link>https://forem.com/stanivanov19</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/stanivanov19"/>
    <language>en</language>
    <item>
      <title>4 Tips for better permissions management in AWS IAM</title>
      <dc:creator>Stanislav Ivanov</dc:creator>
      <pubDate>Thu, 06 Jul 2023 04:43:13 +0000</pubDate>
      <link>https://forem.com/stanivanov19/4-tips-for-better-permissions-management-in-aws-iam-49dg</link>
      <guid>https://forem.com/stanivanov19/4-tips-for-better-permissions-management-in-aws-iam-49dg</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--360-Mc1i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x2hzq8ogmdycw4a4h8sl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--360-Mc1i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x2hzq8ogmdycw4a4h8sl.png" alt="AWS IAM Header" width="680" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Managing permissions in the cloud can be a daunting task. The abundance of options and menus in AWS IAM (Identity and Access Management) often leads to confusion and, at times, frustration. However, getting permission management right is crucial for maintaining a secure and efficient cloud infrastructure. In this article, we will explore some best practices and strategies to simplify and optimize permission management in AWS IAM.&lt;/p&gt;

&lt;h2&gt;Tips&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Embrace Roles for Software Authorization:
When authorizing your software to communicate with AWS services through the AWS SDK, it is recommended to use Roles instead of individual user credentials. By creating dedicated roles for your applications, you can ensure secure communication and minimize the risk associated with exposing user credentials. Roles serve as a means to authorize your software with AWS, enabling seamless and secure interactions.&lt;/li&gt;
&lt;li&gt;Secure S3 Buckets with Custom Resource-Based Policies:
By default, it is advisable to keep all your S3 Buckets closed to public access unless your specific use case requires public access. To manage access effectively, employ custom resource-based policies. These policies provide granular control over who can access your S3 Buckets and how they can interact with the stored objects. Implementing resource-based policies helps maintain data privacy and prevent unauthorized access.&lt;/li&gt;
&lt;li&gt;Granular Permissions: Role Separation and Temporary Assumption:
Granting users broad sets of permissions, beyond what they need for their regular duties, can introduce unnecessary security risks. Instead, opt for role separation by assigning specific permissions to users based on their tasks. By separating permissions into roles, you can ensure that users only have access to the necessary resources and actions. Additionally, consider allowing users to temporarily assume roles when performing specific tasks, further limiting their permissions and enhancing security.&lt;/li&gt;
&lt;li&gt;Leverage User Groups for Streamlined Permission Management:
To simplify permission management and maintain consistency across your organization, make extensive use of user groups. Group users based on their roles and responsibilities, and assign permissions to the groups rather than individual users. This approach reduces administrative overhead, ensures uniform access control policies, and allows for efficient onboarding and offboarding of users. By leveraging user groups, you establish a scalable and manageable permission structure.&lt;/li&gt;
&lt;/ol&gt;
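&lt;p&gt;As a quick illustration of tip 2, here is a minimal sketch of a resource-based bucket policy, built as a Python dictionary and printed as JSON. The bucket name, account ID, and role name are hypothetical placeholders, not values from a real account.&lt;/p&gt;

```python
import json

# Sketch of a resource-based S3 bucket policy: the bucket stays closed to the
# public, and only one dedicated application role may read objects.
# BUCKET and READER_ROLE_ARN are illustrative placeholders.
BUCKET = "example-app-data"
READER_ROLE_ARN = "arn:aws:iam::123456789012:role/AppReaderRole"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppReaderOnly",
            "Effect": "Allow",
            "Principal": {"AWS": READER_ROLE_ARN},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

&lt;p&gt;A policy like this would be attached directly to the bucket; anyone outside the listed principal is implicitly denied.&lt;/p&gt;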

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Effectively managing permissions in AWS IAM is essential for maintaining a secure and efficient cloud environment. By following these best practices, you can simplify the permission management process, reduce security risks, and ensure the right level of access for your users. Embrace roles for software authorization, secure S3 Buckets with resource-based policies, adopt granular permissions through role separation and temporary assumption, and leverage user groups for streamlined permission management. By implementing these strategies, you can achieve optimal permission management in the cloud and establish a robust security posture for your AWS infrastructure.&lt;/p&gt;

&lt;p&gt;Remember, permission management is an ongoing process that requires regular reviews and updates to adapt to changing requirements and mitigate evolving security risks. Stay proactive, stay vigilant, and continue to refine your permission management practices to protect your cloud resources effectively.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>permission</category>
      <category>tip</category>
    </item>
    <item>
      <title>CQRS: Separating Responsibilities for Efficient Data Management</title>
      <dc:creator>Stanislav Ivanov</dc:creator>
      <pubDate>Mon, 26 Jun 2023 04:50:46 +0000</pubDate>
      <link>https://forem.com/stanivanov19/cqrs-separating-responsibilities-for-efficient-data-management-4dfg</link>
      <guid>https://forem.com/stanivanov19/cqrs-separating-responsibilities-for-efficient-data-management-4dfg</guid>
      <description>&lt;h3&gt;Breakdown&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Overview&lt;/li&gt;
&lt;li&gt;What is CQRS?

&lt;ul&gt;
&lt;li&gt;Definition and main components&lt;/li&gt;
&lt;li&gt;Separation of concerns&lt;/li&gt;
&lt;li&gt;Difference to the typical CRUD model&lt;/li&gt;
&lt;li&gt;Scalability benefits&lt;/li&gt;
&lt;li&gt;Added complexity to the overall development&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Main components

&lt;ul&gt;
&lt;li&gt;Commands&lt;/li&gt;
&lt;li&gt;Queries&lt;/li&gt;
&lt;li&gt;Events&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Data storage

&lt;ul&gt;
&lt;li&gt;Data separation and projections&lt;/li&gt;
&lt;li&gt;Data store flexibility and diversity of options&lt;/li&gt;
&lt;li&gt;Synchronization and eventual consistency&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Use cases and the problems it solves&lt;/li&gt;
&lt;li&gt;Some notes on event-sourcing&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Overview&lt;/h3&gt;

&lt;p&gt;In the following article, we will delve into the concept of &lt;strong&gt;Command and Query Responsibility Segregation&lt;/strong&gt; (CQRS): why you might consider it, what level of complexity it can bring, and when adopting this architecture is completely justified. Additionally, I will touch on its components and the different combinations you will typically see it implemented with, like event-driven architectures or even event-sourcing, although the article will not go into much detail on those side topics.&lt;/p&gt;

&lt;h3&gt;What is CQRS?&lt;/h3&gt;

&lt;p&gt;Command and Query Responsibility Segregation (CQRS) is a software architecture pattern that separates the responsibilities of read and write operations in a system. Instead of using the same models for both reading and writing data, CQRS introduces a clear distinction between the two operations.&lt;/p&gt;

&lt;p&gt;At its core, CQRS recognizes that read and write operations have different requirements and can benefit from different optimizations. By segregating these responsibilities, CQRS enables developers to design systems that are more efficient, scalable, and maintainable.&lt;/p&gt;

&lt;p&gt;In a traditional CRUD (Create, Read, Update, Delete) system, the same models and data access layers are often used for both reading and writing data. However, as systems grow in complexity, this approach can lead to performance bottlenecks and limited scalability. CQRS addresses these issues by introducing separate models and data access layers for read and write operations.&lt;/p&gt;

&lt;p&gt;Implementing CQRS introduces additional complexity compared to traditional approaches. Developers need to design and maintain separate models, data access layers, and possibly even different storage mechanisms for read and write operations. However, this added complexity can be justified in scenarios where the benefits of scalability, performance, and maintainability outweigh the costs.&lt;/p&gt;

&lt;h3&gt;Main components&lt;/h3&gt;

&lt;p&gt;The key idea behind CQRS is that write operations, also known as commands, focus on changing the system's state. These commands typically involve validation, business logic, and updating the underlying data store. On the other hand, read operations, known as queries, are concerned with retrieving data without modifying the system's state. Queries are performed on materialized views, which are precomputed representations of the data optimized for specific read patterns.&lt;/p&gt;

&lt;h3&gt;Commands&lt;/h3&gt;

&lt;p&gt;Commands represent the intent to perform a specific action or change within the system. They encapsulate the information and parameters needed to execute a command-oriented operation, and they are responsible for initiating changes to the data or triggering the relevant business logic. A command performs only one change to the system state; after the command is received and recorded by the command handler, the system can trigger different side effects based on the type and essence of the command. For example, check the chart below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--moCGVDMa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5mlmtpo6m2uhzbs5e4pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--moCGVDMa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5mlmtpo6m2uhzbs5e4pv.png" alt="Commands chain structure" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;
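&lt;p&gt;The command side described above can be sketched in a few lines of Python. This is a toy, in-memory sketch: the command type, handler, and data store names are all illustrative, not part of any framework.&lt;/p&gt;

```python
from dataclasses import dataclass
from typing import Callable

# A command carries the intent plus the data needed for exactly one state
# change; the handler validates it, applies the change, and can then trigger
# side effects (e.g. publishing an event). All names are illustrative.

@dataclass
class PlaceOrder:               # the command: intent + parameters
    order_id: str
    amount: float

class CommandHandler:
    def __init__(self):
        self.store: dict[str, float] = {}       # the write-side data store
        self.side_effects: list[Callable] = []  # e.g. event publishers

    def handle(self, cmd: PlaceOrder) -> None:
        if cmd.amount <= 0:                     # validation / business rule
            raise ValueError("amount must be positive")
        self.store[cmd.order_id] = cmd.amount   # the single state change
        for effect in self.side_effects:        # optional side effects
            effect(cmd)

handler = CommandHandler()
handler.handle(PlaceOrder("o-1", 42.0))
```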

&lt;h3&gt;Queries&lt;/h3&gt;

&lt;p&gt;Unlike commands, which focus on write operations, queries are concerned with retrieving data based on specific criteria or conditions. They represent the questions we ask the system to obtain relevant information. For example, querying for a list of products that meet certain criteria or retrieving user details based on a specific identifier. Queries are used to fetch information or perform read operations on the data model.&lt;/p&gt;

&lt;p&gt;Queries are typically executed by query handlers, which are responsible for retrieving the requested data from appropriate data stores, such as databases or caches. The query handler receives the query, performs the necessary operations, and returns the result set or data object to the caller.&lt;/p&gt;

&lt;p&gt;Additionally, queries should not be used to perform state changes or trigger any side effects in the system.&lt;/p&gt;
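&lt;p&gt;Mirroring the command sketch, here is a minimal query-side sketch: the query object carries only criteria, and the handler reads from an in-memory read model without modifying anything. The names and data are illustrative.&lt;/p&gt;

```python
from dataclasses import dataclass

# A query describes what we ask the system for; the query handler retrieves
# the answer from a denormalized, read-optimized view and never mutates state.

@dataclass(frozen=True)
class ProductsByCategory:        # the query: criteria only, no mutation
    category: str

class QueryHandler:
    def __init__(self, read_model: list[dict]):
        self.read_model = read_model   # a precomputed, read-optimized view

    def handle(self, query: ProductsByCategory) -> list[dict]:
        # purely filters and returns data; no writes, no side effects
        return [p for p in self.read_model if p["category"] == query.category]

view = [
    {"name": "keyboard", "category": "peripherals"},
    {"name": "ssd", "category": "storage"},
]
result = QueryHandler(view).handle(ProductsByCategory("storage"))
# result holds only the "storage" products
```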

&lt;p&gt;Because the purpose of CQRS is to place a “wall” between the two sides, we can be even more flexible and design the system so that commands are sent to one application and queries to another, if the need arises.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hqJO4jXY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbtqesc438n3kn1nu1xz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hqJO4jXY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbtqesc438n3kn1nu1xz.png" alt="Query the same data store as writing to" width="688" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PKSun5wr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1afw76tgml0igpixl0it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PKSun5wr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1afw76tgml0igpixl0it.png" alt="Query different database" width="678" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By separating the read and write responsibilities, CQRS allows each side to be optimized independently. This means that read models can be denormalized, tailored for specific use cases, and optimized for efficient querying. On the write side, models can focus solely on capturing the intent of the operation and enforcing business rules.&lt;/p&gt;

&lt;p&gt;One of the main advantages of CQRS is its scalability options. Since read and write operations are decoupled, it becomes easier to scale each side independently based on the specific demands of the system. For example, read models can be replicated or distributed across multiple nodes to handle high read loads, while write models can be optimized for write-intensive operations.&lt;/p&gt;

&lt;p&gt;CQRS is not a one-size-fits-all solution and should be considered in cases where a clear distinction between read and write responsibilities is necessary. It is particularly beneficial in scenarios where the read and write patterns differ significantly, such as in business intelligence or reporting systems. CQRS also shines in complex domains where the separation of concerns improves code readability and maintainability.&lt;/p&gt;

&lt;h3&gt;Events&lt;/h3&gt;

&lt;p&gt;When adopting CQRS, it's common to leverage event-driven messaging to handle communication between different components. Events are used to propagate changes made by write operations and can be processed asynchronously by various consumers. This allows for loose coupling, scalability, and the potential for eventual consistency across the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FzmatYNe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7zslsiyjv3i148x3f8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FzmatYNe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7zslsiyjv3i148x3f8e.png" alt="Event-driven side effects" width="796" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Data storage&lt;/h3&gt;

&lt;p&gt;You have several options when it comes to modeling your data management. You can implement a single data store, so that the commands write to the same store the queries read from. This has real added value: it ensures data consistency and may reduce cost.&lt;/p&gt;

&lt;p&gt;There is also the option to break down how data is written and read. The commands can write data to a table in one schema, for example, and with projection views we can reconstruct this data in other tables elsewhere. Of course, we can push past these limits and read from a completely different data store than the one we are writing to. Imagine a use case where we write all commands and their results to a relational database and, with the addition of some synchronization, read the projection of this data from a NoSQL database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EK_S_WHR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/of6dsyyuuq1tig6i29iy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EK_S_WHR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/of6dsyyuuq1tig6i29iy.png" alt="Different data stores" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K3v1QeIo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svq7jl50vf7oor9gye77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K3v1QeIo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svq7jl50vf7oor9gye77.png" alt="Use same data store, different table" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;
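&lt;p&gt;The projection idea can be sketched concretely: the function below rebuilds a denormalized, per-order document from normalized write-side rows. The schema is illustrative; in practice the synchronization would run on change events or a schedule rather than in-process.&lt;/p&gt;

```python
# Sketch of projecting write-side rows into a read-optimized view
# (e.g. relational rows synchronized into NoSQL-style documents).

write_rows = [  # normalized rows, as the command side would store them
    {"order_id": "o-1", "customer": "alice", "item": "ssd", "qty": 2},
    {"order_id": "o-1", "customer": "alice", "item": "ram", "qty": 1},
]

def project(rows):
    """Rebuild one denormalized document per order for fast reads."""
    docs = {}
    for row in rows:
        doc = docs.setdefault(
            row["order_id"], {"customer": row["customer"], "items": []}
        )
        doc["items"].append({"item": row["item"], "qty": row["qty"]})
    return docs

read_store = project(write_rows)  # what the query side would read from
```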

&lt;h3&gt;Eventual consistency&lt;/h3&gt;

&lt;p&gt;Eventual consistency refers to a state in which all copies of the data will eventually be consistent and hold the same information, just not immediately.&lt;/p&gt;

&lt;p&gt;However, it's important to note that eventual consistency can introduce challenges in terms of data synchronization between the read and write sources. Strategies like event sourcing, where events are stored as the system's source of truth, can be used to address these challenges. Event sourcing complements CQRS by providing a reliable log of events that can be used to rebuild materialized views or synchronize read models.&lt;/p&gt;

&lt;h3&gt;Use cases and the problems CQRS can solve&lt;/h3&gt;

&lt;p&gt;Let’s start this part with one important thought: do not try to solve problems you don’t have now and will not have in the near future. Look at what’s in front of you. If CQRS would bring too many changes and requirements to your development, there is no need to focus on it while you’re a startup trying to enter a market.&lt;/p&gt;

&lt;p&gt;Here are a few use cases for this architecture that I think are quite relevant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to scale read and write capacity separately. Perhaps you need to increase and optimize your read throughput, or maybe your write capacity and speed are holding you back.&lt;/li&gt;
&lt;li&gt;Loose coupling: you want to decouple your application and keep the domain or business logic cleaner.&lt;/li&gt;
&lt;li&gt;You want more flexibility in your storage solutions, choosing a different technology stack for each side of the application.&lt;/li&gt;
&lt;li&gt;You are not concerned about eventual consistency (or rather, delayed consistency).&lt;/li&gt;
&lt;li&gt;You need to reduce complexity in an entangled code base where the domain logic has grown tremendously. Separating the different concerns in the operations improves readability and understanding of the overall processes.&lt;/li&gt;
&lt;li&gt;You need to perform business intelligence or reporting/analytics, where read patterns can differ significantly and queries combine a lot of different data. Where you need to think about BI and combining and querying data, you may need something like a warehousing solution; you can maintain a separate projection for BI purposes without interfering with the regular database used for OLTP (online transaction processing).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Some notes on event-sourcing&lt;/h3&gt;

&lt;p&gt;When combined with event-sourcing, the system stores all changes as an immutable sequence of events, enabling reliable auditing, temporal querying, and the ability to rebuild state at any point in time. This combination provides a solid foundation for building highly scalable, maintainable, event-driven applications, bringing a new level of flexibility and auditability to data management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UDYejOsc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yevdqbn13ygl8q7dhhwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UDYejOsc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yevdqbn13ygl8q7dhhwq.png" alt="Event-sourcing schema example" width="561" height="386"&gt;&lt;/a&gt;&lt;/p&gt;
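&lt;p&gt;A toy sketch of the rebuild-from-events idea: the event log is the source of truth, and folding over it (or over a prefix of it) reproduces the state at any point in time. The event names and fields are illustrative.&lt;/p&gt;

```python
# Event-sourcing sketch: events are the immutable source of truth, and the
# current state (or any read model) is rebuilt by replaying them in order.

events = [
    {"type": "AccountOpened", "account": "a-1", "balance": 0},
    {"type": "MoneyDeposited", "account": "a-1", "amount": 100},
    {"type": "MoneyWithdrawn", "account": "a-1", "amount": 30},
]

def replay(events):
    """Fold the event log into the current state of each account."""
    state = {}
    for e in events:
        if e["type"] == "AccountOpened":
            state[e["account"]] = e["balance"]
        elif e["type"] == "MoneyDeposited":
            state[e["account"]] += e["amount"]
        elif e["type"] == "MoneyWithdrawn":
            state[e["account"]] -= e["amount"]
    return state

assert replay(events) == {"a-1": 70}
# replaying only a prefix gives the state "as of" that point in time
assert replay(events[:2]) == {"a-1": 100}
```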

&lt;p&gt;In conclusion, CQRS is a powerful pattern that offers advantages in terms of scalability, performance, and maintainability. By separating the responsibilities of read and write operations, developers can optimize each side independently and design systems that better align with specific requirements. While implementing CQRS introduces complexity, it can be a valuable approach in scenarios where the benefits outweigh the costs, such as in complex domains or systems with distinct read and write patterns.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>api</category>
      <category>database</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Should you be using serverless? My use of serverless workflows</title>
      <dc:creator>Stanislav Ivanov</dc:creator>
      <pubDate>Wed, 24 May 2023 07:05:29 +0000</pubDate>
      <link>https://forem.com/stanivanov19/should-you-be-using-serverless-my-use-of-serverless-workflows-1jnp</link>
      <guid>https://forem.com/stanivanov19/should-you-be-using-serverless-my-use-of-serverless-workflows-1jnp</guid>
      <description>&lt;p&gt;There are a lot of opinions on the subject of serverless: whether it still has a place in 2023, and whether traditional architectures are better. I believe there are still good use cases for serverless, just as for any other tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: A lot of what I will write about is in the context of AWS, as this is the system I am most familiar with, although, to a great degree, the same concepts can be applied to any other cloud provider.&lt;/p&gt;

&lt;p&gt;After Amazon announced it was moving away from serverless technology for its Prime Video service, many people concluded that serverless was somehow bad and wouldn’t do a good job. The fact that Amazon decided to change the architecture of this specific service doesn’t mean in the slightest that serverless wouldn’t do a good job in another case, at another company, at another scale, and so on.&lt;/p&gt;

&lt;p&gt;For a startup, a prototype, a proof of concept, or a system that can benefit from distribution, the flexibility that a serverless, distributed architecture provides is, in my opinion, really valuable. You are not forced to think about scaling and managing servers, and can focus on the software solution you are building. When required, the cloud platform can usually scale your whole serverless infrastructure automatically to answer increasing demand. In addition, you are not forced to pay for capacity you do not need or are not currently using; with a standby VM, for example, you keep paying even when you are not getting frequent enough traffic to justify it.&lt;/p&gt;

&lt;p&gt;Serverless is not a one-size-fits-all solution; it’s really not that great in many cases, such as (but not limited to):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API development&lt;/li&gt;
&lt;li&gt;Running jobs that will require time to finish - batch jobs, processing, etc.&lt;/li&gt;
&lt;li&gt;Tightly coupled systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Cold start&lt;/h3&gt;

&lt;p&gt;There are several reasons why you generally wouldn’t want to go with serverless in these cases. Let’s start with the problem of the cold start. If you want to build an API, you will need, for example, an API gateway and Lambda functions (in the case of AWS) to take care of the logic. However, when a function has not been invoked for some time, it starts to “sleep”, awaiting new requests. When a request finally arrives, it takes some time for the cloud provider to prepare the environment and dependencies, which will definitely cause slow API responses in some cases. There are ways to mitigate this, but they almost always involve some cost. In most cases, traditional monoliths or microservices will be the way to go for APIs.&lt;/p&gt;

&lt;h3&gt;Invocation duration costs&lt;/h3&gt;

&lt;p&gt;Serverless functions are typically billed not only per invocation but also for the duration of the processing, which can increase the cost significantly if requests take a long time to process.&lt;/p&gt;

&lt;p&gt;Additionally, it’s quite possible for the performance to suffer in these cases.&lt;/p&gt;
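&lt;p&gt;To make the duration-based billing concrete, here is a rough back-of-the-envelope sketch. The per-GB-second and per-request rates below are assumptions for illustration only; check your provider’s current pricing page before relying on any numbers.&lt;/p&gt;

```python
# Rough sketch of duration-based serverless billing. Both rates are assumed
# example values, not authoritative pricing.
PRICE_PER_GB_SECOND = 0.0000166667   # assumed compute rate, USD
PRICE_PER_MILLION_REQUESTS = 0.20    # assumed request rate, USD

def monthly_cost(invocations, avg_duration_s, memory_mb):
    # compute is billed as memory (GB) x execution time (s), plus requests
    gb_seconds = invocations * avg_duration_s * (memory_mb / 1024)
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# same 5M invocations at 512 MB: a 200 ms handler vs. a 20 s batch-style job
fast = monthly_cost(5_000_000, 0.2, 512)
slow = monthly_cost(5_000_000, 20.0, 512)
# the long-running workload costs ~100x more in compute for the same traffic
```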

&lt;h3&gt;Cloud provider binding&lt;/h3&gt;

&lt;p&gt;Also, let’s not forget that choosing a serverless architecture ties you to a specific cloud provider, which means you are less flexible in the future if you need to change the architecture or the provider. Not to mention that if the provider decides to change the prices or conditions of the services you use, this will directly impact your business.&lt;/p&gt;

&lt;h3&gt;Good use cases&lt;/h3&gt;

&lt;p&gt;However, if these concerns don’t mean much to you, if you can benefit from a distributed or event-driven structure, if you want to save on capacity costs when starting your business or project, or if you can make use of asynchronous processing in your architecture (even in only part of your system), I believe serverless can be of use to you.&lt;/p&gt;

&lt;h3&gt;Integrations&lt;/h3&gt;

&lt;p&gt;One of the best selling points of serverless is how easily the individual services integrate with each other. For example, uploading a file to S3 can automatically trigger a serverless function to perform some operations. In the same way, you can work with a NoSQL database (DynamoDB in particular), different streams of data, scheduled events to run jobs, asynchronous processing services such as SNS (Simple Notification Service) and SQS (Simple Queue Service), and many, many others.&lt;/p&gt;
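&lt;p&gt;The S3-to-function integration looks roughly like the sketch below: AWS invokes the handler with an event describing the uploaded objects. The processing step is a placeholder, and the event here is a locally built fake trimmed to the fields the handler reads.&lt;/p&gt;

```python
# Minimal sketch of an S3-triggered Lambda-style handler. The real work
# (resizing an image, parsing a CSV, ...) is a placeholder comment.

def handler(event, context):
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ... real processing of the uploaded object would go here ...
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}

# locally simulated S3 notification, reduced to the fields used above
fake_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "report.csv"}}}
    ]
}
result = handler(fake_event, context=None)
# result == {"processed": ["uploads/report.csv"]}
```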

&lt;h3&gt;CDK and Infrastructure as code&lt;/h3&gt;

&lt;p&gt;It is really easy to deal with all of this infrastructure, which at first can be really intimidating, with a tool like the AWS Cloud Development Kit (CDK) or its derivatives like SST. With them, you can use code to define and configure all of the infrastructure you need, alongside the logic for the Lambda functions, for example.&lt;/p&gt;

&lt;h3&gt;Serverless Workflows&lt;/h3&gt;

&lt;p&gt;Although I wanted to make this my main point, I decided to only scratch the surface of what I intend to use automated serverless workflows for, and go over some of the most suitable use cases.&lt;/p&gt;

&lt;p&gt;With serverless workflows, or AWS Step Functions in particular, you can automate different flows in your business: the configuration you set up for the flow drives the entire logic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1kKaRm6W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rlmzye9cavkfknoelk2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1kKaRm6W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rlmzye9cavkfknoelk2y.png" alt="Step functions workflow with Lambda" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With step functions, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up a sequence of Lambda functions that run in order, one after the other, to perform some sort of processing on your behalf&lt;/li&gt;
&lt;li&gt;Add SNS events and run the flow only when an event is received&lt;/li&gt;
&lt;li&gt;Add logic branching based on parameters; for example, route a request to one Lambda when a parameter is above a threshold and to another Lambda when it is not&lt;/li&gt;
&lt;li&gt;Introduce manual action to the workflow by requiring human interaction to confirm an event before the state machine continues&lt;/li&gt;
&lt;li&gt;Run parallel branches that perform different operations on the payload at the same time&lt;/li&gt;
&lt;/ul&gt;
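&lt;p&gt;The branching bullet above can be sketched as an Amazon States Language (ASL) definition, built here as a Python dict and dumped to JSON: a Choice state routes to one Lambda when the input parameter exceeds a threshold, and to another otherwise. The function ARNs are hypothetical placeholders.&lt;/p&gt;

```python
import json

# Sketch of a Step Functions definition with a Choice state. The threshold,
# state names, and Lambda ARNs are illustrative only.
definition = {
    "StartAt": "CheckThreshold",
    "States": {
        "CheckThreshold": {
            "Type": "Choice",
            "Choices": [
                {
                    "Variable": "$.score",
                    "NumericGreaterThan": 50,
                    "Next": "AboveThreshold",
                }
            ],
            "Default": "BelowThreshold",
        },
        "AboveThreshold": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:above",
            "End": True,
        },
        "BelowThreshold": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:below",
            "End": True,
        },
    },
}

print(json.dumps(definition, indent=2))
```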

&lt;p&gt;There are many more functionalities in Step Functions than what I just listed. Traditionally, you can create your own workflow inside your software by implementing the state machine design pattern, achieving a good part of what this service provides. However, with the number of inter-service integrations provided, the flexible scalability, and the relative ease of use, there are definitely reasons to pick serverless workflows for your project.&lt;/p&gt;

&lt;p&gt;I am just starting a new open-source project where I can showcase some of this functionality, and within a few weeks I will be able to share some code. I will create an automated pipeline for candidate profile prequalification (it’s just a test project): when a new candidate profile is entered into the system, based on a set of parameters, it can discard applications that are not a good match or return a candidate score.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Branching logic: if the candidate doesn’t speak English, the profile can be discarded&lt;/li&gt;
&lt;li&gt;If the candidate lacks a predefined amount of experience, we can require manual confirmation to continue the process&lt;/li&gt;
&lt;li&gt;When the candidate is moved to the interview stage, automatically send an email to them and to the interviewer&lt;/li&gt;
&lt;li&gt;Score the candidate based on checks we can implement with parallel processing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And many others.&lt;/p&gt;
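&lt;p&gt;As a toy sketch, the same prequalification flow can be expressed as a plain state machine in code, the traditional alternative mentioned earlier. All rules and thresholds here are illustrative stand-ins for the real parameters.&lt;/p&gt;

```python
# Toy state machine for the candidate prequalification flow above.
# The states mirror the numbered steps; thresholds are illustrative.

def prequalify(candidate):
    state = "screening"
    while True:
        if state == "screening":
            if not candidate["speaks_english"]:
                return {"status": "discarded"}             # branching (step 1)
            state = "experience_check"
        elif state == "experience_check":
            if candidate["years_experience"] < 3:
                return {"status": "needs_manual_review"}   # manual step (2)
            state = "scoring"
        elif state == "scoring":
            # stand-in for the parallel checks (step 4): average partial scores
            checks = [candidate["tech_score"], candidate["culture_score"]]
            return {"status": "scored", "score": sum(checks) / len(checks)}

result = prequalify(
    {"speaks_english": True, "years_experience": 5,
     "tech_score": 80, "culture_score": 70}
)
# result == {"status": "scored", "score": 75.0}
```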

&lt;p&gt;This is going to be my use case for this technology. There are many others where serverless workflow management can really excel, like ML, data processing, and different processing pipelines, but I will not get into them here, as they are a separate matter.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Navigating the Cloud: Creating Container Applications in AWS</title>
      <dc:creator>Stanislav Ivanov</dc:creator>
      <pubDate>Fri, 28 Apr 2023 20:43:14 +0000</pubDate>
      <link>https://forem.com/stanivanov19/navigating-the-cloud-creating-container-applications-in-aws-4kpd</link>
      <guid>https://forem.com/stanivanov19/navigating-the-cloud-creating-container-applications-in-aws-4kpd</guid>
      <description>&lt;p&gt;Containers are the portable packages we can use to wrap, execute, and isolate our software. They ensure the software will run the same in any environment you run the container in. A container is light, can run by itself, and can include all your dependencies. Containers are created from images, which are built from your whole project according to a template, or plan. Additionally, containers can seem pretty similar to virtual machines, but they are fundamentally different in how they operate: containers “step” onto the host OS, whereas VMs each have their own OS “stepping” onto the hardware.&lt;/p&gt;

&lt;p&gt;We use containers in the cloud simply because they are easy to deploy and configure; setting up your first cloud container application takes only a few minutes. In the AWS cloud, we can choose between ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). If you need Kubernetes, you can always look to EKS; however, in this article we will be using ECS because of the simplicity it provides.&lt;/p&gt;

&lt;p&gt;We begin by defining our project's Dockerfile and building the container image to kick off our process. Configuring the Dockerfile will not be part of this article, as it can be quite lengthy depending on the context, and there are plenty of simple guides to writing one online. Once you have that out of the way, we can build the project into an image by running the build command as in the example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t tagname .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next important task is to create a container registry using the AWS ECR (Elastic Container Registry) service. A container repository is like a collection of uploaded images. AWS offers public and private repositories, and it's up to you which to choose; for the most part they are free, depending on usage. Create your registry simply by choosing the type - public or private - and the repository name. As soon as that's done, AWS will show you a list of commands you can use to build and push your images to the repository, which makes it really easy.&lt;/p&gt;
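&lt;p&gt;For reference, the push sequence the console generates looks roughly like this - the account ID, region, and repository name below are placeholders you would replace with your own:&lt;/p&gt;

```shell
# Authenticate Docker against your ECR registry (placeholder account/region)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the locally built image for the repository, then push it
docker tag tagname:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest
```

&lt;p&gt;These commands require AWS credentials and a running Docker daemon, so always copy the exact versions the console shows for your repository.&lt;/p&gt;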

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QQwNjIQv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xs68jnrtjptkmlkdyfez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QQwNjIQv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xs68jnrtjptkmlkdyfez.png" alt="ECR repository" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GdMoumTK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pynammf1afbwc9jitlgq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GdMoumTK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pynammf1afbwc9jitlgq.png" alt="Create ECR repository" width="800" height="1076"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jy8TQ58A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbgsrvqtn01sdhj4e78x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jy8TQ58A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbgsrvqtn01sdhj4e78x.png" alt="ECR image push commands" width="800" height="705"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have pushed your image to ECR, it will be available for use in our container setup and we can proceed with ECS. The first resource you will create is a cluster; it's also the first thing you will see when you open the ECS menu. A container cluster is a way to group the deployed containers, later organized into services and tasks. Additionally, we define the underlying infrastructure, such as EC2 instances, within the scope of the cluster, meaning we choose which compute provider will run our software. An important part of setting up the cluster is the choice between EC2 and Fargate (the default) infrastructure. If you choose to base your cluster on EC2, you will be required to configure an auto-scaling group, either by creating a new one or using an existing one. One important note on the monitoring options: container insights, disabled by default, can be useful as they provide data on the overall health and load of the cluster, but they incur additional charges for CloudWatch metrics, so don't let that catch you off guard, as it did me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CiqY_mSn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bcjjp198fqytnez153sz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CiqY_mSn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bcjjp198fqytnez153sz.png" alt="Create ECS cluster" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mi89OM7e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8r58vokua9bex6yorie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mi89OM7e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8r58vokua9bex6yorie.png" alt="ECS Cluster infrastructure" width="800" height="878"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next is the task definition, where we outline the container: the image that will be our app, port mappings, secrets or environment variables, and health check settings. The task definition is the basic blueprint used by the service. It is versioned, meaning you can keep and use several versions with different configurations.&lt;/p&gt;
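&lt;p&gt;As a rough idea of what a task definition captures, a minimal Fargate-style definition in JSON might look like the following sketch - the image URI, names, and values are placeholders:&lt;/p&gt;

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "environment": [{ "name": "APP_ENV", "value": "production" }]
    }
  ]
}
```

&lt;p&gt;The console form fills in the same fields for you, so you rarely have to write this JSON by hand.&lt;/p&gt;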

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--elYhxdL2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4llnjlixirvop3sxid4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--elYhxdL2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4llnjlixirvop3sxid4.png" alt="Create ECS Task Definition" width="800" height="753"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LcVeuNPN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nzklfqlxljl172izf5n1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LcVeuNPN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nzklfqlxljl172izf5n1.png" alt="ECS Task Definition overview" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What follows is the choice of whether to create a service or a task. Services create tasks that run the software, so they are essentially based on them. I recommend standalone tasks only in a few specific cases, where you have a separate, standalone piece of functionality or a job. For now, we will follow through with the create service option. &lt;/p&gt;

&lt;p&gt;The first choice you are asked to make is whether to use a launch type or a capacity provider strategy. You can think of this as choosing between the easy option and the harder one; the capacity provider strategy will be harder to understand at first. If this is your first time working with ECS, I recommend going with the launch type option, as ECS will automatically create any resources it needs to run your service, such as EC2 instances, ASGs (Auto Scaling Groups), and others, all according to the cluster's configuration. The capacity provider strategy instructs the service how to distribute the tasks over your defined capacity, which for EC2 can be one or more ASGs, with a choice of which provider takes priority. For Fargate there are two predefined options: FARGATE and FARGATE_SPOT. This setting is unnecessary in many cases, so we will not dwell on it further. &lt;/p&gt;

&lt;p&gt;Of course, the services/tasks depend on the task definition we created beforehand, so we simply point to the TD (Task Definition) and its specific version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---gZsLkoD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87zn7cda79tysd3ihk35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---gZsLkoD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87zn7cda79tysd3ihk35.png" alt="Create ECS Service" width="800" height="943"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the rest of the config options, we can simply stick with the defaults, as they will do the trick just fine in most cases. Depending on the level of customization you need, you can choose otherwise - for example, in the service type field. In specific cases you may need the daemon type for your services, although that will rarely be the case.&lt;/p&gt;

&lt;p&gt;If EC2 is the compute option you have decided on and you intend to use multiple instances in your ASG, you can also configure and auto-create a load balancer, which is going to be a must.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tTfnK0J2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/io77fajypdfsmg2xw05d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tTfnK0J2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/io77fajypdfsmg2xw05d.png" alt="ECS Service Load Balancing" width="800" height="921"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One interesting option is service auto-scaling: this way ECS can scale your service depending on specific metrics or policies, much like the way ASGs scale up and down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iVTJpUHf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9p0h725ly0l1qgcby9rt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iVTJpUHf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9p0h725ly0l1qgcby9rt.png" alt="ECS Service auto scaling" width="800" height="1034"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you hit the create button, you will be able to see your service in the ECS list, and after a while a task should appear as well, indicating the process was a success. Note that it can take a bit of time for the health checks to pass, the configuration to complete, and the deployed tasks to roll out. Once you see the active status, you can follow deployments or inspect the task count from the main service list, or for more detail, from the deployments and events menus within the service. Container statuses can also be inspected: if you have configured multiple containers, you can check on each of them individually from “Configuration and tasks”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VbYcEMbP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftgrdks6is1d0p4zhc6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VbYcEMbP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftgrdks6is1d0p4zhc6v.png" alt="Created ECS Service overview and status" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pyw9KBg_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi7j6m3lfuypjr7dkgi3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pyw9KBg_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi7j6m3lfuypjr7dkgi3.png" alt="Containers overview after service is created" width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Exploring the Chain Builder Pattern for Better Code Organization: Seeking Feedback from Developers</title>
      <dc:creator>Stanislav Ivanov</dc:creator>
      <pubDate>Sun, 26 Mar 2023 16:09:21 +0000</pubDate>
      <link>https://forem.com/stanivanov19/exploring-the-chain-builder-pattern-for-better-code-organization-seeking-feedback-from-developers-25fh</link>
      <guid>https://forem.com/stanivanov19/exploring-the-chain-builder-pattern-for-better-code-organization-seeking-feedback-from-developers-25fh</guid>
<description>&lt;p&gt;Have you ever struggled to manage the order of execution in your code while trying to keep the individual functions independent of what follows? Perhaps you have a complex algorithm that requires multiple steps, or you need to perform a sequence of operations on an object, but you're unsure where to start. If so, you're not alone.&lt;/p&gt;

&lt;p&gt;Fortunately, there is a design pattern that can help: the chain builder pattern. In this article, we'll explore this pattern in depth and explain how it can be used to manage the order of execution in your code. I'd like to think I came up with it myself while struggling with code optimization, but I'm pretty sure that's not the case. This is my variation of the chain of responsibility behavioral pattern, where all of the building components are in a single class.&lt;/p&gt;

&lt;p&gt;At its core, the chain builder pattern is a way to organize a series of operations into a sequence, with the order of execution defined by the programmer. This pattern is especially useful in situations where you need to perform a series of operations on an object, but the order in which those operations are performed may vary depending on the context.&lt;/p&gt;

&lt;p&gt;The key to the chain builder pattern is the use of a variable to set the order of execution. This variable acts as a "switch" that determines which operation will be performed next. By setting this variable to different values, you can control the order in which the operations are performed.&lt;/p&gt;

&lt;p&gt;To illustrate the chain builder pattern, let's consider a simple example. Imagine you have an object that represents a car, and you need to perform a series of operations on that object to get it ready for a race. The operations might include installing a new engine, adding racing tires, and adjusting the suspension.&lt;/p&gt;

&lt;p&gt;Using the chain builder pattern, you would define each of these operations as a separate function or method, and then use a variable to control the order in which they are performed.&lt;br&gt;
Note that my example is in PHP, but this pattern and the whole code can easily be translated to most OOP programming languages, TypeScript for example.&lt;/p&gt;

&lt;p&gt;How I approach the problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define the underlying interface for all “Builders” - this is what I call the chain builder classes that contain the business logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rOhKZP8r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4n6515675ai0ujm4o1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rOhKZP8r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4n6515675ai0ujm4o1l.png" alt="Chain builder interface definition" width="880" height="621"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All builders will implement the facade “build” method which will run the whole sequence of operations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement the base builder functionality and make it extendable through the sub-classes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The “returnNext” method is the tool that iterates over the order of functions we will define in the next step. It checks whether there is a next function in the order after the one returnNext was called from and invokes it; if not, it simply returns the accumulated data.&lt;br&gt;
The “parseMethodName” helper extracts just the caller method name from the full name, which also contains the class name and namespace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WpqW6N-r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c96t4970biwuayvarctz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WpqW6N-r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c96t4970biwuayvarctz.png" alt="Base chain builder definition" width="880" height="1455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create individual builders - set up the individual methods that carry the logic, and assign the variable that holds the order of execution. In the following example, we will be building the parameters to pass to a placeholder class that creates a car. The focus will be on the process of building the parameters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The data array is a way to build up data throughout all stages of the chain.&lt;br&gt;
The params array holds the configuration or external parameters on which the builders base their decisions.&lt;br&gt;
Aside from these inputs, the only other thing we need to take care of is passing the name of the current caller method, so we can extract the current and next builder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jl220MzW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afvhceixopjbep1frld3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jl220MzW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afvhceixopjbep1frld3.png" alt="Concrete chain builder definition" width="880" height="1223"&gt;&lt;/a&gt;&lt;/p&gt;
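&lt;p&gt;Since the examples above are screenshots, here is a compact TypeScript translation sketch of the same idea (the article notes the pattern ports easily to languages like TypeScript). Class and method names are illustrative, and the parseMethodName helper is unnecessary here because each step passes its own name explicitly.&lt;/p&gt;

```typescript
// Illustrative TypeScript sketch of the chain builder idea.
type Data = { [key: string]: unknown };

abstract class ChainBuilder {
  // Subclasses define the variable order of execution.
  protected abstract order: string[];
  protected data: Data = {};

  constructor(protected params: Data = {}) {}

  // Facade method: kick off the chain with the first step.
  build(): Data {
    return (this as any)[this.order[0]]();
  }

  // Call the next method in the order after `current`,
  // or return the accumulated data when the chain is done.
  protected returnNext(current: string): Data {
    const next = this.order[this.order.indexOf(current) + 1];
    return next ? (this as any)[next]() : this.data;
  }
}

class RaceCarBuilder extends ChainBuilder {
  protected order = ["installEngine", "addTires", "adjustSuspension"];

  installEngine(): Data {
    this.data.engine = "v8";
    return this.returnNext("installEngine");
  }

  addTires(): Data {
    this.data.tires = "racing";
    return this.returnNext("addTires");
  }

  adjustSuspension(): Data {
    this.data.suspension = "stiff";
    return this.returnNext("adjustSuspension");
  }
}
```

&lt;p&gt;Calling new RaceCarBuilder().build() runs the three steps in the declared order and returns the accumulated data; reordering the strings in the order array is all it takes to change the sequence.&lt;/p&gt;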

&lt;p&gt;After we have defined our control structures, all that's left is to call them and wait.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VVXWKqWw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bkruz5hunvo6wr8rp0sk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VVXWKqWw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bkruz5hunvo6wr8rp0sk.png" alt="Client code to create object" width="880" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a structure I started working with only recently, so even if I thought of some of its components myself, it is really just a blend of different existing patterns. I still lack the basis for comparison to say whether it is good, viable, bad, or actually useful. &lt;br&gt;
I would like to open a discussion about this; if you have any thoughts, I would really appreciate the feedback. And if you can help me optimize it, let's go for it!&lt;/p&gt;

</description>
      <category>designpatterns</category>
      <category>php</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Networking in AWS: Demystifying the Basics - Part 1</title>
      <dc:creator>Stanislav Ivanov</dc:creator>
      <pubDate>Sat, 11 Mar 2023 18:59:37 +0000</pubDate>
      <link>https://forem.com/stanivanov19/networking-in-aws-demystifying-the-basics-part-1-4lkk</link>
      <guid>https://forem.com/stanivanov19/networking-in-aws-demystifying-the-basics-part-1-4lkk</guid>
<description>&lt;p&gt;This is going to be the first in a series of many articles about what it means to set up and configure a virtual private cloud in AWS. When it comes to networking, there are simply far too many details that run too deep for a beginner to follow. This article starts by defining the main building blocks of cloud networking (in this case, AWS) and their inter-dependencies.&lt;/p&gt;

&lt;h1&gt;
  
  
  AWS VPC
&lt;/h1&gt;

&lt;p&gt;Amazon Web Services (AWS) Virtual Private Cloud (VPC) is a powerful tool that lets you create a virtual network in the cloud. A VPC is a logically isolated group within your AWS account that can contain sub-groups (later called subnets), IP addresses, gateways, and of course the resources you place within these groups, so that you can secure and protect them under unified conditions. You are not limited to a single VPC per account: a default VPC is created when you set up your account, but you can create or shut down VPCs on demand. If you have more than one private cloud, they can be isolated from each other, or connected with the help of other services so they can communicate.&lt;/p&gt;

&lt;h1&gt;
  
  
  VPC Subnets
&lt;/h1&gt;

&lt;p&gt;Subnets are logical divisions (or sub-groups, if you want to call them that) within a VPC that allow you to break your network down into smaller, more manageable pieces. Each subnet is identified by a unique IP address range called a CIDR block. For me, CIDR and IP addressing were among the most complicated topics in cloud networking, especially as I started encountering these terms more and more often. In a private group like a VPC, you must determine the range of private IPs (IPv4 and/or IPv6) you will allow within your group. Private IPs are not accessible from the internet, isolating the resources associated with them unless they also have a public IP.&lt;/p&gt;

&lt;p&gt;There are two types of subnets available - public and private. As the name suggests, the public subnet type allows internet access to all resources placed within its limits, and all of these resources are associated with a public IPv4/IPv6 address. Private subnets, in contrast, cut off direct access to the internet, and their resources and instances do not have an assigned public IP.&lt;/p&gt;

&lt;p&gt;Getting back to CIDR - what it is and why it matters - CIDR notation defines the IP address range for a subnet, with a number after a slash (/) representing the number of bits in the network ID. It is simply a tool to group and limit the number of IPs your group and sub-groups can have under a specific IP range. Let's use the following example to illustrate how it works out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A CIDR range of 10.0.0.0/16 means the VPC or subnet can only allocate IPs from 10.0.0.0 to 10.0.255.255, effectively limiting the number of possible IPs in the subnet or VPC. It's a form of housekeeping for the network that helps prevent IP wastage (something we will discuss another time).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A CIDR range of 10.0.0.0/24 means we can only have IPs from 10.0.0.0 to 10.0.0.255, leaving the pool of IPs even smaller.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
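&lt;p&gt;The arithmetic behind those ranges is just powers of two: an IPv4 address has 32 bits, so a /N prefix leaves 32 - N bits for the addresses inside the block. A tiny sketch:&lt;/p&gt;

```typescript
// Number of addresses available under an IPv4 CIDR prefix of length N:
// 32 - N bits remain, giving 2^(32 - N) addresses.
function addressCount(prefixLength: number): number {
  return Math.pow(2, 32 - prefixLength);
}

addressCount(16); // 10.0.0.0/16 spans 65,536 addresses (10.0.0.0 - 10.0.255.255)
addressCount(24); // 10.0.0.0/24 spans 256 addresses (10.0.0.0 - 10.0.0.255)
```

&lt;p&gt;Note that AWS reserves a handful of addresses in every subnet, so the usable count is slightly lower than the raw total.&lt;/p&gt;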

&lt;p&gt;When initially setting up your VPC you can pick a simple CIDR, but if you have several VPCs, make sure their CIDR ranges do not overlap. Subnets are similar: you define CIDR blocks for them as well, but they must fall within the main VPC CIDR range, which acts as the one big pool of IPs that can exist.&lt;/p&gt;

&lt;h1&gt;
  
  
  IP Addresses
&lt;/h1&gt;

&lt;p&gt;Each device on your network requires a unique IP address. Within a VPC, you can assign IP addresses to individual instances using Elastic IP addresses or the VPC DHCP options set. You can also assign private IP addresses - which are not publicly routable over the internet - to virtual machines, database instances, or really any resource.&lt;/p&gt;

&lt;p&gt;I understand all of this information is overwhelming, but this is not something you need to grasp from the get-go. Understanding how networking works is a step-by-step process, and this series is also a chance for us to get into how different networking architectures work, how they differ, and why we need to concern ourselves with them. This is a complex topic, but remember: we are only defining the basic tools in our toolset. Soon we will start using them, and all of it will make sense.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
