<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: mohamed ahmed</title>
    <description>The latest articles on Forem by mohamed ahmed (@mohamedahmed00).</description>
    <link>https://forem.com/mohamedahmed00</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F458599%2Ff0563248-b528-4542-9010-a41c0333a66d.jpeg</url>
      <title>Forem: mohamed ahmed</title>
      <link>https://forem.com/mohamedahmed00</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mohamedahmed00"/>
    <language>en</language>
    <item>
      <title>Active record vs Object mapper ORM</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Thu, 28 Sep 2023 15:34:28 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/active-record-vs-object-mapper-orm-1gpg</link>
      <guid>https://forem.com/mohamedahmed00/active-record-vs-object-mapper-orm-1gpg</guid>
      <description>&lt;p&gt;Object mapper and active record are two popular design patterns for object relational mapping (ORM). ORMs are used to map objects in code to relational databases. this allows developers to interact with the database in a more object oriented way, without having to write SQL queries.&lt;br&gt;
The key difference between object mapper and active record is the level of abstraction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Object mapper:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Object Mapper is an architectural pattern that maps objects to database tables.&lt;/li&gt;
&lt;li&gt;It separates the business logic from the persistence layer, making the code more maintainable and testable.&lt;/li&gt;
&lt;li&gt;It provides a layer of abstraction between the application and the database, allowing developers to work with objects instead of writing SQL queries directly.&lt;/li&gt;
&lt;li&gt;It can handle complex relationships between objects and perform database operations using methods like create, read, update, and delete (CRUD).&lt;/li&gt;
&lt;li&gt;It is typically provided by frameworks such as Hibernate in Java, Entity Framework in .NET, or Doctrine in PHP, which offer features like lazy loading and query optimization.&lt;/li&gt;
&lt;li&gt;It generally offers better performance due to its optimized query generation.&lt;/li&gt;
&lt;li&gt;It supports multiple databases by providing database agnostic querying and data mapping capabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Active record:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Active record is an object-relational mapping pattern used in frameworks like Ruby on Rails and Laravel.&lt;/li&gt;
&lt;li&gt;It tightly couples the business logic with the database by representing each database table as a class and each row as an instance of that class.&lt;/li&gt;
&lt;li&gt;It provides a simple and intuitive interface for performing database operations, as developers can directly manipulate objects to create, read, update, and delete records.&lt;/li&gt;
&lt;li&gt;It automatically generates SQL queries based on the object's state, reducing the need for manual SQL query writing.&lt;/li&gt;
&lt;li&gt;It simplifies the development process by handling the mapping between objects and database tables transparently.&lt;/li&gt;
&lt;li&gt;It is suitable for smaller projects or applications where simplicity and rapid development are prioritized.&lt;/li&gt;
&lt;li&gt;It may have performance limitations due to its automatic query generation.&lt;/li&gt;
&lt;/ul&gt;
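&lt;p&gt;The contrast between the two patterns can be sketched in a few lines of Python (an illustrative sketch using the standard library's sqlite3, not any real ORM's API; the table and class names are made up for this post):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Active record: the model object knows how to persist itself.
class UserRecord:
    def __init__(self, name, id=None):
        self.id, self.name = id, name

    def save(self):
        cur = conn.execute("INSERT INTO users (name) VALUES (?)", (self.name,))
        self.id = cur.lastrowid

# Object mapper (data mapper): a plain domain object plus a separate
# mapper that moves it to and from the table.
class User:
    def __init__(self, name):
        self.name = name          # no persistence code in the domain object

class UserMapper:
    def insert(self, user):
        conn.execute("INSERT INTO users (name) VALUES (?)", (user.name,))

    def find_by_name(self, name):
        row = conn.execute(
            "SELECT name FROM users WHERE name = ?", (name,)
        ).fetchone()
        return User(row[0]) if row else None

UserRecord("alice").save()        # active record: the object persists itself
UserMapper().insert(User("bob"))  # mapper: persistence lives outside the object
```

&lt;p&gt;Note how UserRecord mixes business data and SQL in one class, while User stays persistence-free and only UserMapper touches the database: exactly the coupling difference between the two patterns.&lt;/p&gt;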

&lt;p&gt;When choosing between object mapper and active record, it is important to consider the specific needs of your application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you need a highly flexible and scalable ORM, then object mapper is the better choice.&lt;/li&gt;
&lt;li&gt;If you need a simple and easy-to-use ORM, then active record is the better choice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are some examples of when you might want to choose one pattern over the other:&lt;br&gt;
&lt;strong&gt;Object mapper:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need a highly flexible and scalable ORM.&lt;/li&gt;
&lt;li&gt;You need to support complex database queries and relationships.&lt;/li&gt;
&lt;li&gt;You have a team of experienced developers who can implement and maintain an object mapper.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Active record:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need a simple and easy to use ORM.&lt;/li&gt;
&lt;li&gt;You are developing a small or medium sized application.&lt;/li&gt;
&lt;li&gt;You have a team of developers who are new to ORMs or who are more familiar with active record.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, an object mapper separates the database logic from the business logic, providing a more flexible and abstract way to work with databases. On the other hand, active record tightly couples the business logic with the database, simplifying the development process but sacrificing some flexibility. The choice between them depends on the specific requirements and preferences of the project.&lt;/p&gt;

</description>
      <category>database</category>
      <category>softwaredevelopment</category>
      <category>java</category>
      <category>php</category>
    </item>
    <item>
      <title>RabbitMQ vs Kafka key differences</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Tue, 08 Aug 2023 08:26:54 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/rabbitmq-vs-kafka-key-differences-1jdb</link>
      <guid>https://forem.com/mohamedahmed00/rabbitmq-vs-kafka-key-differences-1jdb</guid>
      <description>&lt;p&gt;The two most common ways to implement communication between services are RabbitMQ and Kafka.&lt;br&gt;
What are they, and how do they differ from each other?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is RabbitMQ?&lt;/strong&gt;&lt;br&gt;
RabbitMQ is a message broker that implements the Advanced Message Queuing Protocol (AMQP), a standard for messaging interoperability. It is based on a broker queue model, where producers send messages to exchanges and consumers receive messages from queues. RabbitMQ can route messages based on various criteria, such as topic, headers, fanout, or direct routing. RabbitMQ is designed for flexibility and reliability.&lt;br&gt;
AMQP is an open standard for passing business messages between applications or organizations, which allows you to be platform agnostic. AMQP passes messages over TCP/IP connections and transmits only binary data across them. Among the features AMQP offers are message queuing, reliability, and routing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Kafka?&lt;/strong&gt;&lt;br&gt;
Kafka is an event streaming platform. The Kafka docs define event streaming as "the practice of capturing data in real time from event sources like databases, mobile devices, cloud services, and software applications in the form of streams of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting to the event streams in real time as well as retrospectively; and routing the event streams to different destination technologies as needed."&lt;br&gt;
In other words, event streaming is the process of gathering data from many points that produce events, then storing and processing that data based on our needs. Kafka is event based and uses streams, which can be thought of as a huge pipeline of infinite data.&lt;br&gt;
Kafka provides the ability to publish an event to a data stream, store these streams of events, process these streams of data, and finally subscribe to these streams. Kafka is a distributed system consisting of servers and clients communicating over the TCP protocol.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Push vs Pull Based Messaging&lt;/strong&gt;&lt;br&gt;
RabbitMQ uses what is called a smart producer, meaning the producer of the data decides when to send it. A prefetch limit is set on the consumer's end to keep the consumer from being overwhelmed. Such a push-based system gives the queue an almost FIFO structure: almost, because some messages can be processed faster than others, leading to an almost in-order queue.&lt;/p&gt;

&lt;p&gt;Kafka, on the other hand, uses a smart consumer, which means the consumer has to request the messages it wants from the stream. Kafka also lets each consumer choose the offset from which it wants to read, so all consumers can consume and process events at their own pace. One benefit of this pull system is that consumers can easily be added at any time, and the application can be scaled with new services without any changes to Kafka.&lt;/p&gt;
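&lt;p&gt;The pull model can be illustrated with a tiny in-memory log (a sketch of the idea only, not Kafka's actual client API; all names here are invented):&lt;/p&gt;

```python
class EventLog:
    """Append-only log: consumers pull at their own pace via offsets."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def poll(self, offset, max_events=10):
        # Nothing is deleted; each consumer just remembers where it stopped.
        return self.events[offset:offset + max_events]

log = EventLog()
for n in range(5):
    log.append({"id": n})

fast_consumer = log.poll(offset=0)   # reads all five events
late_consumer = log.poll(offset=3)   # joined late, still sees events 3 and 4
```

&lt;p&gt;Because the log keeps every event, a new consumer can start from offset 0 at any time without any change to the producers.&lt;/p&gt;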

&lt;p&gt;&lt;strong&gt;Queues vs Topics&lt;/strong&gt;&lt;br&gt;
RabbitMQ uses a basic queue data structure: producers publish messages to an exchange, the exchange routes them into queues, and consumers take messages from the head of a queue. Exchanges are what route messages in RabbitMQ, and there are different exchange types for different use cases: direct, topic, headers, and fanout. In a direct exchange, messages are routed based on the exact routing key of the message. The headers exchange ignores the routing key and instead uses the message headers to decide where to send the message. The topic exchange routes using the routing key like the direct exchange, but it allows two wildcards: * (matches exactly one word) and # (matches any number of words). In the fanout exchange, messages sent to the exchange are broadcast to all exchanges and queues subscribed to it.&lt;/p&gt;
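&lt;p&gt;The two topic-exchange wildcards can be sketched with a small matcher (an illustrative helper written for this post, not RabbitMQ's own code; patterns and keys are dot-separated words):&lt;/p&gt;

```python
def matches(pattern, key):
    """True if an AMQP-style binding pattern matches a routing key:
    '*' matches exactly one word, '#' matches zero or more words."""
    return _match(pattern.split("."), key.split("."))

def _match(pat, words):
    if not pat:
        return not words
    head, rest = pat[0], pat[1:]
    if head == "#":
        # '#' may swallow zero or more words
        return any(_match(rest, words[i:]) for i in range(len(words) + 1))
    if not words:
        return False
    if head == "*" or head == words[0]:
        return _match(rest, words[1:])
    return False
```

&lt;p&gt;For example, "logs.*" matches "logs.error" but not "logs.error.db", while "logs.#" matches both.&lt;/p&gt;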

&lt;p&gt;Kafka uses topics. A topic can be described as a folder in a file system, with each event as a file inside it. There can be zero, one, or multiple producers of events and zero, one, or multiple consumers. Events in Kafka aren't deleted when consumed, and you can configure how long Kafka should retain each topic.&lt;br&gt;
Each topic in Kafka can have multiple partitions, and each partition can be thought of as a bucket. When producing an event, its key determines which partition the event is added to: events with the same event key are always written to the same partition, and Kafka guarantees that any consumer of a given partition will consume the events from that partition in order.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some other quick differences&lt;/strong&gt;&lt;br&gt;
Events in Kafka can be replayed, since they are not deleted as soon as they are consumed, whereas events in RabbitMQ cannot be replayed, since they are deleted once consumed. Kafka can process the data in its streams before consumers consume it, whereas RabbitMQ does not provide functionality to process data in its queues.&lt;br&gt;
In RabbitMQ it is possible to assign message priorities and consume messages based on the priority provided for each message; RabbitMQ supports creating a priority queue, whereas Kafka has no such functionality. Kafka, for its part, can achieve high throughput with limited resources, a necessity for big data use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So what are their use cases?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RabbitMQ:&lt;/strong&gt;&lt;br&gt;
Complex routing: it is very easy to route RabbitMQ messages based on routing keys. If there is a requirement to route messages based on a few criteria, RabbitMQ's exchange types can be used to achieve it.&lt;br&gt;
Long-running processes: RabbitMQ can be preferred where there are long-running tasks, because there usually isn't a need for Kafka's strengths of storing, processing, and replaying event data. A queue of jobs that need to get done satisfies this use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kafka:&lt;/strong&gt;&lt;br&gt;
Log aggregation: Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption.&lt;br&gt;
Stream processing: many users of Kafka process data in pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.&lt;br&gt;
High activity: Kafka is preferred for high-volume data ingestion from IoT devices and other sources that consistently produce a lot of events.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>rabbitmq</category>
      <category>microservices</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Distributed system communication styles</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Thu, 15 Jun 2023 00:03:26 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/distributed-systems-communication-style-481a</link>
      <guid>https://forem.com/mohamedahmed00/distributed-systems-communication-style-481a</guid>
      <description>&lt;p&gt;In a distributed system, each subsystem runs on a different machine, and each service is a component or process of an enterprise application. These services must handle requests from the application's clients, and they often collaborate to handle those requests, so the services interact with each other. In a monolithic application, all components are part of the same application and run on the same machine, so a monolith doesn't require this complexity of interaction between services.&lt;br&gt;
We can classify this communication into two approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Synchronous communication style.&lt;/li&gt;
&lt;li&gt;Asynchronous communication style.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Synchronous communication style:&lt;/strong&gt;&lt;br&gt;
In this communication style, the client service expects a response within a certain time and blocks the request while waiting for it. This style is usually implemented over the HTTP protocol via REST, which is the simplest possible solution for synchronous interaction between services: the client makes a REST call to another service, sends a request, and waits for the response.&lt;br&gt;
The synchronous approach does have drawbacks, such as timeouts and strong coupling, though we can limit the damage of failures and timeouts by using the circuit breaker pattern.&lt;br&gt;
We can also use gRPC instead of REST.&lt;br&gt;
This style is good in situations like payment: the order must be paid before the user sees the success page, so the order service must wait for a response from the payment service before continuing the order life cycle.&lt;/p&gt;

&lt;p&gt;This style has the following tradeoffs:&lt;br&gt;
1. Coupling between services.&lt;br&gt;
2. Blocked requests add latency between services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zl4dOuld--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0pvw27ws8b36rb5xek94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zl4dOuld--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0pvw27ws8b36rb5xek94.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asynchronous communication style:&lt;/strong&gt;&lt;br&gt;
In this communication style, the client service doesn't wait for a response from the other service; it doesn't block the request while waiting. This type of communication is made possible by lightweight message brokers. The producer service doesn't wait for a response: it just generates a message and sends it to the broker, waiting only for an acknowledgement from the broker confirming that the message was received.&lt;br&gt;
There are various tools that support lightweight messaging; you can choose one of the following message brokers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RabbitMQ&lt;/li&gt;
&lt;li&gt;Apache Kafka&lt;/li&gt;
&lt;li&gt;ActiveMQ&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's take a real use case for this communication style. When a user places an order, the user should receive a confirmation email and SMS. The order service pushes two messages to the broker: one for the email service and one for the SMS service. Once the order service receives an acknowledgement from the broker, it continues executing the request, and the other services receive and handle the messages eventually. If the SMS and email services are down, the messages simply remain in the queue.&lt;/p&gt;
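&lt;p&gt;The order/email/SMS flow can be sketched with a toy in-memory broker (illustrative only; real brokers add persistence, delivery guarantees, and network transport):&lt;/p&gt;

```python
from collections import deque

class Broker:
    def __init__(self):
        self.queues = {}

    def publish(self, queue, message):
        self.queues.setdefault(queue, deque()).append(message)
        return "ack"                    # the producer waits only for this

    def consume(self, queue):
        q = self.queues.get(queue)
        return q.popleft() if q else None

broker = Broker()
# The order service fires both messages and moves on immediately.
ack1 = broker.publish("email", {"order_id": 1})
ack2 = broker.publish("sms", {"order_id": 1})
# If the email and SMS services are down, the messages simply wait in
# their queues until each service comes back and consumes them.
```

&lt;p&gt;The producer never blocks on the email or SMS services themselves; it only sees the broker's acknowledgement.&lt;/p&gt;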

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CpEDzVnl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94gywvbxqwofgzl4ikns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CpEDzVnl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94gywvbxqwofgzl4ikns.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>distributedsystems</category>
      <category>rest</category>
      <category>kafka</category>
    </item>
    <item>
      <title>what is Ubiquitous Language</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Sun, 28 May 2023 18:24:44 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/what-is-ubiquitous-language-1h0o</link>
      <guid>https://forem.com/mohamedahmed00/what-is-ubiquitous-language-1h0o</guid>
      <description>&lt;p&gt;Ubiquitous language is the term Eric Evans uses in "Domain-Driven Design: Tackling Complexity in the Heart of Software" for a language shared by the whole team: developers, domain experts, and other participants. Domain experts and software developers work together to build a common language for the business areas being developed, and the effort involved in building the ubiquitous language helps spread deep domain insight among all team members. A bounded context is a conceptual boundary around a system: the ubiquitous language inside that boundary has a specific contextual meaning, and concepts outside of it can have different meanings. The language describes something within a specific context.&lt;/p&gt;

&lt;p&gt;So how do we find, explore, and capture this very special language? We can follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify key business processes, their inputs, and their outputs.&lt;/li&gt;
&lt;li&gt;Create a glossary of terms and definitions.&lt;/li&gt;
&lt;li&gt;Capture important software concepts with some kind of documentation.&lt;/li&gt;
&lt;li&gt;Share and expand upon the collected knowledge with the rest of the team (developers and domain experts).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ubiquitous language is used in discussions between developers and business people, and it appears in the domain model, in entities, and in value objects.&lt;/p&gt;

</description>
      <category>ddd</category>
      <category>softwareengineering</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>what is micro-services architecture</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Sat, 18 Mar 2023 17:13:49 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/what-is-micro-services-architecture-21i8</link>
      <guid>https://forem.com/mohamedahmed00/what-is-micro-services-architecture-21i8</guid>
      <description>&lt;p&gt;A micro-services architecture is a type of application architecture where the application is developed as a collection of services, each performing a discrete, well-defined function and deployed independently. Such services are easier to understand, easier to deploy, and can be shared across different systems.&lt;/p&gt;

&lt;p&gt;In terms of architecture, there are two sorts of application systems: the first is a monolith and the second is a distributed system such as micro-services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monoliths:&lt;/strong&gt;&lt;br&gt;
An application is composed of three components: a server, a client, and a database. The server processes client requests, implements business logic, stores or retrieves data in the database, and responds to the client.&lt;br&gt;
A monolith is a straightforward approach to building a server application: it is made up of classes and functions that combine all of your business logic into one running process.&lt;/p&gt;

&lt;p&gt;Monolithic systems are ideal for launching a business, since they are self-sufficient, offer fast internal communication, have requirements that are easy to define, and support a fast development cycle. However, when a business grows, monolithic systems become a challenge: they have low scalability and adaptability and require extensive maintenance. Even small errors at any point can cause the entire system to crash. They do not allow various teams to work independently and are constrained to being developed on a single technology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Micro-services&lt;/strong&gt;:&lt;br&gt;
The micro-service architecture is a solution to the challenges faced by monolithic systems (scaling problems). The system consists of multiple micro-services, each serving a unique function.&lt;br&gt;
Each service is built to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business-domain centric.&lt;/li&gt;
&lt;li&gt;Highly cohesive.&lt;/li&gt;
&lt;li&gt;Automated (DevOps cycle).&lt;/li&gt;
&lt;li&gt;Observable.&lt;/li&gt;
&lt;li&gt;Resilient.&lt;/li&gt;
&lt;li&gt;Autonomous.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Micro-services are built around specialized business capabilities. They have their own running process and can even have their own data storage. Different micro-services can employ different programming languages and different database approaches, and they can be deployed independently. Different teams in the company work on specific services, and updates result in newer versions of the affected micro-services rather than the entire system. At any point of error, only the affected service will fail; the rest of the system keeps working just fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why micro-service architecture?&lt;/strong&gt;&lt;br&gt;
Let's take an example:&lt;br&gt;
Suppose we have an e-commerce platform that receives two million requests per second. After analyzing this traffic, we found that 400K requests reach the product page, 100K requests reach the cart page, only 20K reach the checkout page, and 1,000 requests actually place an order.&lt;br&gt;
In a monolithic application we can achieve this scale if we design the app to be stateless, but we have to scale the whole system at once, so the cost will be high and deployment will be a challenge.&lt;br&gt;
In micro-services, we can scale each service on its own. The product service is not like the checkout service, so we might scale the product service horizontally to 50 replicas and checkout to only 2. (These numbers are not real; they are just an example.)&lt;br&gt;
In this case, the cost of deployment will be fair.&lt;/p&gt;
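&lt;p&gt;The arithmetic behind per-service scaling can be sketched as follows (the per-replica capacity is a made-up number; real capacity depends entirely on the workload):&lt;/p&gt;

```python
import math

CAPACITY_PER_REPLICA = 10_000      # hypothetical requests/second per replica

traffic = {                        # peak requests/second from the example
    "product":  400_000,
    "cart":     100_000,
    "checkout":  20_000,
    "order":      1_000,
}

# Each service gets only the replicas its own traffic needs, instead of
# replicating the entire monolith for every request type.
replicas = {svc: math.ceil(rps / CAPACITY_PER_REPLICA)
            for svc, rps in traffic.items()}
```

&lt;p&gt;With these assumed numbers, the product service scales to 40 replicas while the order service needs only 1, which is the cost saving a monolith cannot offer.&lt;/p&gt;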

&lt;p&gt;Many large corporations, such as Amazon, Netflix, Twitter, and PayPal, which began as monolithic systems, were at the forefront of this transformation. The modular approach is now widely adopted to handle complex applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of using micro-services:-&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have the freedom to use different programming languages, tools, technologies, and data-storing strategies based on the requirements of your service.&lt;/li&gt;
&lt;li&gt;Every micro-service is isolated, which results in fast defect isolation: a failure in one micro-service won't affect the rest of the system.&lt;/li&gt;
&lt;li&gt;You can easily integrate or transition to new technology for a specific micro-service without affecting the whole system.&lt;/li&gt;
&lt;li&gt;Micro-services are simpler to understand for new developers as compared to understanding the whole system.&lt;/li&gt;
&lt;li&gt;Different teams in an organization have the freedom to work independently without colliding with other teams.&lt;/li&gt;
&lt;li&gt;A service can be shared among different products, which saves a lot of resources for an organization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Drawbacks of using micro-services&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Troubleshooting can be a very tedious task in micro-services architecture.&lt;/li&gt;
&lt;li&gt;Handling the entire product might get difficult as the number of services increases.&lt;/li&gt;
&lt;li&gt;Developers need to design and build a lightweight, robust, and secure communication system for micro-services to talk to each other.&lt;/li&gt;
&lt;li&gt;Internal communication between micro-services increases the time taken by servers to respond to clients.&lt;/li&gt;
&lt;li&gt;Micro-services add complexity to the project; if the project's scope won't scale like the example above, don't move to micro-services.&lt;/li&gt;
&lt;li&gt;Development time suffers: micro-services are not a good fit when we need to build a prototype to test the business demand for an idea. We can start monolithic and move to micro-services later on.&lt;/li&gt;
&lt;li&gt;Communication between services is complex: since everything is now an independent service, you have to carefully handle requests traveling between your modules. Developers may be forced to write extra code to avoid disruption, and over time complications arise when remote calls experience latency.&lt;/li&gt;
&lt;li&gt;More services mean more resources, and multiple databases and transaction management can be painful.&lt;/li&gt;
&lt;li&gt;Micro-services belong in large-scale applications; if your problem is not scale, don't adopt them, to avoid adding a lot of complexity to the project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Micro-services development tools and technology:-&lt;/strong&gt;&lt;br&gt;
As micro-services architecture gained popularity, more and more tools and technologies have emerged to support this practice and improve the developer experience. Here are some of the best technologies for developing micro-services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Containerization:-&lt;br&gt;
One of the key traits of a micro-service is that it operates as autonomously as possible. To maintain their autonomy, services must be kept in an isolated environment with their own runtime. Containerization tools such as Docker, together with orchestrators such as Kubernetes, make this feasible. With an autonomous container-based micro-service architecture, you have the freedom to add, remove, scale, and replicate components as required by your business.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;API management&lt;br&gt;
With a growing group of micro-services comes the challenge of establishing safe connections between them. You certainly cannot risk exposing any of your micro-services to public networks. API management, using services like AWS API Gateway or Azure API Management, reduces the time required to build and manage API connections between micro-services. These services have capabilities like authentication and API monitoring, which can save developers months of time and work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Continuous integration, continuous deployment (CI/CD)&lt;br&gt;
If you're adopting micro-services architecture, you don't want a lengthy and costly release train where each team needs to wait their turn. You also don't want the release of one service to impact or be impacted by the release of another. You need continuous integration and continuous deployment pipelines for your micro-services. CI/CD platforms such as Jenkins and AWS CodePipeline provide easy-to-set-up, automated, high-velocity releases for your micro-services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Application performance monitoring (APM) tools&lt;br&gt;
Once your micro-service is designed, implemented, tested, and deployed, you're still not done. You must monitor system performance in order to improve user retention and rid your systems of faults and bottlenecks. Application performance monitoring solutions, such as AWS CloudWatch, Kibana, and Grafana, let you monitor all of your micro-services effortlessly and reduce your mean time to repair by detecting problems sooner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloud providers:-&lt;br&gt;
Managing multiple micro-services can be very challenging: you need a lot of containers, API management tools, multiple hosted databases, CI/CD pipelines, application performance monitoring tools, etc. But not to worry, most cloud service providers, like AWS, Azure, and IBM, provide all of this on their platforms, which makes managing micro-services a lot simpler.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Event driven architecture</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Thu, 16 Jun 2022 01:36:39 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/event-driven-architecture-40ob</link>
      <guid>https://forem.com/mohamedahmed00/event-driven-architecture-40ob</guid>
      <description>&lt;p&gt;Event-driven architecture is an architectural pattern for applications built around the production, detection, consumption of, and reaction to events. An event can be described as a change of state. For example, if a device is shut down and somebody opens it, the device's state changes from shut down to opened. The service that opens the device publishes this change as an event, and that event can then be seen by the rest of the services.&lt;/p&gt;

&lt;p&gt;An event notification is a message, produced, published, detected, or consumed asynchronously, that describes the state change caused by the event. It is important to understand that an event does not move around the application; it just happens. The term "event" is a little controversial because it usually refers to the event notification message rather than the event itself, so it is important to know the difference between the two. This pattern is commonly used in applications based on components or microservices, because it fits naturally into their design and implementation. An application driven by events has event creators and event consumers.&lt;/p&gt;

&lt;p&gt;An event creator is the producer of the event; it only knows that the event has occurred, nothing else. Then we have the event consumers, the entities responsible for reacting once the event is fired: consumers process the event or change state in response to it. The event consumers subscribe to some kind of middleware event manager which, as soon as it receives notification of an event from a creator, forwards the event to the registered consumers.&lt;br&gt;
Developing microservice applications around an architecture such as EDA allows them to be constructed in a way that facilitates responsiveness, because EDA applications are built for unpredictable and asynchronous environments.&lt;br&gt;
The advantages of using EDA are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decoupled systems: the creator service does not need to know about the rest of the services, and the rest of the services do not know about the creator, so the system stays decoupled.&lt;/li&gt;
&lt;li&gt;Publish/subscribe interaction: EDA allows many-to-many interactions, where services publish information about an event and other services consume that information and act on it. This enables many event creators and event consumers to exchange state and respond to information in real time.&lt;/li&gt;
&lt;li&gt;Asynchrony: EDA allows asynchronous interactions between services, so they do not need to wait for an immediate response, and they do not need a working connection while waiting for one.&lt;/li&gt;
&lt;/ul&gt;
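&lt;p&gt;The creator/consumer flow described above can be sketched as a minimal in-memory event bus. This is a toy sketch, not a real broker such as RabbitMQ; the class and event names are invented for the example:&lt;/p&gt;

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory event manager: creators publish, consumers subscribe."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        # Register a consumer for a given kind of event.
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Forward the event notification to every registered consumer.
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("device.opened", lambda event: received.append(event))
# The creator only announces that the state changed; it knows nothing
# about who is listening.
bus.publish("device.opened", {"device_id": 42, "state": "opened"})
print(received)  # [{'device_id': 42, 'state': 'opened'}]
```

&lt;p&gt;In a production system the bus would be an external broker and delivery would be asynchronous, but the decoupling is the same: the publisher and the subscribers never reference each other directly.&lt;/p&gt;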

</description>
      <category>eventdriven</category>
      <category>rabbitmq</category>
      <category>bus</category>
      <category>microservices</category>
    </item>
    <item>
      <title>what is Domain Driven Design ( ddd )</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Wed, 15 Jun 2022 01:13:58 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/what-is-domain-driven-design-ddd--412i</link>
      <guid>https://forem.com/mohamedahmed00/what-is-domain-driven-design-ddd--412i</guid>
      <description>&lt;p&gt;Domain-driven design (DDD) is an approach to software development for systems with complex needs.&lt;br&gt;
The concept is not new; it was introduced by Eric Evans in his book of the same title in 2004, but it is now very common in large projects.&lt;/p&gt;

&lt;p&gt;Eric Evans introduced some concepts that are necessary to understand how domain-driven design works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Context: the setting in which a word or statement appears that determines its meaning.&lt;/li&gt;
&lt;li&gt;Domain: a sphere of knowledge, influence, or activity. The subject area to which the user applies a program is the domain of the software.&lt;/li&gt;
&lt;li&gt;Model: a system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain.&lt;/li&gt;
&lt;li&gt;Ubiquitous language: a language structured around the domain model and used by all team members to connect all the activities of the team with the software.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The software domain is not related to technical terms, programming, or computers in any way. In most projects, the most challenging part is understanding the business domain, so DDD suggests using a domain model: abstract, ordered, and selective knowledge reproduced in a diagram, code, or just words.&lt;/p&gt;

&lt;p&gt;The domain model is like a roadmap for building projects with complex functionality, and five steps are needed to achieve it. These five steps must be agreed on by the development team and the domain expert:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Brainstorming and refinement: there should be a communication channel between the development team and the domain expert, so that everyone on the project can talk to everyone else, because they all need to know how the project should work.&lt;/li&gt;
&lt;li&gt;Draft domain model: during the conversation, start drawing a draft of the domain model so that it can be checked and corrected by the domain expert until both sides agree.&lt;/li&gt;
&lt;li&gt;Early class diagram: using the draft, start building an early version of the class diagram.&lt;/li&gt;
&lt;li&gt;Simple prototype: using the early class diagram and the draft domain model, build a very simple prototype. Evans suggests avoiding anything not related to the domain, to ensure that the business domain was modeled properly. It can be a very simple program, such as a trace.&lt;/li&gt;
&lt;li&gt;Prototype feedback: the domain expert interacts with the prototype to check whether all the needs are met, and then the entire team improves the domain model and the prototype.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The model, code, and design must evolve and grow together; they cannot fall out of sync. If a concept is updated in the model, it should also be updated in the code and in the design, and the same goes for the rest.&lt;br&gt;
A model is a system of abstractions that describes selected concepts of a domain and can be used to solve problems related to that domain. If a piece of the model is not reflected in the code, it should be removed.&lt;br&gt;
Finally, the domain model is the base of the common language in a project. This common language in DDD is called the ubiquitous language, and it should include the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Class names and their functions related to the domain.&lt;/li&gt;
&lt;li&gt;Terms to discuss the domain rules included in the model.&lt;/li&gt;
&lt;li&gt;Names of analysis and design patterns applied to the domain model.&lt;/li&gt;
&lt;/ul&gt;
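&lt;p&gt;As a toy illustration of the ubiquitous language showing up directly in code, here is a hypothetical shipping-domain entity. The names Cargo and mark_delivered are invented for this example; the point is that class and method names come straight from the language agreed with the domain expert, not from technical jargon:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Cargo:
    """Entity named after a term the domain expert actually uses."""
    tracking_id: str
    destination: str
    delivered: bool = False

    def mark_delivered(self):
        # "Delivering a cargo" is a phrase from the business domain,
        # so the method is named after it, not after a database operation.
        self.delivered = True

cargo = Cargo(tracking_id="CARGO-001", destination="Rotterdam")
cargo.mark_delivered()
print(cargo.delivered)  # True
```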

&lt;p&gt;The ubiquitous language should be used by all members of the project, including developers and domain experts, so the developers should be able to describe all tasks and functions with it.&lt;br&gt;
It is absolutely necessary to use this language in all discussions within the team, in meetings, diagrams, and documentation. This language is not born in the first iteration of the process; it can take many iterations of refactoring to keep the model, language, and code synchronized.&lt;br&gt;
If, for example, the developers discover that a class from the domain should be renamed, they cannot refactor it without also renaming it in the domain model and the ubiquitous language.&lt;br&gt;
The ubiquitous language, domain model, and code should evolve together as a single block of knowledge.&lt;br&gt;
There is a controversial concept in DDD. Eric Evans says that the domain expert must use the same language as the team, but some people do not like this idea.&lt;br&gt;
Usually, domain experts have no knowledge of object-oriented concepts because they are too abstract for non-developers. In any case, DDD says that if the domain expert does not understand the domain model, there is something wrong with it.&lt;br&gt;
There are diagrams in the domain model, but Evans suggests using text as well, because diagrams do not explain the concepts fully. The diagrams should also stay superficial; if you want more detail, you have the code for that.&lt;/p&gt;

&lt;p&gt;Some projects suffer from a broken connection between the domain model and the code. This happens when there is a division between analysis and design.&lt;br&gt;
The analysts make a model independent of the design, the developers cannot implement the functionality because some information is missing, and on top of that they cannot talk to the domain expert.&lt;br&gt;
The development team then stops following the model; in the end, the domain model is not updated, it stops working, and the project does not meet the requirements.&lt;br&gt;
To sum up, DDD treats software development as an iterative process of refining the model, design, and code as a single task in one block.&lt;/p&gt;

</description>
      <category>ddd</category>
      <category>architecture</category>
      <category>microservices</category>
      <category>soa</category>
    </item>
    <item>
      <title>Scatter gather pattern ?</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Sun, 01 May 2022 14:08:18 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/scatter-gather-pattern--3a1p</link>
      <guid>https://forem.com/mohamedahmed00/scatter-gather-pattern--3a1p</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6ITTVp3o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/447ms5x0p4tfcnfky4ju.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6ITTVp3o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/447ms5x0p4tfcnfky4ju.gif" alt="Image description" width="377" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The scatter-gather pattern is ideal for requesting responses from multiple subsystems and then using an aggregator to collect the responses and merge them into a single response message.&lt;br&gt;
It is a composite pattern that illustrates how to broadcast a message to multiple recipients and re-aggregate the responses back into a single message.&lt;/p&gt;

&lt;p&gt;For example, in the context of order processing, each order item that is not currently in stock could be supplied by one of several external suppliers. However, the suppliers may or may not have the item in stock; they may charge different prices and may be able to supply the part by different dates. To fill the order in the best way possible, quotes are requested from all suppliers and then a decision is made as to which one provides the best terms for the requested item.&lt;/p&gt;
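&lt;p&gt;The supplier-quote scenario above can be sketched with asyncio: scatter the quote request to every supplier concurrently, gather the responses, and aggregate them by picking the best price. The supplier names and prices here are made up, and the sleep stands in for a real network call:&lt;/p&gt;

```python
import asyncio

async def request_quote(supplier, item):
    # Stand-in for an asynchronous call to an external supplier API.
    await asyncio.sleep(0)
    prices = {"supplier_a": 95.0, "supplier_b": 87.5, "supplier_c": 102.0}
    return supplier, prices[supplier]

async def best_quote(item):
    suppliers = ["supplier_a", "supplier_b", "supplier_c"]
    # Scatter: broadcast the same request to all recipients concurrently.
    quotes = await asyncio.gather(*(request_quote(s, item) for s in suppliers))
    # Gather/aggregate: merge all responses into a single answer.
    return min(quotes, key=lambda quote: quote[1])

winner = asyncio.run(best_quote("widget"))
print(winner)  # ('supplier_b', 87.5)
```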

</description>
      <category>eventdriven</category>
      <category>microservices</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Command Query Responsibility Segregation (CQRS)</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Sat, 16 Apr 2022 20:17:38 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/command-query-responsibility-segregation-cqrs-p6c</link>
      <guid>https://forem.com/mohamedahmed00/command-query-responsibility-segregation-cqrs-p6c</guid>
      <description>&lt;p&gt;CQRS stands for command query responsibility segregation. It divides a system's actions into commands (writes) and queries (reads), and seeks an even more aggressive separation of concerns by splitting the model in two:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The write model:
also known as the command model, it performs the writes and takes responsibility for the true domain behavior.&lt;/li&gt;
&lt;li&gt;The read model:
it takes responsibility for the reads within the application and treats them as something that should be outside the domain model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every time someone issues a command to the write model, it performs the write against the desired data store. Additionally, it triggers an update of the read model so that the latest changes are reflected there.&lt;br&gt;
This strict separation introduces another problem: eventual consistency.&lt;br&gt;
The consistency of the read model is now subject to the commands performed by the write model; in other words, the read model is eventually consistent. Every time the write model performs a command, it kicks off a process responsible for updating the read model according to the latest changes in the write model.&lt;br&gt;
Think of a caching system in front of a web application: every time the database is updated with new information, the data in the cache layer may become invalid, so there must be a process that updates the cache after each write. Cache systems are eventually consistent.&lt;/p&gt;
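&lt;p&gt;A minimal sketch of this command/query split, assuming a synchronous projection step that stands in for the asynchronous update which makes the read model eventually consistent (all names here are illustrative):&lt;/p&gt;

```python
write_store = {}   # authoritative store, owned by the write model
read_store = {}    # denormalized view, served to queries only

def handle_create_order(order_id, price):
    # Command: goes through the write model only.
    write_store[order_id] = {"price": price}
    # In a real system this projection runs asynchronously (e.g. via an
    # event), which is exactly why reads briefly lag behind writes.
    project(order_id)

def project(order_id):
    # Projection: copy the latest write-model state into the read model.
    read_store[order_id] = {"price": write_store[order_id]["price"]}

def query_order(order_id):
    # Query: served entirely from the read model, never the write store.
    return read_store.get(order_id)

handle_create_order("order-1", 50)
print(query_order("order-1"))  # {'price': 50}
```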

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nzKeBv6D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucdkp8b254iyf8jmfsrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nzKeBv6D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucdkp8b254iyf8jmfsrm.png" alt="Image description" width="637" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separating write activity from read activity allows you to use the best database technology for the task at hand, for example, a SQL database for writing and a NoSQL database for reading.&lt;/li&gt;
&lt;li&gt;Read activity tends to be more frequent than write activity, so you can reduce response latency by placing read data sources in strategic geolocations for better performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supporting the CQRS pattern requires expertise in a variety of database technologies.&lt;/li&gt;
&lt;li&gt;Using the CQRS pattern means that more database technologies are required, hence more inherent cost, either in hardware or through a cloud provider.&lt;/li&gt;
&lt;li&gt;Ensuring data consistency requires special consideration.&lt;/li&gt;
&lt;li&gt;Using a large number of databases means more points of failure.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ddd</category>
      <category>cqrs</category>
      <category>microservices</category>
      <category>database</category>
    </item>
    <item>
      <title>What is RESTFUL API ?</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Fri, 04 Mar 2022 17:05:54 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/what-is-restful-api--4c5o</link>
      <guid>https://forem.com/mohamedahmed00/what-is-restful-api--4c5o</guid>
      <description>&lt;p&gt;RESTful API stands for representational state transfer, an architectural style used to communicate with APIs. As the name suggests, it is stateless; in other words, the service does not keep the state that was transferred, so if you call an API and send data, the same API will not remember that data the next time you call it. The state is kept by the client. A good example of this is an endpoint that requires a logged-in user: it is necessary to send the user credentials (username and password, or a token) with every request.&lt;br&gt;
Creating RESTful APIs will be easier for you and for the consumers if you follow some conventions. I have been using some recommendations for RESTful APIs and the results have been really good. They help to organize your application and ease its future maintenance, and your API consumers will thank you when they enjoy working with your application.&lt;/p&gt;

&lt;p&gt;Security in your RESTful API is always important, but it is especially important if your API is going to be consumed by people you do not know, in other words, if it is going to be available to everybody.&lt;br&gt;
Use SSL everywhere; it is important for the security of your API.&lt;br&gt;
There are many public places without an SSL connection where it is possible to sniff packets and steal other people's credentials.&lt;br&gt;
Use token authentication, and note that SSL is mandatory if you use a token to authenticate users. A token avoids sending the full credentials every time you need to identify the current user. If that is not possible, you can use OAuth2.&lt;/p&gt;

&lt;p&gt;It is important to follow these standards in your RESTful API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use JSON everywhere and avoid XML. If there is a standard for RESTful APIs, it is JSON: it is more compact, easily loaded in web languages, and easy for humans to read.&lt;/li&gt;
&lt;li&gt;Use camelCase instead of snake_case; it is easier to read.&lt;/li&gt;
&lt;li&gt;Use HTTP status codes for errors. There is a standard status for each situation, so use them instead of explaining every response of your API yourself.&lt;/li&gt;
&lt;li&gt;Include the version in the URL rather than in a header. The version needs to be in the URL so that resources remain explorable in the browser across versions.&lt;/li&gt;
&lt;li&gt;Use the right HTTP method for each situation (POST, GET, PUT, PATCH, DELETE).&lt;/li&gt;
&lt;li&gt;Avoid using sessions or saving state on the server.&lt;/li&gt;
&lt;li&gt;Avoid boolean flags to signal success or failure; use HTTP status codes instead.&lt;/li&gt;
&lt;/ul&gt;
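&lt;p&gt;A few of these conventions in one toy handler: JSON bodies, camelCase keys, and a standard status code instead of a boolean success flag. The order data and function names are invented for the example:&lt;/p&gt;

```python
from http import HTTPStatus
import json

# Hypothetical in-memory resource; keys use camelCase as recommended.
ORDERS = {"1": {"orderId": "1", "totalPrice": 50}}

def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        # 404 already says "not found"; no {"success": false} flag needed.
        return HTTPStatus.NOT_FOUND, json.dumps({"error": "order not found"})
    return HTTPStatus.OK, json.dumps(order)

status, body = get_order("1")
print(int(status), body)        # 200 {"orderId": "1", "totalPrice": 50}
print(int(get_order("99")[0]))  # 404
```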

</description>
      <category>api</category>
      <category>php</category>
      <category>graphql</category>
      <category>restful</category>
    </item>
    <item>
      <title>Deadlock in mysql</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Sat, 12 Feb 2022 16:36:44 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/deadlock-in-mysql-2aj5</link>
      <guid>https://forem.com/mohamedahmed00/deadlock-in-mysql-2aj5</guid>
      <description>&lt;p&gt;A deadlock in MySQL happens when two or more transactions are mutually holding and requesting locks on the same resources, creating a cycle of dependencies. Deadlocks occur when transactions try to lock resources in a different order. For example, consider these two transactions running against the orders table:&lt;/p&gt;

&lt;p&gt;Transaction #1&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;START TRANSACTION;
UPDATE orders SET price = 50 WHERE id = 2;
UPDATE orders SET price = 60 WHERE id = 6;
COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Transaction #2&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;START TRANSACTION;
UPDATE orders SET price = 60 WHERE id = 6;
UPDATE orders SET price = 50 WHERE id = 2;
COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you are unlucky, each transaction will execute its first query and update a row of data, locking it in the process. Each transaction will attempt to update its second row, only to find that it is already locked. The two transactions will wait forever for each other to complete, unless something intervenes to break the deadlock.&lt;/p&gt;

&lt;p&gt;To solve this problem, database systems implement various forms of deadlock detection and timeouts. The InnoDB storage engine notices circular dependencies and returns an error instantly, which can be a good thing: otherwise, deadlocks would manifest as very slow queries. Other systems give up after a query exceeds a lock wait timeout, which is not always good. The way InnoDB currently handles deadlocks is to roll back the transaction that holds the fewest exclusive row locks.&lt;/p&gt;

&lt;p&gt;Lock behavior and order are storage-engine specific, so some storage engines might deadlock on a certain sequence of statements even though others won't.&lt;br&gt;
Deadlocks have a dual nature:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some are unavoidable because of true data conflicts.&lt;/li&gt;
&lt;li&gt;Some are caused by how a storage engine works.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deadlocks cannot be broken without rolling back one of the transactions, either partially or wholly. They are a fact of life in transactional systems, and your applications should be designed to handle them. Many applications can simply retry their transactions from the beginning.&lt;/p&gt;
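&lt;p&gt;The retry-from-the-beginning approach can be sketched generically: run the transaction as a callable and restart it when a deadlock is reported. The exception type below is a stand-in for whatever deadlock error your database driver raises, not a real driver class:&lt;/p&gt;

```python
import time

class DeadlockError(Exception):
    """Stand-in for a driver-specific deadlock exception."""

def run_with_retry(transaction, max_attempts=3):
    # Retry the whole transaction from the beginning on deadlock,
    # with a short backoff between attempts.
    for attempt in range(1, max_attempts + 1):
        try:
            return transaction()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            time.sleep(0.01 * attempt)

calls = {"n": 0}
def flaky_transaction():
    # Simulate a transaction that deadlocks once, then succeeds on retry.
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError("deadlock found when trying to get lock")
    return "committed"

print(run_with_retry(flaky_transaction))  # committed
```

&lt;p&gt;The key point is that the retry restarts the entire transaction, since the rolled-back transaction's earlier statements no longer hold.&lt;/p&gt;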

</description>
      <category>mysql</category>
      <category>database</category>
      <category>sql</category>
      <category>postgres</category>
    </item>
    <item>
      <title>CAP Theorem</title>
      <dc:creator>mohamed ahmed</dc:creator>
      <pubDate>Wed, 09 Feb 2022 15:15:42 +0000</pubDate>
      <link>https://forem.com/mohamedahmed00/cap-theorem-3idi</link>
      <guid>https://forem.com/mohamedahmed00/cap-theorem-3idi</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jFQ5BbVS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tvhptxo7517o7kiszond.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jFQ5BbVS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tvhptxo7517o7kiszond.jpeg" alt="Image description" width="473" height="374"&gt;&lt;/a&gt;&lt;strong&gt;What is the CAP Theorem?&lt;/strong&gt;&lt;br&gt;
The CAP theorem is a theorem about distributed computing systems; it has been stated in various forms over the years. The original statement of the theorem by Eric Brewer states that a computer system can at best provide two of the three properties from the following list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Consistency: means that the nodes will have the same copies of a replicated data item visible for various transactions. A guarantee that every node in a distributed cluster returns the same, most recent, successful write. Consistency refers to every client having the same view of the data. There are various types of consistency models. Consistency in CAP refers to sequential consistency, a very strong form of consistency. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Availability means that each read or write request for a data item will either be processed successfully or will receive a message that the operation cannot be completed. Every non failing node returns a response for all read and write requests in a reasonable amount of time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Partition tolerance means that the system can continue operating if the network connecting the nodes has a fault that results in two or more partitions, where the nodes in each partition can only communicate among each other. That means, the system continues to function and upholds its consistency guarantees in spite of network partitions. Network partitions are a fact of life. Distributed systems guaranteeing partition tolerance can gracefully recover from partitions once the partition heals. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can classify database systems by the pair of properties they favor:&lt;br&gt;
1- Consistency and Availability: SQL Server, MySQL, PostgreSQL.&lt;br&gt;
2- Consistency and Partition tolerance: MongoDB, Redis.&lt;br&gt;
3- Partition tolerance and Availability: Cassandra, CouchDB.&lt;/p&gt;

</description>
      <category>database</category>
      <category>mysql</category>
      <category>postgres</category>
      <category>mongodb</category>
    </item>
  </channel>
</rss>
