<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Michał Kurzeja</title>
    <description>The latest articles on Forem by Michał Kurzeja (@mkurzeja).</description>
    <link>https://forem.com/mkurzeja</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F819020%2F1536b4ab-f304-4f88-bbbb-836bd96ba26e.jpeg</url>
      <title>Forem: Michał Kurzeja</title>
      <link>https://forem.com/mkurzeja</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mkurzeja"/>
    <language>en</language>
    <item>
      <title>Insights from the PHP Foundation Executive Director</title>
      <dc:creator>Michał Kurzeja</dc:creator>
      <pubDate>Mon, 14 Apr 2025 11:22:51 +0000</pubDate>
      <link>https://forem.com/accesto/insights-from-the-php-foundation-executive-director-33g9</link>
      <guid>https://forem.com/accesto/insights-from-the-php-foundation-executive-director-33g9</guid>
      <description>&lt;p&gt;I recently interviewed &lt;a href="https://www.linkedin.com/in/pronskiy/" rel="nofollow noopener noreferrer"&gt;Roman Pronskiy&lt;/a&gt;, who works at &lt;a href="https://www.jetbrains.com/" rel="nofollow noopener noreferrer"&gt;JetBrains&lt;/a&gt; as the Executive Director of the &lt;a href="https://thephp.foundation/" rel="nofollow noopener noreferrer"&gt;PHP Foundation&lt;/a&gt; and has been a PHP dev since 2010. In this interview, Roman shared his insights and vision for PHP. Roman’s experience offers a unique perspective on the evolution and future of PHP. In this post, I will sum up the topics we discussed, from the Foundation’s milestones and ongoing projects to what lies ahead for the PHP community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxlmu09lhpbzmggobe3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxlmu09lhpbzmggobe3k.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Emergence of the PHP Foundation
&lt;/h2&gt;

&lt;p&gt;Firstly, I wanted to hear directly from Roman how the PHP Foundation came to be: &lt;/p&gt;

&lt;p&gt;“JetBrains hired Nikita Popov around 2018 and worked closely with him on modernizing PHP. They realised that the current model for supporting the PHP language was not sustainable. Despite PHP being widely used across the globe — from small projects to giant enterprises — the responsibility for its maintenance and development fell on the shoulders of only a handful of people. Recognising this imbalance and its potential risks, JetBrains decided to initiate the project, ensuring PHP would have the backing of a larger, more sustainable group of contributors.” — shares Roman.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Evolution of Confidence in PHP’s Future
&lt;/h2&gt;

&lt;p&gt;I asked Roman how his perspective has evolved since the PHP Foundation’s inception. His first post on the topic was full of uncertainty about PHP’s future because of Nikita’s departure, so it was interesting to see the change in perception: “Absolutely, we started because we saw the problem and we needed to act. We did, and now there are no doubts about PHP’s bright future. PHP is turning 30 next year, and it’s as good as immortal at that point (laughs). I see a lot of new faces joining PHP and a lot of excitement in the community.”&lt;/p&gt;

&lt;h2&gt;
  
  
  PHP Foundation’s Initial Steps
&lt;/h2&gt;

&lt;p&gt;Naturally, the Foundation's road to where it is now was quite long, so I asked about its humble beginnings: “The Foundation’s primary focus in its early days was straightforward: pay developers to maintain the language.” Roman emphasized, “Right now, we maintain a great pace to deliver big features every year.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Milestones Achieved in Three Years
&lt;/h2&gt;

&lt;p&gt;Later, our discussion drifted to the Foundation’s most memorable achievements during those three years. “For the first two years of the PHP Foundation, we were figuring out the basics: what we want to achieve, how we would proceed with that goal, and who should be hired for that job.” — said Roman.&lt;/p&gt;

&lt;h3&gt;
  
  
  Property Hooks and PIE Tool
&lt;/h3&gt;

&lt;p&gt;Where the Foundation truly started to shine is in its third year: “These were much-needed features that nobody wanted to tackle for years,” Roman explained. PIE, a tool designed to simplify the installation of PHP extensions such as Xdebug, significantly enhances the developer experience.&lt;/p&gt;

&lt;p&gt;“Extensions are now very close to being like packages; they basically look like &lt;a href="https://getcomposer.org/" rel="nofollow noopener noreferrer"&gt;Composer&lt;/a&gt; packages. It’s still open to discussion whether PIE will be part of Composer someday. It’s not decided yet, but I hope it will be,” Roman added.&lt;/p&gt;

&lt;p&gt;These and other quality-of-life improvements have made the current state of the language much, much better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Audits
&lt;/h3&gt;

&lt;p&gt;With support from a German government-backed agency, the Foundation conducted extensive security audits, identifying and addressing some flaws. A public report will soon detail these findings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modernizing PHP.net website
&lt;/h3&gt;

&lt;p&gt;The PHP Foundation not only focuses on PHP core development but also does its best to ensure the growth of PHP's popularity. It is hard to discuss a language's future without considering how it is seen by new developers joining the market, or returning developers assessing whether PHP would be a good choice. &lt;a href="https://www.php.net/" rel="nofollow noopener noreferrer"&gt;PHP.net&lt;/a&gt; is supposed to be the “face” of the language, so we started this discussion there:&lt;/p&gt;

&lt;p&gt;“For years, the official PHP website didn't even mention popular 3rd party tools like Composer that nearly everyone uses. That's finally changing.” Besides that, Roman highlights incremental improvements, including better navigation, interactive code examples, analytics, and a new “Why PHP” page.&lt;/p&gt;

&lt;p&gt;I wanted to know more about the changes that were made to php.net and those that are still in the plans. So I kept asking, and I got some interesting information: &lt;/p&gt;

&lt;p&gt;“Recently, we went through many code examples on the page and checked them for possible vulnerabilities. We would also like to add more marketing to the website, including some case studies of companies that used PHP to great success and other mentions, examples, and statistics of PHP success. If you think about it — their success is also partly ours, and few people know how many great products are built with the help of PHP.”&lt;/p&gt;

&lt;p&gt;On a side note, it’s obviously still an open-source project. As I learned during the interview, the top bar of php.net was recently changed: the change was suggested by one enthusiast who spent a couple of months working on it before it was approved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overcoming Challenges: AI and Performance
&lt;/h2&gt;

&lt;p&gt;I was also wondering what the Foundation identifies as the biggest challenges for PHP right now. Roman shared his thoughts, acknowledging the dual challenges of AI integration and performance:&lt;/p&gt;

&lt;p&gt;“We cannot ignore AI and the industry trends,” he emphasises. “While PHP isn't primarily an AI language, it is crucial to ensure it works seamlessly with AI technologies. This includes improving HTTP interfaces and API handling capabilities. For example, &lt;a href="https://wiki.php.net/rfc/curl_share_persistence_improvement" rel="nofollow noopener noreferrer"&gt;persistent curl handles&lt;/a&gt; were implemented for PHP 8.5, which could significantly improve performance for applications making frequent API calls.”&lt;/p&gt;

&lt;p&gt;On performance, he cites community-driven initiatives like FrankenPHP, which combines PHP with a Go web server. “What we want is to enable the community and unblock them on any issues so they can build amazing things on PHP. We are constantly in direct contact with products like FrankenPHP and if they encounter any issues — we prioritise resolving them”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generics in PHP
&lt;/h2&gt;

&lt;p&gt;The question of Generics in PHP comes up at basically every PHP event I have attended; clearly, it is a topic on the minds of many PHP developers, so it was worth discussing. Responding to the community’s interest in Generics, Roman explains:&lt;br&gt;
“&lt;a href="https://thephp.foundation/blog/2024/08/19/state-of-generics-and-collections/" rel="nofollow noopener noreferrer"&gt;We’ve invested heavily in researching&lt;/a&gt; how to tackle this. Our goal is to find the best solution for the community.”&lt;/p&gt;

&lt;h2&gt;
  
  
  PHP Foundation plans on scaling
&lt;/h2&gt;

&lt;p&gt;I was interested in knowing if there are any plans to scale the Foundation's size in the near future. &lt;/p&gt;

&lt;p&gt;“I would like to, but that definitely comes with challenges. First and foremost, there is the financial challenge. We are trying to plan the budget at least two years ahead to give our developers a sense of security. Second, as of now, we have 10 developers on board. Growing beyond that number will most likely require an addition to the leadership part of the team.&lt;/p&gt;

&lt;p&gt;But for now, we would like to hire additional developers only for specific projects for a year or two. A good example would be the delivery of Generics we discussed prior. We also want to experiment with Rust in the PHP core, and for that we would need someone extremely experienced in that specific area,” Roman concludes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring Rust Integration in PHP Core
&lt;/h2&gt;

&lt;p&gt;Intrigued, I wanted to know more about the reasoning for adding Rust to the PHP core. &lt;/p&gt;

&lt;p&gt;“This is something we might like to try in 2025. It’s not about performance; it’s about safety and attracting more engineers. Starting with support for Rust extensions: you will be able to write extensions in Rust, install them with PIE, and treat them as regular packages.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Supporting the PHP Foundation
&lt;/h2&gt;

&lt;p&gt;As a PHP developer myself and a co-founder of a company employing more than a dozen other PHP developers, I appreciate the Foundation's work and want to draw more attention to the ways companies or individuals can support it. So, I steered the discussion in this direction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Financial Contributions
&lt;/h3&gt;

&lt;p&gt;“The first and obvious answer to anyone who wants to help the PHP Foundation — &lt;a href="https://thephp.foundation/sponsor/" rel="nofollow noopener noreferrer"&gt;You can always support us financially&lt;/a&gt;. All of our expenses are fully transparent and are &lt;a href="https://opencollective.com/phpfoundation#category-BUDGET" rel="nofollow noopener noreferrer"&gt;published on our website&lt;/a&gt;.” — Roman starts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Spreading the Word
&lt;/h3&gt;

&lt;p&gt;“Articles, like yours, bring attention to our cause, help other developers and improve the community. We are also always happy to repost any valuable content on our social media to help those who support us.”&lt;/p&gt;

&lt;p&gt;Roman also adds, “If you don’t have a direct line of communication with us — write public feedback, critique, suggestions, or anything that you find important. We would also happily share detailed case studies on how you used PHP to achieve success on our social media, as I mentioned previously — your product success is also partly ours.” &lt;/p&gt;

&lt;h3&gt;
  
  
  Providing Feedback
&lt;/h3&gt;

&lt;p&gt;“Lastly, feedback from people like you helps us a lot. If you think about it, PHP core developers are not PHP developers themselves. They have a rough understanding of what they want to improve in the language, but I would bet that experienced PHP developers like you will have much-needed input for them to consider and address the issues better. &lt;/p&gt;

&lt;p&gt;With the help of analytics on &lt;a href="https://www.jetbrains.com/phpstorm/" rel="nofollow noopener noreferrer"&gt;PhpStorm&lt;/a&gt; and php.net, we are getting some information, but the real feedback is even more helpful. I am constantly in discussions with PHP developers, and I am sure I will have some for you too, later on. A client can come to you and ask if they should upgrade to PHP 8.4 or rewrite the project in Golang, which is also the worst decision they can ever make. But if such concerns arise in your clients, you know where they stem from, and that’s extremely valuable information for us.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Final questions
&lt;/h2&gt;

&lt;p&gt;Wrapping up the interview, I wanted to know Roman’s favourite sources for keeping up to date with PHP news: “The research I do for my newsletter &lt;a href="" rel="nofollow"&gt;PHP Annotated&lt;/a&gt; is usually more than enough. I get the information by following and directly talking to cool people in the community on Twitter (now X), Bluesky, or Mastodon. This newsletter should cover all the news of the ecosystem,” said Roman.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As Roman Pronskiy aptly puts it, “PHP is as good as immortal.” As PHP approaches its 30th anniversary, the language continues to evolve at a never-before-seen pace — now under the stewardship of the PHP Foundation. With a focus on innovation, security, and community engagement, the future of PHP shines brighter than ever. &lt;/p&gt;

</description>
      <category>php</category>
    </item>
    <item>
      <title>Queueing in multi-tenant SaaS systems. How to ensure its fairness</title>
      <dc:creator>Michał Kurzeja</dc:creator>
      <pubDate>Wed, 13 Sep 2023 08:27:02 +0000</pubDate>
      <link>https://forem.com/accesto/queueing-in-multi-tenant-saas-systems-how-to-ensure-its-fairness-509l</link>
      <guid>https://forem.com/accesto/queueing-in-multi-tenant-saas-systems-how-to-ensure-its-fairness-509l</guid>
      <description>&lt;p&gt;Queueing in multi-tenant SaaS systems is often introduced to improve the overall platform stability, improve the user experience and scale the system. When you browse the internet or ask some developers, it will often be mentioned as an easy-to-implement solution. Well, it is not always that way, and there are sometimes some pitfalls that you can encounter while doing this step.&lt;/p&gt;

&lt;h2&gt;
  
  
  What multi-tenancy and single-tenancy actually mean
&lt;/h2&gt;

&lt;p&gt;Let's first discuss some basics, so that we have a common understanding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SaaS (Software as a Service) — this is probably self-explanatory as selling software in this form is pretty common now. But quoting Wikipedia:

&lt;ul&gt;
&lt;li&gt;SaaS is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. SaaS is also known as on-demand software, web-based software, or web-hosted software&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Multi-tenant — this means that the SaaS software you offer serves multiple tenants/multiple customers using one instance. It does not matter if your instance is a server, multiple servers or serverless; the important thing is that you have a shared infrastructure. Most SaaS companies run using this model, as it is way easier to manage and lowers the overall maintenance costs. In fact, it only makes sense for very specialized, high-priced services not to be multi-tenant.&lt;/li&gt;
&lt;li&gt;Single-tenant — opposite to multi-tenant. This means that in order to onboard a new customer, your tech team needs to set up a new instance of the app. Each tenant can also have a different version of the app (although maintaining too many differences often gets too expensive and hard to manage). This means, that in a single-tenancy system, each new customer runs a separate software instance. Each tenant can have multiple users that share the same instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Single-tenant vs multi-tenant SaaS architecture
&lt;/h2&gt;

&lt;p&gt;The decision between multi-tenancy and single-tenant architecture is out of the scope of this article, but the basic rule of thumb is to choose what is more important for you: the ease of managing multiple customers (probably hundreds or thousands of them), or the ability to adjust every single tenant with custom code changes. Other decision drivers for a single-tenant architecture would be requirements for separate computing resources, data isolation, data security, or specific legal constraints. In general, something we could summarize as tenant isolation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the problem with queueing in multi-tenant systems?
&lt;/h2&gt;

&lt;p&gt;Let us imagine a system with a long-running process that clients execute. It can be anything: processing a file (importing a CSV, converting a video format), crawling a page, etc.&lt;/p&gt;

&lt;p&gt;Now, in such systems it can happen that one customer queues so many tasks that they pile up and form a long waiting line.&lt;/p&gt;

&lt;p&gt;If Client 1 queues 1000 such tasks, all other customers (2, 3, etc.) will have to wait until those 1000 tasks are done. This happens because queues are by default FIFO — first in, first out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3kPK_yEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61eeefi3d9oj0476sowq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3kPK_yEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61eeefi3d9oj0476sowq.png" alt="Default FIFO queue" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, Client 2 will wait for ages to get a single task done, will quickly get angry, and can potentially cancel the subscription. Why would they wait an hour or two for one simple conversion to finish?&lt;/p&gt;

&lt;p&gt;So we have one customer degrading the user experience of many customers by consuming all the available computing resources.&lt;/p&gt;
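&lt;p&gt;To make the problem concrete, here is a minimal sketch (in Python, purely illustrative) of how a single burst pushes everyone else to the back of a plain FIFO queue:&lt;/p&gt;

```python
from collections import deque

# A plain FIFO queue: tasks are processed strictly in arrival order.
queue = deque()

# Client 1 enqueues a burst of 1000 tasks...
for i in range(1000):
    queue.append(("client-1", f"task-{i}"))

# ...then Client 2 enqueues a single task.
queue.append(("client-2", "task-0"))

# Client 2's only task is processed after all 1000 earlier tasks.
position = list(queue).index(("client-2", "task-0")) + 1
print(position)  # 1001
```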

&lt;h2&gt;
  
  
  What is fair processing in multi-tenant architecture
&lt;/h2&gt;

&lt;p&gt;Before we jump into more technical discussions, let's quickly consider what we think is fair.&lt;/p&gt;

&lt;p&gt;The quick answer is that one client causing a big load should not cause harm to other clients. So we should first process Client 2 and Client 3 from the image, and then continue with Client 1's tasks.&lt;/p&gt;

&lt;p&gt;But to be honest, in lots of SaaS products we have different tiers of clients. Not all clients are equal. A client on the lowest tier should probably have a bit less priority than a client on an enterprise tier. This could mean, for example, that our business decision is to handle clients on the top tier four times faster than those on the lowest one. This does not mean we process all tasks of the top tier first; we just process them a bit faster when the queue is full.&lt;/p&gt;

&lt;p&gt;This might be considered unfair, but I would like to have my target multi-tenant architecture handle such cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-tenant SaaS architecture — things to consider
&lt;/h2&gt;

&lt;p&gt;In order to properly implement a multi-tenant architecture we need to consider the following related problems:&lt;/p&gt;

&lt;h3&gt;
  
  
  Single-tenant vs multi-tenant — data isolation
&lt;/h3&gt;

&lt;p&gt;When you switch from a single-tenant to a multi-tenant approach, you need to consider how you partition your data. Your application now stores data from multiple tenants/multiple customers in a single software instance, but one tenant's data must not be shared with another!&lt;/p&gt;

&lt;p&gt;In short, the three multi-tenancy models for data storage are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Silo - when each tenant has a separate instance of the storage, e.g. a separate database server.&lt;/li&gt;
&lt;li&gt;Bridge - when each client's data is stored on the same database server, but in a separate schema.&lt;/li&gt;
&lt;li&gt;Pool - when all clients share the same database (including the schema), but the tables have columns indicating which tenant each row belongs to.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IFWYKInU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8grdb4u8qjzmdd5lhpb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IFWYKInU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8grdb4u8qjzmdd5lhpb7.png" alt="Three different multi-tenancy models for data storage" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the Silo approach is quite close to running a single-tenant architecture. In fact, for the storage you need to create a separate database instance for each new customer. This can be automated and is a bit easier to manage than running a separate instance of the software per customer, but it still requires quite a lot of work and raises the maintenance costs. When you need to go the Silo way, it's good to consult cloud service providers — they usually have articles explaining how to implement dedicated storage instances using their cloud services.&lt;/p&gt;

&lt;p&gt;This is quite an interesting discussion to have when planning your multi-tenant architecture. You can read more about it, e.g., in &lt;a href="https://d0.awsstatic.com/whitepapers/Multi_Tenant_SaaS_Storage_Strategies.pdf" rel="nofollow"&gt;this Amazon whitepaper&lt;/a&gt;.&lt;/p&gt;
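&lt;p&gt;As a quick illustration of the Pool model, here is a minimal Python/SQLite sketch (the table and column names are invented for the example). The key point is that every query must filter by the tenant column; forgetting that filter is the classic data-leak bug in this model:&lt;/p&gt;

```python
import sqlite3

# "Pool" model sketch: one shared table, with a tenant_id column
# providing logical (not physical) isolation between tenants.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, tenant_id INTEGER, payload TEXT)"
)
rows = [(1, "export"), (1, "import"), (2, "import")]
conn.executemany("INSERT INTO tasks (tenant_id, payload) VALUES (?, ?)", rows)

def tasks_for_tenant(tenant_id):
    # Every query must include the tenant_id filter.
    cur = conn.execute(
        "SELECT payload FROM tasks WHERE tenant_id = ? ORDER BY id", (tenant_id,)
    )
    return [row[0] for row in cur]

print(tasks_for_tenant(1))  # ['export', 'import']
print(tasks_for_tenant(2))  # ['import']
```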

&lt;h3&gt;
  
  
  Noisy neighbour
&lt;/h3&gt;

&lt;p&gt;Because multi-tenant SaaS clients share the same hardware resources, the activity of one tenant can have a negative impact on the others. I already touched on the queueing part of it, but the same issue applies to other resources. For example, a client executing many heavy actions that are not queued can cause a partial system outage that also hits other users. In this case, even a user who is not causing a big load on the system will encounter slow response times, errors, etc. This happens not only when all tenants use a single database instance, but even when they only share the same software instance, as the CPU resources are shared.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yJ3e3TBH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgxlu3r14oacqnpc7dy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yJ3e3TBH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgxlu3r14oacqnpc7dy2.png" alt="Total system capacity multi-tenant problem" width="423" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Uzd4IhLW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dszdpr28yqzv6fhbpp9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Uzd4IhLW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dszdpr28yqzv6fhbpp9a.png" alt="Insuficient CPU resources for the SaaS queue" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The figure shows 3 tenants, each consuming less than the maximum throughput of the solution. In total, however, the three tenants consume the complete system capacity.&lt;/p&gt;

&lt;p&gt;You can read more about it in &lt;a href="https://learn.microsoft.com/en-us/azure/architecture/antipatterns/noisy-neighbor/noisy-neighbor" rel="nofollow"&gt;this Microsoft Documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to fix multi-tenant queueing?
&lt;/h2&gt;

&lt;p&gt;While researching the web, I found a couple of different solutions; let us quickly go through them and discuss the pros and cons.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Add more workers
&lt;/h3&gt;

&lt;p&gt;This can definitely be considered first aid if the problem hits you. If you have a lot of consumers, processing is faster and waiting times are lower. But let us be honest — this won't solve the problem. One client can still block the processing for others, and there is usually a limit to the number of consumers you can run. Next to that, if the processing uses external systems/APIs, you can easily hit their rate limits.&lt;/p&gt;

&lt;p&gt;It is worth mentioning that even when you use cloud computing with virtual machines that you can scale easily, there is usually a hard limit enforced by the cloud platform you use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CzgA5eRW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cam0ui0nq0x4hl97ytq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CzgA5eRW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cam0ui0nq0x4hl97ytq9.png" alt="More queue workers" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Request throttling
&lt;/h3&gt;

&lt;p&gt;Did I just mention that an external system could have a rate limit? Yup. When we hear "rate limits", we usually think about APIs, but we could introduce something similar in our own app. Not always, but in some business flows, you can show an alert telling the user that they have added too many tasks and, because of that, need to wait a bit, or... just upgrade to a higher tier ;)&lt;/p&gt;

&lt;p&gt;This does not solve the queue issue itself, but it could help limit the noisy-tenant problem. You can set limits for tasks added within a minute, hour, day, etc. Not perfect, but I think it's worth considering.&lt;/p&gt;
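&lt;p&gt;A minimal sketch of such a per-tenant limit (a fixed-window counter in Python; the limit and window values here are arbitrary) could look like this:&lt;/p&gt;

```python
import time
from collections import defaultdict

# Hypothetical per-tenant fixed-window throttle: reject new tasks once
# a tenant exceeds `limit` enqueues within `window` seconds.
class TenantThrottle:
    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        # tenant -> [window_start_time, count_in_window]
        self.counters = defaultdict(lambda: [0.0, 0])

    def allow(self, tenant_id, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counters[tenant_id]
        if now - start >= self.window:       # window expired: start a new one
            self.counters[tenant_id] = [now, 1]
            return True
        if count >= self.limit:              # over the limit: tell the user to wait
            return False
        self.counters[tenant_id][1] += 1
        return True

throttle = TenantThrottle(limit=2, window=60.0)
print(throttle.allow("tenant-a", now=0.0))  # True
print(throttle.allow("tenant-a", now=1.0))  # True
print(throttle.allow("tenant-a", now=2.0))  # False -- throttled
print(throttle.allow("tenant-b", now=2.0))  # True  -- other tenants unaffected
```

A real implementation would live in shared storage (e.g. Redis) rather than process memory, but the decision logic is the same.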

&lt;h4&gt;
  
  
  2.1 Request throttling with priority
&lt;/h4&gt;

&lt;p&gt;Priority, connected with the tier the tenant is on, can actually be added to many of the described solutions. If you combine tenant priority with a rate limit, it can lead to some interesting results. Most queues support message priority, which allows enterprise customers to be processed a bit faster.&lt;/p&gt;
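&lt;p&gt;As an illustration, here is a sketch of tier-based message priority using an in-memory heap in Python (the tier names and mapping are made up). Note that this is strict priority; the "four times faster" weighting mentioned earlier would need a weighted polling scheme instead:&lt;/p&gt;

```python
import heapq
import itertools

# Lower number = served first; the counter preserves FIFO order within a tier.
TIER_PRIORITY = {"enterprise": 0, "pro": 1, "free": 2}  # hypothetical tiers
counter = itertools.count()
pq = []

def enqueue(tenant, tier, task):
    heapq.heappush(pq, (TIER_PRIORITY[tier], next(counter), tenant, task))

enqueue("t1", "free", "convert-video")
enqueue("t2", "enterprise", "import-csv")
enqueue("t3", "free", "crawl-page")

# Pop everything: the enterprise tenant jumps ahead of the earlier free task.
order = [heapq.heappop(pq)[2] for _ in range(len(pq))]
print(order)  # ['t2', 't1', 't3']
```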

&lt;h3&gt;
  
  
  3. Sharding
&lt;/h3&gt;

&lt;p&gt;In this case, you simply split the tasks into multiple queues (each queue can serve multiple tenants). A simple solution could be based on the client number, like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Tos_Aj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6c9pmlcia7yuupyvlnci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Tos_Aj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6c9pmlcia7yuupyvlnci.png" alt="Queue sharding for SaaS" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the example, I split the clients into 3 different queues. In this case, when Client 1 starts to queue too many tasks, it will "only" affect a third of the other tenants. This does not fix the problem, but it definitely makes it a bit less painful (at least for some users).&lt;/p&gt;
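&lt;p&gt;The routing itself can be as simple as a modulo on the client id, as in this illustrative Python sketch:&lt;/p&gt;

```python
# Naive sharding: route each client to one of three queues by client id,
# as in the diagram above.
NUM_SHARDS = 3
shard_queues = [[] for _ in range(NUM_SHARDS)]

def shard_for(client_id):
    # Same client always lands on the same shard.
    return client_id % NUM_SHARDS

for client_id in [1, 2, 3, 4, 5, 6]:
    shard_queues[shard_for(client_id)].append((client_id, "task"))

# Clients 1 and 4 share a shard, so a burst from Client 1 now only
# delays the tenants on that shard, not everyone.
print([len(q) for q in shard_queues])  # [2, 2, 2]
```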

&lt;h4&gt;
  
  
  3.1 Sharding, but better
&lt;/h4&gt;

&lt;p&gt;This is something I did not see in any articles, but I've learned from one of our projects: you can actually come up with better sharding keys than the client id.&lt;/p&gt;

&lt;p&gt;As an example, we could have a default queue for processing tasks, plus a dedicated queue for our "enterprise" tenants. Next to that, we can add queues for tenants known for high load and process them alongside the default queue.&lt;/p&gt;

&lt;p&gt;In the project I worked on, there was a special "spike" queue activated by a rate-limiting mechanism. If the system detected that a tenant was queueing too many tasks, that tenant was moved to the "spike" queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BSAEn1GX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aiga79wfhew1w3ragp1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BSAEn1GX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aiga79wfhew1w3ragp1x.png" alt="Our approach to queue sharding" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Multiple instances - per tenant queue
&lt;/h3&gt;

&lt;p&gt;Easy, elegant, powerful, and not always possible. Just create an automation (using CloudFormation or Terraform) that sets up a queue for each new tenant. You can consider this the &lt;code&gt;Silo&lt;/code&gt; approach described in the data partitioning part above.&lt;/p&gt;

&lt;p&gt;This is probably the perfect solution if you do not have too many tenants and do not expect too many new ones each month or day.&lt;/p&gt;

&lt;p&gt;In our case, we had thousands of tenants registering every week, so a per-tenant queue was not an option. Not only would it be a nightmare to manage, we would also hit the SQS queue limit within one week.&lt;/p&gt;

&lt;h4&gt;
  
  
  4.1 Processing this amount of queues
&lt;/h4&gt;

&lt;p&gt;Another question is how to handle that many queues. Well, there are at least two options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If the customers pay enough, you could run a consumer for each of them. Not perfect, but I think it would fit the Silo approach quite well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You could implement a &lt;code&gt;slim&lt;/code&gt; consumer that iterates through all the queues, fetches at most one new task from each tenant queue, and forwards it to one shared queue. This "output" queue is then connected to your consumers and shared across all tenants. &lt;a href="https://medium.com/thron-tech/multi-tenancy-and-fairness-in-the-context-of-microservices-sharded-queues-e32ee89723fc" rel="nofollow"&gt;This approach is described by Simone Carriero in his article&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SN9PM2HV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ofttwbkp9p2h1pa5p26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SN9PM2HV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ofttwbkp9p2h1pa5p26.png" alt="Processing queues" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Switch what is queued
&lt;/h3&gt;

&lt;p&gt;This is our case, and before I describe it I would like to underline that it fits our needs and might be considered a bit controversial. Yet, it has worked perfectly for over two years now ;)&lt;/p&gt;

&lt;p&gt;A bit of background on what we had to handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We have quite a lot of new tenants every week;&lt;/li&gt;
&lt;li&gt;Each new tenant starts by importing data from an external system;&lt;/li&gt;
&lt;li&gt;An import might be just a few elements, but might also be thousands of them;&lt;/li&gt;
&lt;li&gt;Data is imported from an external system that has rate limits — both for us and each of our tenants;&lt;/li&gt;
&lt;li&gt;So when we hit the rate limit for Tenant A, we can usually still process Tenant B.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A "Default" approach to this would be to queue each element, as each of them needs to be fetched from the external API, and then processed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2b4v6roo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yhv7laah4t77608r0dcj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2b4v6roo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yhv7laah4t77608r0dcj.png" alt="Standard queueing approach" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is obviously a terrible idea, as one import of 10,000 elements will be processed for ~1 hour (due to rate limits), and tenants that import 10 elements will have to wait, although they could be served within 3 seconds.&lt;/p&gt;

&lt;p&gt;So we decided to rethink what exactly we queue. Instead of queueing a certain element, we just queue the fact that tenant X's import needs to be processed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kUv0mYYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/noy7gxghp6qf38szye0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kUv0mYYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/noy7gxghp6qf38szye0l.png" alt="Switch what is queued" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next to that, we maintain a separate list of elements to be imported into the database.&lt;/p&gt;

&lt;p&gt;A consumer receives a task to process import no. 1, fetches the next element for it, and after it finishes, it reschedules the task to the queue:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qBPbETpp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fq9jrcgpc1fywuuuq3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qBPbETpp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fq9jrcgpc1fywuuuq3b.png" alt="Another way of queueing for SaaS" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;
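
&lt;p&gt;The consumer loop described above can be sketched roughly like this (a simplified in-memory simulation; the function and variable names are made up for illustration):&lt;/p&gt;

```python
processed = []

def process(element):
    # Placeholder for fetching the element from the external API
    # and storing it in our database.
    processed.append(element)

def handle_import_task(import_id, pending, queue):
    """Process one element of an import, then reschedule the import
    task so other tenants' imports get a turn in between."""
    if pending[import_id]:
        process(pending[import_id].pop(0))
    if pending[import_id]:          # more elements left?
        queue.append(import_id)     # reschedule at the end of the queue

# A big import (A) and a tiny one (B) share the same queue.
pending = {"A": ["a0", "a1", "a2", "a3"], "B": ["b0"]}
queue = ["A", "B"]
while queue:
    handle_import_task(queue.pop(0), pending, queue)
```

&lt;p&gt;The tiny import finishes after the second task, instead of waiting for all of A's elements to be processed first.&lt;/p&gt;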

&lt;h4&gt;
  
  
  Rate limit adjustments
&lt;/h4&gt;

&lt;p&gt;As mentioned, the system we are connected to enforces rate limits both at our application level and at the tenant level. Our app-level limits are quite high, and in 99% of cases we hit the tenant limit. So it makes sense to slow down importing one tenant in order to process the others.&lt;/p&gt;

&lt;p&gt;In the case of our solution, that was pretty easy to implement. First, we had to fetch the rate limit information from the external service. That information is actually already there, in each response header: we get exact info on how many requests are left in a specified timeframe.&lt;/p&gt;

&lt;p&gt;Based on that, we can calculate a delay — the closer we get to the limit, the bigger the delay gets. As the delay grows, the rate limit "recovers", and we can lower the delay.&lt;/p&gt;

&lt;p&gt;Next, we pass the delay to the queueing solution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--59FvYKvb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t2vnjetea1v3ijntkhik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--59FvYKvb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t2vnjetea1v3ijntkhik.png" alt="Rate limit adjustments in SaaS queues" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the queue handles the delay and provides us only with tasks that we can run without hitting the limits.&lt;/p&gt;
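
&lt;p&gt;The delay calculation itself can be very simple. Below is a sketch of the idea: the linear formula and its inputs are illustrative, and the 900-second default cap matches the maximum &lt;code&gt;DelaySeconds&lt;/code&gt; value SQS accepts when scheduling a message:&lt;/p&gt;

```python
def compute_delay(remaining, limit, window_seconds, max_delay=900):
    """The closer we get to the rate limit, the bigger the delay.

    `remaining` and `limit` come from the external API's rate-limit
    response headers; `window_seconds` is the limit's reset window.
    """
    used_ratio = 1 - remaining / limit   # 0.0 = idle, 1.0 = exhausted
    delay = used_ratio * window_seconds  # back off harder as we approach the limit
    return min(int(delay), max_delay)
```

&lt;p&gt;The result can then be passed to the queue, for example as &lt;code&gt;DelaySeconds&lt;/code&gt; when re-queueing the import task in SQS.&lt;/p&gt;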

&lt;h4&gt;
  
  
  Handling different tiers
&lt;/h4&gt;

&lt;p&gt;As mentioned in the introduction, most SaaS solutions have different tiers of users, and we might need to process some users faster - a big enterprise account, for example.&lt;/p&gt;

&lt;p&gt;In our system, each tier has an assigned "multiplier" that tells us how many tasks we should schedule for a single import. Based on that, we can manage the pace/velocity at which we import the data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GPdIPkDQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9g9qf2dhcpyfr0xtdmut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GPdIPkDQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9g9qf2dhcpyfr0xtdmut.png" alt="Handling different tiers" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;
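
&lt;p&gt;In code, the multiplier idea boils down to scheduling the same import task several times, so several consumers can work on one import in parallel. A minimal sketch (the tier names and multiplier values are made up for the example):&lt;/p&gt;

```python
# Illustrative tiers: higher tiers get more parallel import tasks.
TIER_MULTIPLIER = {"free": 1, "pro": 2, "enterprise": 5}

def schedule_import(tier, import_id, queue):
    """Schedule as many import tasks as the tenant's tier allows."""
    for _ in range(TIER_MULTIPLIER.get(tier, 1)):
        queue.append(import_id)

queue = []
schedule_import("enterprise", "import-42", queue)
```

&lt;p&gt;With five copies of the task in the queue, up to five consumers can pull elements of that import at the same time, speeding it up proportionally.&lt;/p&gt;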

&lt;h2&gt;
  
  
  What is the perfect solution for your SaaS?
&lt;/h2&gt;

&lt;p&gt;There is no one-size-fits-all solution for multi-tenancy queueing, and I doubt our approach will be the best one for all of you. I just wanted to light a spark, and make you think about a couple of things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Although we (developers) tend to say queueing is easy, it is not. It is easy to implement, but it comes with quite a lot of problems you can hit; fair processing and data partitioning are just two of them.&lt;/li&gt;
&lt;li&gt;You might read about some battle-proven solutions described by well-known companies, but I think it is still beneficial to sit down and rethink the approach for your multi-tenant app. You might come up with something similar to what we did that works a lot better in your case.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Multi-tenancy in SaaS is usually not easy, but to be honest I think in 95% of cases it is worth investing the time, as it makes maintenance costs way lower. So unless you are in an MVP phase, you should make the investment.&lt;/p&gt;

&lt;p&gt;If you have any other ideas for solving queueing in multi-tenant architecture — &lt;a href="https://accesto.com/contact/"&gt; let me know! &lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>backend</category>
    </item>
    <item>
      <title>PHP Monolith to Microservices Using the Strangler Pattern</title>
      <dc:creator>Michał Kurzeja</dc:creator>
      <pubDate>Thu, 17 Nov 2022 06:33:43 +0000</pubDate>
      <link>https://forem.com/mkurzeja/php-monolith-to-microservices-using-the-strangler-pattern-14g4</link>
      <guid>https://forem.com/mkurzeja/php-monolith-to-microservices-using-the-strangler-pattern-14g4</guid>
      <description>&lt;p&gt;Dealing with the legacy system can be a real pain, both for developers and for business owners. It comes with a handful of consequences: longer time-to-market, difficult-to-debug issues, performance problems, and higher development effort - just to name a couple of them.&lt;/p&gt;

&lt;p&gt;Sticking to a monolithic architecture might also be one of the "legacy" issues to have. It usually hits bigger companies, where the development is split between multiple teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Legacy system - how it looks now
&lt;/h2&gt;

&lt;p&gt;Let's quickly imagine a legacy monolithic application that was built when the company was smaller, and then extended rapidly as the company grew. It is now developed by four teams that need to share the same codebase. As always (in legacy systems), there is spaghetti code, and the modules are tightly coupled. The overall software architecture is rather poor, and the amount of &lt;a href="https://accesto.com/blog/technical-debt-the-silent-villain-of-web-development/" rel="noopener noreferrer"&gt;technical debt&lt;/a&gt; is high. The codebase is missing unit tests and is, in general, far from easy to maintain and extend.&lt;/p&gt;

&lt;p&gt;In such situations, a change introduced by one of the teams might break the code maintained by another team.&lt;/p&gt;

&lt;p&gt;When, for example, team III pushes some code that breaks things for team I, they actually block the entire project, as no other team will be able to release, unless the changes are reverted or the bug is fixed. What's more, quite often a bug in the team's own part of the project might block all other teams.&lt;/p&gt;

&lt;p&gt;The legacy application is both built and released as a whole, so an issue in any part blocks the whole release process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe1cmda9x9b8bq6awqep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe1cmda9x9b8bq6awqep.png" title="Monolithic application" alt="Monolithic application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've seen projects with 1-month feature freeze periods, just to be able to release anything to production. And even having the feature freeze, they had issues from time to time!&lt;/p&gt;

&lt;p&gt;So in case your project is in such a condition or is moving towards this, you might want to split it into microservices, and let each team maintain its own development process, releases, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  New system - how it could look
&lt;/h2&gt;

&lt;p&gt;Let's step back for a moment from the old system and focus on how a modern application could be organized. In projects that are split between multiple development teams, it might be beneficial to use the microservices architecture, so that each team can deliver business value faster in its own separate process. When designing the new architecture, each team can also lower the code complexity and make sure there are no performance bottlenecks. A team is no longer blocked by another team and can follow its own quality rules.&lt;/p&gt;

&lt;p&gt;Extracting one part of the system already improves the process, and can make a huge difference:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwrt5el18n0hthr0hsmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwrt5el18n0hthr0hsmx.png" title="Code Refactoring" alt="Code Refactoring"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looks nice, doesn't it? The team is able to release new features on its own. You can now extract the next component and make it independent!&lt;/p&gt;

&lt;p&gt;Now that we know what the end goal looks like, let's ask ourselves an important question.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wait, why can't we rewrite this project from scratch?
&lt;/h2&gt;

&lt;p&gt;The short answer is: it's pretty risky. All IT projects that are not split into small pieces are risky by definition. That's one of the reasons why Agile gained so much traction. By splitting your work into short iterations, you minimize risk and allow yourself to act quickly if something goes wrong.&lt;/p&gt;

&lt;p&gt;Another reason is that in order to build a new system with good quality, you need to understand the business and the existing system very well. For large codebases, this gets tricky and cumbersome, and often leads to a bad design of the new system.&lt;/p&gt;

&lt;p&gt;One more problem is that you need to wait a long time until the new system can be used. It often takes months or even years, during which your old application should be maintained. Such a combination is hard to manage: if you focus too much on the new system, the legacy application won't get the required improvements; on the other hand, if you focus too much on the legacy system, the new services will take ages to finish.&lt;/p&gt;

&lt;p&gt;I wrote a post on this topic some time ago; if you would like to learn more, check it out: &lt;a href="https://accesto.com/blog/5-reasons-why-rewriting-an-applications-from-scratch-is-a-bad-idea/" rel="noopener noreferrer"&gt;5 reasons why rewriting an application from scratch is a bad idea&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ok, so how do I manage the migration process?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Strangler Pattern
&lt;/h3&gt;

&lt;p&gt;If you are not familiar with the Strangler Pattern, I suggest reading our articles on it, e.g. &lt;a href="https://accesto.com/blog/strangler-pattern-approach-to-migrating-applications-pros-and-cons/" rel="noopener noreferrer"&gt;Strangler pattern approach to migrating applications - pros and cons&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In short, the Strangler Pattern is a well-known design pattern for transforming an old, legacy system into a new one (in our case, microservices) using small, incremental steps.&lt;/p&gt;

&lt;p&gt;You select one set of features and create an implementation as a new service. So for a moment, you have two separate versions running.&lt;/p&gt;

&lt;p&gt;When one part of the system is rewritten, you strangle the old implementation by switching all traffic to the new system. The traffic is routed to the correct implementation by a thin strangler facade - in most cases a simple &lt;a href="https://accesto.com/blog/docker-reverse-proxy-using-traefik/" rel="noopener noreferrer"&gt;reverse proxy&lt;/a&gt;.&lt;/p&gt;
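
&lt;p&gt;As an illustration, such a facade could be a few lines of nginx config; the paths and upstream names below are made up for the example:&lt;/p&gt;

```nginx
server {
    listen 80;

    # Already-migrated feature: route to the new service
    location /catalog/ {
        proxy_pass http://new-catalog-service:8080;
    }

    # Everything else still goes to the legacy monolith
    location / {
        proxy_pass http://legacy-app:8000;
    }
}
```

&lt;p&gt;Each time a feature is strangled, you add one more &lt;code&gt;location&lt;/code&gt; block pointing at the new service.&lt;/p&gt;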

&lt;p&gt;You can rewrite your system part by part, eventually moving all features to the new application and completely replacing the old one. You can even add new functionality at the same time!&lt;/p&gt;

&lt;p&gt;Thanks to that, any upcoming changes to this (rewritten) part of the system will be done in the new, microservices-based code base. Because your new microservices are decoupled from the legacy system, you can follow Test-Driven Development, use newer tools, have a separate build/release pipeline, etc. There are no major boundaries that would prevent you from applying the architecture and/or solutions you need.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5k1bsl1zq79b11tfqr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5k1bsl1zq79b11tfqr2.png" title="Microservices Application" alt="Microservices Application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Refactor candidates
&lt;/h2&gt;

&lt;p&gt;The important question is: how to select which component to rewrite first?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you are new to the Strangler Pattern, please select a simple component, preferably not tightly coupled with the rest of your legacy application. Rewrite, make mistakes, and learn how to apply this approach. It will get more complex with bigger components, so gain some confidence first.&lt;/li&gt;
&lt;li&gt;Favor components that are well known, easy to understand, or have a good test suite. Again, this will help to gain some confidence.&lt;/li&gt;
&lt;li&gt;If you are confident with the Strangler Pattern approach, consider the following characteristics:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;how often a component needs to change - if it changes frequently, you will benefit more from moving it out of the legacy monolith;&lt;/li&gt;
&lt;li&gt;if there are performance problems with one of the components - moving it out will allow you to scale it separately;&lt;/li&gt;
&lt;li&gt;the team's general feelings about a component - they might struggle with some parts more and would like to get rid of those issues first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is no reason to rewrite a component that, at the same time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;works well;&lt;/li&gt;
&lt;li&gt;has a very bad code base;&lt;/li&gt;
&lt;li&gt;does not change at all.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The risk of rewriting such a component is high, it is hard to do, and the ROI is almost 0.&lt;/p&gt;

&lt;h2&gt;
  
  
  Taking on the strangler path
&lt;/h2&gt;

&lt;p&gt;The path from a monolithic, legacy app to well-written and well-thought-out microservices is long and bumpy. Yes, it will get hard sometimes, but believe me - it gets way easier if you split it into smaller chunks.&lt;/p&gt;

&lt;p&gt;Divide and conquer. Smaller parts mean less risk, easier development and better control of the whole process.&lt;/p&gt;

&lt;p&gt;Another nice thing is that if you choose the components wisely, you will achieve great results in a short period of time. By extracting one of the components, you can release it frequently, without being blocked by other teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Be careful!
&lt;/h2&gt;

&lt;p&gt;Now, as you can see, a migration from a monolithic architecture to microservices using the strangler pattern is pretty easy and straightforward. I'd dare to say it is too easy! Why? Because some projects take this path without considering the hidden costs and potential issues. Microservices are not a silver bullet; they won't work in all projects! I often see small companies with only one dev team implementing microservices. In most cases this is bad and should not be done. Thankfully, the strangler pattern can also be applied to other, simpler architectures, so no matter what architecture works for your system, the strangler pattern might help you get there!&lt;/p&gt;

&lt;p&gt;Let me know if you have any questions, I'm always happy to discuss similar cases!&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>microservices</category>
      <category>codequality</category>
    </item>
    <item>
      <title>Strangler pattern approach to migrating applications - pros and cons</title>
      <dc:creator>Michał Kurzeja</dc:creator>
      <pubDate>Mon, 11 Jul 2022 07:10:27 +0000</pubDate>
      <link>https://forem.com/accesto/strangler-pattern-approach-to-migrating-applications-pros-and-cons-40fk</link>
      <guid>https://forem.com/accesto/strangler-pattern-approach-to-migrating-applications-pros-and-cons-40fk</guid>
      <description>&lt;p&gt;The strangler pattern is a common approach to rewriting, modernizing and migrating existing (legacy) software to a new approach/solution/implementation. We already covered some topics related with it in the past. Mentioned it as one of the &lt;a href="https://accesto.com/blog/handle-technical-debt-in-legacy-application-4-possible-scenarios/"&gt;alternatives for system rewriting&lt;/a&gt;, and described a &lt;a href="https://dev.tocase-study%20migration"&gt;case-study migration&lt;/a&gt; in one of our projects. But it seems, that strangler pattern is gaining more and more visibility in recent years, and this is not a surprise for us.&lt;/p&gt;

&lt;p&gt;The strangler pattern brings a good set of benefits to a project. Also, the tools now available to developers make it easy to perform this kind of migration on live systems. Let’s take a deeper look at some basics, and the pros and cons that arise from them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strangler pattern approach
&lt;/h2&gt;

&lt;p&gt;Let’s focus on the very basics first. What does the strangler pattern mean?&lt;/p&gt;

&lt;p&gt;Well, in this pattern, we start with a legacy system that we need to rewrite, modernize or migrate to something newer. Quite often such legacy systems use some outdated technologies, frameworks, libraries, and their overall complexity makes it very hard or even impossible to refactor the code bit by bit.&lt;/p&gt;

&lt;p&gt;Our first step with the strangler pattern is to put an intermediary facade in front of the old system. In that case, all requests need to go through the facade before they reach the legacy system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ia7PgPvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdp5c5u4muk803wvw0lp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ia7PgPvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdp5c5u4muk803wvw0lp.png" alt="Introducing a strangler pattern facade" width="880" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a next step, we decide what part of the system (what service) we would like to migrate. Let’s assume we work on an e-commerce system. We could decide to migrate the home page first. In that case, we write this part of the system from scratch, using a greenfield approach. You can choose the tech stack, the architecture - everything. There are no major constraints. In our case, we usually decide to use Symfony and an event-driven architecture.&lt;/p&gt;

&lt;p&gt;The new system is built next to the old system:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z6J4KjMA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hoqyb4j84nmvrz8grufm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z6J4KjMA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hoqyb4j84nmvrz8grufm.png" alt="Introducing a new system" width="880" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the team has finished the development and the new code is tested, we can adjust the facade and move the traffic for the home page from our old system to our new service.&lt;/p&gt;

&lt;p&gt;Following this approach, we can migrate the legacy application part by part. There is no need to work with the legacy code; all new features can be implemented in the new system. You maintain the old application and at the same time add functionality to the new system. By following the strangler pattern approach, you can pay back the technical debt and add new features at the same time. Eventually, the new architecture will completely replace the old code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros and cons of strangler pattern implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Minimizes the risk of migration by splitting the process into smaller, separate chunks of work. Divide and conquer!&lt;/li&gt;
&lt;li&gt;Allows you to quickly deliver business value and achieve visible results - rewriting a small subset of features takes significantly less time than rewriting the whole system.&lt;/li&gt;
&lt;li&gt;Imposes no major constraints on the software architecture and technology behind the new system.&lt;/li&gt;
&lt;li&gt;In most cases, the development effort required to rewrite the entire application is lower than in other approaches.&lt;/li&gt;
&lt;li&gt;Easy to implement and understand. The code complexity is rather low.&lt;/li&gt;
&lt;li&gt;Works well even when refactoring a complex system.&lt;/li&gt;
&lt;li&gt;Allows you to implement a microservices architecture and split the work between multiple development teams. But as there are no constraints on the software architecture, you can also apply it to a monolithic application.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Requires a lot of attention to the boundaries between the old and new parts: connection points, data exchange, etc.&lt;/li&gt;
&lt;li&gt;As a result of the above, it might require writing lots of adapters.&lt;/li&gt;
&lt;li&gt;It is quite easy to break some best practices and end up in a dependency hell.&lt;/li&gt;
&lt;li&gt;Rollback scenarios might sometimes be hard to implement, and you obviously should have them.&lt;/li&gt;
&lt;li&gt;When implemented badly, the parts might become tightly coupled.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As always, using a software pattern comes with consequences. You exchange one problem for another. There are no silver bullets, but by knowing different approaches you can decide which problems mean less risk for you.&lt;/p&gt;

&lt;p&gt;In the case of the strangler pattern, you exchange the huge risk of rewriting a legacy monolithic application for smaller, less risky changes. You iterate faster, in a more agile way. There are obviously some consequences that arise, and that you should be aware of, but smaller failures are always easier to fix, and that's what the strangler fig pattern offers you.&lt;/p&gt;

</description>
      <category>architecture</category>
    </item>
    <item>
      <title>Docker networks explained - part 2: docker-compose, microservices, chaos monkey</title>
      <dc:creator>Michał Kurzeja</dc:creator>
      <pubDate>Tue, 15 Mar 2022 08:12:56 +0000</pubDate>
      <link>https://forem.com/accesto/docker-networks-explained-part-2-docker-compose-microservices-chaos-monkey-b02</link>
      <guid>https://forem.com/accesto/docker-networks-explained-part-2-docker-compose-microservices-chaos-monkey-b02</guid>
      <description>&lt;p&gt;In my previous article on &lt;a href="https://accesto.com/blog/docker-networks-explained-part-1/" rel="noopener noreferrer"&gt;docker networks&lt;/a&gt;, I've touched the basics of network management using the docker CLI. But in real life, you probably won't work this way, and you will have all the containers needed orchestrated by a docker-compose config.&lt;/p&gt;

&lt;p&gt;This is where this article comes into play - let's see how to use networks in real life. We will cover the basics of network management with docker-compose, how to use networks for multi-repository/multi-docker-compose projects and microservices, and also how to use them to test different failure scenarios like limited network throughput, lost packets, etc.&lt;/p&gt;

&lt;p&gt;Let's start with some very basic stuff; if you are already familiar with docker-compose, you might want to skip some sections below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker-compose basics
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Exposing ports with docker-compose
&lt;/h3&gt;

&lt;p&gt;The first thing you might want to do is to simply expose a port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  phpmyadmin:
    image: phpmyadmin
    ports:
      - 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For those of you new to Docker - to expose a port means to open it to the outside world. You can limit access by IP, but by default everyone can reach it. The port is exposed on your network interface, not the container's. In the above example, you can access port 80 of the phpMyAdmin container on your port 8080 (localhost:8080).&lt;/p&gt;

&lt;p&gt;As you can see, it's pretty simple: you just list the ports to be exposed, following the same &lt;code&gt;localPort:containerPort&lt;/code&gt; convention as in the docker CLI. Adding the listening interface for the local port is obviously also possible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  phpmyadmin:
    image: phpmyadmin
    ports:
      - 127.0.0.1:8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This might be handy when you do not want to expose some services to the outside.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting containers within a docker-compose file
&lt;/h3&gt;

&lt;p&gt;As mentioned in the &lt;a href="https://accesto.com/blog/docker-networks-explained-part-1/" rel="noopener noreferrer"&gt;docker networks&lt;/a&gt; post, &lt;strong&gt;docker-compose creates a network by default&lt;/strong&gt;. You can easily check this by creating a very simple docker-compose config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - 8080:80
    environment:
      - PMA_HOSTS=db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's run it and see what happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
Creating network &lt;span class="s2"&gt;"myexampleproject_default"&lt;/span&gt; with the default driver
Pulling db &lt;span class="o"&gt;(&lt;/span&gt;mariadb:10.3&lt;span class="o"&gt;)&lt;/span&gt;...
&lt;span class="o"&gt;(&lt;/span&gt;...&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And as you might have noticed in the first line, a default network called &lt;code&gt;myexampleproject_default&lt;/code&gt; is created for this project. It is also visible in the docker CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network &lt;span class="nb"&gt;ls
&lt;/span&gt;NETWORK ID     NAME                      DRIVER    SCOPE
a9979ee462fb   bridge                    bridge    &lt;span class="nb"&gt;local
&lt;/span&gt;d8b7eab3d297   myexampleproject_default  bridge    &lt;span class="nb"&gt;local
&lt;/span&gt;17c76d995120   host                      host      &lt;span class="nb"&gt;local
&lt;/span&gt;8224bb92dd9b   none                      null      &lt;span class="nb"&gt;local&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;All containers from this docker-compose.yaml are connected to this network&lt;/strong&gt;. This means they can easily talk to each other, and hosts are resolved by name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nb"&gt;exec &lt;/span&gt;phpmyadmin bash
root@b362dbe238ac:/var/www/html# getent hosts db
172.18.0.3      db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Docker-compose networks and microservice oriented architecture
&lt;/h2&gt;

&lt;p&gt;When writing a microservice oriented project, it's very handy to be able to simulate at least a part of the production approach in the development environment. This includes separating and connecting groups of containers, but also testing network latency issues, connectivity problems etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dividing containers into separate networks
&lt;/h3&gt;

&lt;p&gt;But what if you DO NOT want the containers to be able to talk to each other? Maybe you are writing a system where one part should be hidden from the other?&lt;br&gt;
In practice, such parts of the system are separated using an AWS VPC or similar mechanisms, but it would be nice to test this on a development machine, right?&lt;/p&gt;

&lt;p&gt;No problem, let's take a look at this config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  service1-db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
  service1-web:
    image: nginxdemos/hello
    ports:
      - 80:80
  service2-db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
  service2-web:
    image: nginxdemos/hello
    ports:
      - 81:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we have two separate services, each consisting of a web and a db container. We would like to make each db accessible only from its own web service, so that service2-web cannot access service1-db directly.&lt;/p&gt;

&lt;p&gt;Let's check how it works now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nb"&gt;exec &lt;/span&gt;service1-web ash
/ &lt;span class="c"&gt;# getent hosts service1-db&lt;/span&gt;
172.19.0.2        service1-db  service1-db
/ &lt;span class="c"&gt;# getent hosts service2-db&lt;/span&gt;
172.19.0.5        service2-db  service2-db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unfortunately, the services are not separated in the way we would like them to be.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw1gv4cg8flst3cfcu6c.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw1gv4cg8flst3cfcu6c.jpeg" alt="All services connected"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No worries, this can be achieved with a few simple changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  service1-db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
    networks: 
      - service1
  service1-web:
    image: nginxdemos/hello
    ports:
      - 80:80
    networks: 
      - service1
      - web
  service2-db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
    networks: 
      - service2
  service2-web:
    image: nginxdemos/hello
    ports:
      - 81:80
    networks: 
      - service2
      - web

networks:
  service1:
  service2:
  web:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have introduced three different networks in the top-level &lt;code&gt;networks&lt;/code&gt; section (lines 30-33) - one for each service and a shared one for the web services. Why do we need the third one? It is required to allow communication between service1-web and service2-web. We also added a network configuration to each of the services (lines 7-8, 13-15, 20-21, 26-28).&lt;/p&gt;

&lt;p&gt;Let's check how service1-web resolves the names now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dco &lt;span class="nb"&gt;exec &lt;/span&gt;service1-web ash
/ &lt;span class="c"&gt;# getent hosts service2-web&lt;/span&gt;
172.22.0.2        service2-web  service2-web
/ &lt;span class="c"&gt;# getent hosts service2-db&lt;/span&gt;
/ &lt;span class="c"&gt;# getent hosts service1-db&lt;/span&gt;
172.20.0.3        service1-db  service1-db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx14oyhbrtvdekzqwyecy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx14oyhbrtvdekzqwyecy.jpeg" alt="Split networks"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we can quite easily achieve separation between containers by introducing networks and connecting only selected containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting containers between multiple docker-compose files
&lt;/h3&gt;

&lt;p&gt;Quite often, projects like the one above are split across git repositories, or at least across docker-compose.yaml files, so that a developer can launch each of the services separately. How can we connect such services? Let's take a look.&lt;/p&gt;

&lt;p&gt;Let's assume we decided to split the previously used project into two separate repositories. One for service1, and a second one for service2. This would mean we have two docker-compose.yaml files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#service1/docker-compose.yaml
version: '3.6'
services:
  service1-db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
  service1-web:
    image: nginxdemos/hello
    ports:
      - 80:80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#service2/docker-compose.yaml
version: '3.6'
services:
  service2-db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
  service2-web:
    image: nginxdemos/hello
    ports:
      - 81:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we launch both configurations, service1-web and service2-web won't be able to communicate with each other, as they will be added to two different networks: each docker-compose.yaml file creates its own network by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
Creating network &lt;span class="s2"&gt;"service1_default"&lt;/span&gt; with the default driver
Creating service1_service1-web_1 ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating service1_service1-db_1  ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's start with adding back the network configuration for service1 with a small change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  service1-db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
    networks: 
      - service1
  service1-web:
    image: nginxdemos/hello
    ports:
      - 80:80
    networks: 
      - service1
      - web

networks:
  service1:
  web:
    name: shared-web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We added a bit of configuration in line 20. In this case I wanted to give my web network a fixed name. By default, the name is &lt;code&gt;PROJECTNAME_NETWORKNAME&lt;/code&gt;, and the project name defaults to the directory name. The directory we are in might be named differently for different developers, so the safe option is to enforce the name.&lt;/p&gt;
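&lt;p&gt;To verify the fixed name, a quick check (a sketch; run it after &lt;code&gt;docker-compose up -d&lt;/code&gt; in the service1 directory):&lt;/p&gt;

```shell
# The network should be listed as "shared-web" regardless of the directory name
docker network ls --filter name=shared-web
```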

&lt;p&gt;Now, for service2, we need to act a bit differently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  service2-db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: secret
    networks: 
      - service2
  service2-web:
    image: nginxdemos/hello
    ports:
      - 81:80
    networks: 
      - service2
      - web

networks:
  service2:
  web:
    external: true #needs to be created by other file
    name: shared-web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see in lines 20-21, in this case we configure an &lt;strong&gt;external network&lt;/strong&gt;. This means docker-compose won't try to create it, and will fail if it is not available. But once it is, it will simply be reused.&lt;/p&gt;
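&lt;p&gt;Note that if you start service2 first, docker-compose will refuse to start because the external network does not exist yet. As a sketch of a workaround, you can create the network by hand before starting either project:&lt;/p&gt;

```shell
# Create the shared network manually, then service2 can start on its own
docker network create shared-web
docker-compose up -d
```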

&lt;p&gt;That's it. &lt;strong&gt;Service1 and 2 web containers can reach each other, but their databases are separated.&lt;/strong&gt; Both can also be developed in separate repositories.&lt;/p&gt;

&lt;p&gt;As an extension of the above, you can take a look at container &lt;a href="https://docs.docker.com/compose/compose-file/compose-file-v3/#aliases" rel="noopener noreferrer"&gt;aliases&lt;/a&gt; to make routing easier, or &lt;a href="https://docs.docker.com/compose/compose-file/compose-file-v3/#internal" rel="noopener noreferrer"&gt;internal&lt;/a&gt; networks to isolate services even further.&lt;/p&gt;
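&lt;p&gt;As a rough sketch (not part of the project above), both options could look like this in a compose file:&lt;/p&gt;

```yaml
services:
  service1-db:
    image: mariadb:10.3
    networks:
      service1:
        aliases:
          - db   # other containers in this network can use plain "db"
networks:
  service1:
    internal: true   # containers here get no access to the outside world
```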

&lt;h2&gt;
  
  
  Chaos testing
&lt;/h2&gt;

&lt;p&gt;As you know, when it comes to an outage, &lt;em&gt;the question is not if it will happen, but when&lt;/em&gt;. It's always better to prepare for such scenarios and test how the system behaves in case of different issues.&lt;br&gt;
What will happen if some packets are dropped, or the latency goes up? Maybe a service goes offline?&lt;/p&gt;

&lt;p&gt;Chaos testing is all about getting prepared for this.&lt;/p&gt;

&lt;p&gt;I highly recommend taking a look at &lt;a href="https://github.com/alexei-led/pumba" rel="noopener noreferrer"&gt;Pumba&lt;/a&gt;, a project that lets you pause and kill services, but also add network delay, loss, corruption, etc.&lt;/p&gt;

&lt;p&gt;Fully describing Pumba would take a lot of time, so let's just have a look at a very simple network delay simulation.&lt;/p&gt;

&lt;p&gt;Let's spin up a container that pings 8.8.8.8:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it --rm --name demo-ping alpine ping 8.8.8.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, while watching the output, run the following command in a separate console tab:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pumba netem --duration 5s --tc-image gaiadocker/iproute2 delay --time 3000 demo-ping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00n15lxmnj950jtzv10m.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00n15lxmnj950jtzv10m.gif" alt="Chaos monkey example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it!&lt;/p&gt;

&lt;p&gt;You can also implement other chaos tests within minutes.&lt;/p&gt;
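&lt;p&gt;For example, packet loss can be simulated in a very similar way; the command below is a sketch based on Pumba's netem subcommands (check &lt;code&gt;pumba netem loss --help&lt;/code&gt; for the exact flags):&lt;/p&gt;

```shell
# Drop roughly 20% of outgoing packets from demo-ping for 30 seconds
pumba netem --duration 30s --tc-image gaiadocker/iproute2 loss --percent 20 demo-ping
```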

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Docker and docker-compose are great tools for emulating different network configurations without setting up servers or virtual machines. The commands and configs are simple and easy to use. Combined with external tools like Pumba, you can also test problematic situations and prepare for outages.&lt;/p&gt;

&lt;p&gt;If you are interested in &lt;a href="https://accesto.com/blog/what-is-docker-and-why-to-use-it/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;, check out my e-book: &lt;a href="https://accesto.com/books/docker-deep-dive/" rel="noopener noreferrer"&gt;Docker deep dive&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
    </item>
    <item>
      <title>Docker networks explained - part 1</title>
      <dc:creator>Michał Kurzeja</dc:creator>
      <pubDate>Mon, 21 Feb 2022 07:02:31 +0000</pubDate>
      <link>https://forem.com/accesto/docker-networks-explained-part-1-3mk1</link>
      <guid>https://forem.com/accesto/docker-networks-explained-part-1-3mk1</guid>
<description>&lt;p&gt;Have you ever wondered how networks in Docker work? Maybe you are interested in the lesser-known things that you can do with Docker's networking layer? Here are some interesting facts and use cases that might help in everyday use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exposing ports
&lt;/h2&gt;

&lt;p&gt;Let’s start with the basics. Exposing ports is the most commonly used feature of Docker networking, but do you know everything about it?&lt;/p&gt;

&lt;p&gt;Let’s take a look at a simple command like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 127.0.0.1:80:8080/tcp ubuntu bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What does this mean? It might be tricky, but it means:&lt;br&gt;
Listen for TCP connections on 127.0.0.1, port 80, and forward the traffic to port 8080 inside the container.&lt;/p&gt;

&lt;p&gt;If we simplify that a bit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 80:8080/tcp ubuntu bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have omitted the IP part, so now Docker will listen on all interfaces, making the service accessible from the outside.&lt;/p&gt;

&lt;p&gt;We can change the notation even more, and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 80:8080/udp ubuntu bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will forward UDP connections. Another option is sctp, but it is not widely used for web-related purposes. TCP is obviously the most common one, so if we skip the /protocol part, it will be set to TCP by default.&lt;/p&gt;

&lt;p&gt;And what happens if we run just this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8080 ubuntu bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will forward TCP traffic from a randomly chosen port to port 8080 in the container. Wait? Randomly?! How do I know which port was used?&lt;/p&gt;

&lt;p&gt;Just take a look at &lt;code&gt;docker ps&lt;/code&gt; - there is a column for that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
2f82dac833ae        mariadb:10.3        "docker-entrypoint.s…"   10 days ago         Up 9 hours          3306/tcp               project_db_1
86f00e7f41a2        phpmyadmin          "/docker-entrypoint.…"   10 days ago         Up 9 hours          0.0.0.0:8080-&amp;gt;80/tcp   project_phpmyadmin_1
31ea70729fbf        redis:6             "docker-entrypoint.s…"   6 weeks ago         Up 9 hours          6379/tcp               project_redis_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The PORTS column has all you need, plus more: it also shows exposed ports that are not forwarded. Such a port is not accessible from the outside (except from Docker networks, but more on this later), but you get this info in case you would like to forward it.&lt;/p&gt;

&lt;p&gt;You can also run &lt;code&gt;docker port&lt;/code&gt; to check mapping for a given container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker port project_phpmyadmin_1
80/tcp -&amp;gt; 0.0.0.0:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But it is missing the part about unmapped ports, so I guess you won’t use that command too often ;)&lt;/p&gt;

&lt;p&gt;There is one last thing that we need to mention, and that is the &lt;code&gt;-P&lt;/code&gt; argument for &lt;code&gt;docker run&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;-P&lt;/span&gt; ubuntu bash 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;-P&lt;/code&gt; exposes all ports mentioned in the Dockerfile on random ports on the host machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting containers
&lt;/h2&gt;

&lt;p&gt;Let's start a simple web-server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name test_web nginx:alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we then launch a second container, let's say ubuntu, install curl on it, and try to access the web page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; ubuntu bash
apt update
apt &lt;span class="nb"&gt;install &lt;/span&gt;curl
curl test_web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This won't work - the name is not resolved! We could make use of &lt;code&gt;docker inspect&lt;/code&gt; and check that the IP of the &lt;code&gt;test_web&lt;/code&gt; container is &lt;code&gt;172.17.0.2&lt;/code&gt;, then run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl 172.17.0.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it would work. So the connectivity is limited, but it is possible.&lt;/p&gt;

&lt;p&gt;If you are familiar with docker-compose, this might be confusing. Services inside docker-compose can easily communicate with each other using their names! If you have a simple file like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  db:
    image: mariadb:10.3
    environment:
      ...
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - 8080:80
    environment:
      - PMA_HOSTS=db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then &lt;code&gt;phpmyadmin&lt;/code&gt; can obviously connect to the db service using its name - &lt;code&gt;db&lt;/code&gt;!&lt;br&gt;
The reason for that is quite simple: name resolution works for containers within the same network, &lt;strong&gt;except&lt;/strong&gt; for the default one.&lt;/p&gt;

&lt;p&gt;Let's give it a try!&lt;/p&gt;

&lt;p&gt;We can create a new network by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network create test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and connect existing containers to it running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network connect test test_web
docker network connect test NameOfYourBashContainer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I assume the test bash container is still alive; you just need to use its name in the second line above. Now switch back to it and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl test_web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will work! Notice that &lt;strong&gt;a port does not need to be exposed in order to access it&lt;/strong&gt; from a second container, as long as both containers share the same network.&lt;/p&gt;

&lt;p&gt;Disconnect one of the containers from the newly created network by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network disconnect test NameOfYourBashContainer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the name won't be resolved anymore!&lt;/p&gt;

&lt;p&gt;It is also possible to connect a container to a network while creating it, just pass the network as one of the input options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -t -i --rm --network test ubuntu bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Use cases
&lt;/h3&gt;

&lt;p&gt;One sample use case would be to connect different projects or microservices without having all of the running containers in one network. This allows you to quite freely adjust the matrix of connections between containers. You can use this to test security rules that are configured in production - e.g. on AWS.&lt;/p&gt;

&lt;p&gt;Another use case would be testing connection issues (chaos monkey). We will cover a better approach to this in follow-up articles, but networks will do the work for basic scenarios.&lt;/p&gt;
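&lt;p&gt;For instance, taking a service offline from the point of view of its peers is just a matter of detaching it from the shared network and attaching it back (reusing the &lt;code&gt;test&lt;/code&gt; network and &lt;code&gt;test_web&lt;/code&gt; container from above):&lt;/p&gt;

```shell
# Simulate an outage of test_web as seen by other containers in "test"
docker network disconnect test test_web
# ...and bring it back
docker network connect test test_web
```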

&lt;h3&gt;
  
  
  More info soon
&lt;/h3&gt;

&lt;p&gt;I plan to write follow-up articles on networks in docker-compose, microservices, and simulating network issues (chaos monkey). Subscribe to our newsletter to make sure you won't miss them. &lt;br&gt;
&lt;br&gt;&lt;br&gt;
Update: &lt;a href="https://accesto.com/blog/docker-networks-explained-part-2/"&gt;Docker Networks - part 2&lt;/a&gt; is now released.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
Also, if you are interested in more advanced Docker techniques, check my recent eBook - &lt;a href="https://accesto.com/books/docker-deep-dive/"&gt;Docker Deep Dive&lt;/a&gt;.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

</description>
      <category>docker</category>
    </item>
  </channel>
</rss>
