<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Matthieu ROBIN</title>
    <description>The latest articles on Forem by Matthieu ROBIN (@matthieurobin).</description>
    <link>https://forem.com/matthieurobin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F726645%2Fb970908b-7d0f-4f12-8f24-160e45a40b3e.jpg</url>
      <title>Forem: Matthieu ROBIN</title>
      <link>https://forem.com/matthieurobin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/matthieurobin"/>
    <language>en</language>
    <item>
      <title>Why You Should Deploy Your Code to Production with Hidora</title>
      <dc:creator>Matthieu ROBIN</dc:creator>
      <pubDate>Mon, 07 Feb 2022 14:43:09 +0000</pubDate>
      <link>https://forem.com/matthieurobin/why-you-should-deploy-your-code-to-production-with-hidora-2h5f</link>
      <guid>https://forem.com/matthieurobin/why-you-should-deploy-your-code-to-production-with-hidora-2h5f</guid>
      <description>&lt;p&gt;Development and production environments may be different, but once you push your code to production, it’s time to put the pedal to the metal and release your app or website to the world. And that’s where Hidora comes in. By deploying your code to production with Hidora, you can push code updates without worrying about infrastructure management. It’s fast, easy, secure, and reliable—the perfect way to release new features or fixes with minimal disruption to your end users.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8LheRRXt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjprnxc223bgvidv8hoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8LheRRXt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjprnxc223bgvidv8hoh.png" alt="Image description" width="880" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Hidora?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://hidora.io/services/paas"&gt;Hidora&lt;/a&gt; is an easy-to-use platform that lets you push your code from development directly into production. If you’re wondering what exactly that means and how it can help your business, read on for a closer look at how it works. Before diving in, it’s important to define a key term so we can fully understand what deploying code is all about: the production environment. Simply put, a production environment is the live environment that serves your real users; unlike development or staging, it isn’t meant for testing or trial purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hidora.io"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YavsGZln--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ilzyqqzy7mkv4pauw26.png" alt="Hidora" width="880" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it help with development?
&lt;/h2&gt;

&lt;p&gt;The Hidora app is a convenient way to push your code to production. With just a few clicks, you can deploy an application for any platform or stack that runs on Jelastic. And since everything you need is hosted on our elastic cloud platform, there’s nothing else for you to install or configure. This saves time and lets you focus on what matters most: writing great code. After all, you’re a developer, not a DevOps engineer! Our approach combines development tools such as GitLab, its built-in Container Registry, and Docker Hub with deployment automation, giving developers fast access to testing environments so they can build high-quality apps faster than ever before. You get access to hundreds of different runtimes without having to manage them manually or spend money on dedicated server hosting. Use our simple but powerful API to try new technologies instantly in production-like environments (Node.js, PHP, Python, and more) backed by up-to-date databases such as PostgreSQL and MariaDB, so you can run compatibility tests before rolling out to your real production environment. It couldn’t be easier!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9t5SCNF3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ozyh69en498ysyjs0uob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9t5SCNF3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ozyh69en498ysyjs0uob.png" alt="Hidora PaaS Description" width="880" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Which methods can help me to deploy code into my applications?
&lt;/h2&gt;

&lt;p&gt;Our platform supports several automatic deployment methods, so you can choose the best option for each application:&lt;br&gt;
Dashboard: deploy from a URL or an uploaded archive directly into your application&lt;br&gt;
VCS: deploy from your version-control repository (Bitbucket, SVN, JFrog)&lt;br&gt;
Docker registry: deploy your own image from a registry (Docker Hub, GitLab, Nexus)&lt;br&gt;
Plugins: deploy your code from development tools (Eclipse, Maven, IntelliJ IDEA)&lt;/p&gt;
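
&lt;p&gt;As a minimal sketch of the Docker registry route (the registry URL, image name, and tag below are hypothetical), the workflow usually amounts to building an image, tagging it for your registry, pushing it, and then pointing the dashboard at the resulting image reference:&lt;/p&gt;

```shell
# Hypothetical names: adjust REGISTRY, APP, and VERSION to your own setup.
REGISTRY="registry.example.com"
APP="myapp"
VERSION="1.0.3"
IMAGE="$REGISTRY/$APP:$VERSION"

# The usual build-and-push steps would then be:
#   docker build -t "$IMAGE" .
#   docker push "$IMAGE"
echo "Image reference to select in the deployment dashboard: $IMAGE"
```

&lt;p&gt;Once the image is in the registry, the dashboard (or your CI pipeline) only needs that single reference to pull and deploy it.&lt;/p&gt;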

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EbP0B48t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cmjj3xqmhufol7fci69i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EbP0B48t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cmjj3xqmhufol7fci69i.png" alt="Projet-Env-PaaS" width="880" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it help you after deployment?
&lt;/h2&gt;

&lt;p&gt;Deploying your application to production is easy and requires minimal infrastructure configuration; if anything conflicts with your current configuration, we will notify you. If you have multiple environments, we can deploy to each of them automatically. A dashboard lets you view your live app data and performance metrics: Jelastic makes it simple to deploy and monitor both frontend and backend apps by collecting metrics from all of them in one place. The deployment itself is done at no extra cost. If you need something more specific, such as a Node.js monitoring tool or a private npm registry for your app, we’ve got you covered: install any app from the marketplace with one click in the admin panel and try it in your development environment before pushing changes to production. Take Prometheus plus Grafana, for example: powerful graphs for every aspect of your app’s real-time performance, including custom dashboards, configurable alerting, and scheduling features. The marketplace is full of useful applications that won’t interfere with existing ones, so don’t waste time setting up third-party services; let us do that job for you. Finally, scale things up on demand: capacity is added automatically when your app requires it (or whenever you want), often in a single click.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P6pxZjSD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ek1bhajddrngw44gbl5n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P6pxZjSD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ek1bhajddrngw44gbl5n.png" alt="Hidora - Pay Per Use" width="880" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>hosting</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>OpenSearch: The Secret to Better Observability</title>
      <dc:creator>Matthieu ROBIN</dc:creator>
      <pubDate>Mon, 17 Jan 2022 13:33:35 +0000</pubDate>
      <link>https://forem.com/matthieurobin/opensearch-the-secret-to-better-observability-4n54</link>
      <guid>https://forem.com/matthieurobin/opensearch-the-secret-to-better-observability-4n54</guid>
      <description>&lt;p&gt;As development teams continue to adopt microservices and distributed systems, observability becomes increasingly important to managing services, troubleshooting issues, and keeping track of your production environment. With the rise of cloud computing, the cost of monitoring has never been lower; however, observability solutions tend to be costly and can be difficult to set up. Opensearch offers an open-source solution that promises to make observability easier than ever before. Here’s how it works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y1Iyz4eF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxb4zwbrru9i774avnzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y1Iyz4eF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxb4zwbrru9i774avnzl.png" alt="Opensearch Hidora" width="768" height="637"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MnLyz_Id--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7yhn5ovcjmv968sfk8t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MnLyz_Id--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7yhn5ovcjmv968sfk8t.png" alt="Opensearch Hidora" width="768" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is observability?
&lt;/h2&gt;

&lt;p&gt;In a nutshell, observability is about understanding how your application behaves from a technical perspective. It encompasses the tools and techniques that give you visibility into your systems, letting you understand their behavior and identify anomalies. In one sense, observability is a new term for something that has been around for a long time: logging, for instance, is almost as old as software itself (and if it weren’t useful, there wouldn’t be much point in it!). But as technology has evolved from batch-oriented monolithic applications on physical hardware towards distributed systems deployed as code on virtualized infrastructure, new challenges have emerged in troubleshooting and debugging those systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nIbZAH-M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxheejhlhi7li4f0amkn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nIbZAH-M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxheejhlhi7li4f0amkn.png" alt="Opensearch Hidora" width="880" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why opt for OpenSearch?
&lt;/h2&gt;

&lt;p&gt;OpenSearch is a completely open and vendor-neutral project that gives you full observability over your data and full control of how it is managed. It also works with your existing SIEM and analytics tools, which makes it ideal for large organizations that need a quick way to ingest data from multiple sources. For developers, OpenSearch exposes standard REST APIs and client libraries for querying and visualizing operational data, which means less time writing plumbing and more time building functionality. In other words, OpenSearch helps maximize developer productivity, making it an attractive prospect for any organization, big or small.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up OpenSearch
&lt;/h2&gt;

&lt;p&gt;If you’re deploying a new application and want it to be observable, we recommend setting up OpenSearch right away. Once it is in place, you can ingest logs from your application and slice through them with search filters. This is essential for understanding what’s happening with your application in real time, so you can react quickly when things go wrong. If you have many different microservices running on multiple hosts, querying each host independently does not scale well; shipping logs to a single store means all of your data is in one place, which saves headaches later when you want to search through it. Shippers such as Beats agents, Logstash, or Fluentd can be used to forward logs into OpenSearch.&lt;br&gt;
A Jelastic certified template is provided for each of the open-source stacks mentioned (OpenSearch, OpenSearch Dashboards, Logstash). Certified images are used instead of custom ones so that all the Jelastic-specific functionality remains available to users (password reset, service restart, redeploy, cloning, migration, log viewing, firewall management). These three templates are united into a single auto-clustered solution by the auto-clustering JPS.&lt;/p&gt;
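
&lt;p&gt;If you want to try a comparable stack locally before using the Jelastic template, a rough single-node sketch based on the official OpenSearch images might look like this (the versions and admin password are illustrative, and this is a local test setup, not the template itself):&lt;/p&gt;

```yaml
# docker-compose.yml: single-node OpenSearch plus Dashboards, for local testing only
services:
  opensearch:
    image: opensearchproject/opensearch:2.11.0
    environment:
      - discovery.type=single-node
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=Example_Passw0rd!
    ports:
      - "9200:9200"
  dashboards:
    image: opensearchproject/opensearch-dashboards:2.11.0
    environment:
      - 'OPENSEARCH_HOSTS=["https://opensearch:9200"]'
    ports:
      - "5601:5601"
```

&lt;p&gt;Run it, then browse to port 5601 for Dashboards; the Jelastic template gives you the production-grade, auto-clustered equivalent.&lt;/p&gt;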

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s3scx6jF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dkf1lgx5n831ccf7eokg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s3scx6jF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dkf1lgx5n831ccf7eokg.png" alt="Opensearch Hidora" width="547" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ingest your data
&lt;/h2&gt;

&lt;p&gt;You can ingest data into OpenSearch with many useful tools, including Logstash. Logstash is an application for managing events and logs. Although it was originally developed within the Elastic ecosystem, it now supports other destinations as well, such as Apache Kafka and Amazon Kinesis. It ingests data from nearly any source using a variety of inputs, including TCP/UDP sockets, files, and S3. Once your data has passed through Logstash into OpenSearch, you can run simple or complex queries on it for better visibility into problems or trends in your application environment. See for yourself the list of available agents: &lt;a href="https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/"&gt;https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/&lt;/a&gt;&lt;/p&gt;
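
&lt;p&gt;As a hedged illustration (the hostname, credentials, and index name are placeholders), a minimal Logstash pipeline that receives events from Beats and forwards them to OpenSearch via the logstash-output-opensearch plugin could look like this:&lt;/p&gt;

```conf
# logstash.conf: receive from Beats on port 5044, ship to OpenSearch
input {
  beats {
    port => 5044
  }
}
output {
  opensearch {
    hosts    => ["https://opensearch.example.com:9200"]
    user     => "admin"
    password => "admin"
    index    => "app-logs-%{+YYYY.MM.dd}"
  }
}
```

&lt;p&gt;The date pattern in the index name gives you one index per day, which keeps retention and cleanup simple.&lt;/p&gt;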

&lt;h2&gt;
  
  
  Create your first dashboard
&lt;/h2&gt;

&lt;p&gt;Create your first real-time dashboard from the data you have ingested into OpenSearch. Follow these basic steps to get a live feed of your data in a matter of minutes. Begin by logging in to OpenSearch Dashboards and selecting the index pattern whose metrics you’d like to view. Next, add a search over the workload you’re interested in, such as a Kubernetes cluster. In Kubernetes, add labels for each key metric being collected, such as CPU usage and memory utilization, so the data is easy to filter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IPMezvx4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erfbsppi00tpv7blmxl0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IPMezvx4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erfbsppi00tpv7blmxl0.png" alt="Opensearch Hidora" width="768" height="414"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f8dVD2xw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38axdshqoywcw82gfop2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f8dVD2xw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38axdshqoywcw82gfop2.png" alt="Opensearch Hidora" width="574" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add alerting capability
&lt;/h2&gt;

&lt;p&gt;You can easily add alerting to OpenSearch, allowing Ops teams to create notifications based on specific events. For example, if an application fails to start, an alert can be generated and sent out via email or Slack. While OpenSearch comes with a range of simple rules for finding failed instances quickly, users may want something that goes beyond notifications. With our Auto-Scale feature, you can configure things so that if an instance fails multiple times in a specified period of time (e.g., three failures in 15 minutes), it is automatically scaled down. This keeps your application running efficiently while reducing your costs.&lt;/p&gt;
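
&lt;p&gt;For the notification side, OpenSearch’s alerting plugin lets you define a monitor over your log indices through its REST API. A sketch of such a monitor (the index pattern, error threshold, and schedule are illustrative), posted to the _plugins/_alerting/monitors endpoint, might look like:&lt;/p&gt;

```json
{
  "type": "monitor",
  "name": "app-error-spike",
  "enabled": true,
  "schedule": { "period": { "interval": 5, "unit": "MINUTES" } },
  "inputs": [{
    "search": {
      "indices": ["app-logs-*"],
      "query": { "query": { "match": { "level": "ERROR" } } }
    }
  }],
  "triggers": [{
    "name": "too-many-errors",
    "severity": "1",
    "condition": {
      "script": {
        "source": "ctx.results[0].hits.total.value > 10",
        "lang": "painless"
      }
    },
    "actions": []
  }]
}
```

&lt;p&gt;Destinations for email or Slack are configured separately and then referenced from the trigger’s actions list.&lt;/p&gt;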

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--raPxZjOs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqyfrzlcfexi9pbxl1n2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--raPxZjOs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqyfrzlcfexi9pbxl1n2.png" alt="Opensearch Hidora" width="880" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy OpenSearch with our Jelastic PaaS template
&lt;/h2&gt;

&lt;p&gt;We’ve developed a Jelastic PaaS template that enables you to get up and running quickly. Within minutes, you can have your own fully functional OpenSearch instance deployed. Check out our guide for more information about deploying OpenSearch with Jelastic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hidora.io/services/paas"&gt;Start your OpenSearch journey at Hidora&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A scalable hosting solution</title>
      <dc:creator>Matthieu ROBIN</dc:creator>
      <pubDate>Wed, 29 Dec 2021 13:35:05 +0000</pubDate>
      <link>https://forem.com/matthieurobin/a-scalable-hosting-solution-478d</link>
      <guid>https://forem.com/matthieurobin/a-scalable-hosting-solution-478d</guid>
      <description>&lt;p&gt;How many times have you heard the phrase the cloud and wondered what it was all about? You may have heard some myths and may have encountered some difficulties with how to figure out how to use this technology effectively. &lt;strong&gt;This article is a bit commercial, I would like to give you the opportunity to knows more about this solution&lt;/strong&gt;. In today’s article, we will discuss how &lt;a href="https://hidora.io/services/paas/"&gt;Hidora&lt;/a&gt; hosting services can help you get started in cloud computing, specifically through Jelastic solution and why it’s the best scalable hosting solution on the market today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Jelastic?
&lt;/h2&gt;

&lt;p&gt;Jelastic offers an incredibly efficient Platform-as-a-Service (PaaS) cloud that enables users to rapidly deploy, scale, and manage web applications hosted on any infrastructure they choose. The Jelastic platform runs Java application servers (Tomcat, Jetty, GlassFish, JBoss) as well as Node.js, PHP, Ruby, Python, and Golang stacks. It’s not just a PaaS offering, though: Jelastic provides several highly advanced features, such as full-stack support, automatic scaling, and automated failover, that no other PaaS can match. On top of all that, it boasts an extensive API allowing developers to customize their deployment process without changing their codebase or even needing to log in to a separate management interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Jelastic?
&lt;/h2&gt;

&lt;p&gt;Most developers are familiar with Heroku, AWS and other top PaaS providers, but may not be as familiar with Jelastic. Here’s why you should give it a try: &lt;br&gt;
(1) It offers a much lower cost than its competitors. &lt;br&gt;
(2) It supports multiple frameworks (Ruby on Rails, PHP, NodeJS and others). &lt;br&gt;
(3) It supports both Linux and Windows applications – giving you more choice of programming languages. &lt;br&gt;
(4) It includes built-in load balancing – making scaling up your application a simple process. &lt;br&gt;
Finally, Hidora also has a Managed Service for companies that want an additional layer of support, plus automated backups to a second datacenter. What are you waiting for?&lt;/p&gt;

&lt;h2&gt;
  
  
  What Applications Can You Host On Hidora?
&lt;/h2&gt;

&lt;p&gt;Many of our customers are looking for a system that gives them an easy way to scale up their applications. Jelastic offers horizontal scalability and auto-scaling, and serves free users as well as paid, enterprise users. This means you can run web applications such as WordPress blogs, forum sites, and static websites on Hidora with Jelastic at a very low price if you’re just starting out with low traffic. For higher traffic levels, we offer tiered pricing based on how much RAM and CPU you need, so you can get going quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Much Does it Cost to Deploy an Application on Hidora?
&lt;/h2&gt;

&lt;p&gt;Hidora takes a unique approach to resource allocation and pricing.&lt;br&gt;
Its scalable, usage-based charging adapts to your needs and requirements: the system automatically measures how many resources are consumed each hour and bills only for real usage. No fixed plans, no overpayments!&lt;br&gt;
Turn off your environments during weekends, nights, or off-seasons to improve your hosting cost efficiency even more.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do I Get Started Hosting On Jelastic?
&lt;/h2&gt;

&lt;p&gt;Hidora gets you started in three simple steps: create your account, select a Linux application server that meets your needs, and deploy your web apps on it. Let’s walk through those steps together. First of all, sign up for an account on &lt;a href="https://hidora.io/services/paas/"&gt;Hidora&lt;/a&gt;. We will provide you with one-click access to our Jelastic dashboard, with full management capabilities for your entire infrastructure (web applications and services) deployed in either of our supported datacenters, Geneva or Gland.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>10 Ways to Improve Your PHP Security</title>
      <dc:creator>Matthieu ROBIN</dc:creator>
      <pubDate>Mon, 29 Nov 2021 19:45:27 +0000</pubDate>
      <link>https://forem.com/matthieurobin/10-ways-to-improve-your-php-security-42e2</link>
      <guid>https://forem.com/matthieurobin/10-ways-to-improve-your-php-security-42e2</guid>
      <description>&lt;p&gt;You’ve been using PHP for years, and it seems to work just fine, but have you ever wondered what more you could be doing to keep your scripts secure? As security breaches become more common and more destructive, there are a number of best practices you can implement in your scripts to make sure they’re safe from hackers of all levels. Here are ten ways you can improve your PHP security today, starting with the obvious ones and moving into slightly less-common areas that could save you from disaster in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  1) Change Default Ports
&lt;/h2&gt;

&lt;p&gt;The majority of web servers run on ports 80 and 443. If your PHP application does not need to be reachable by the general public (an internal tool or an admin backend, for example), consider moving it to non-standard ports, such as 1141 instead of 443 and 1142 instead of 80. There are a few reasons for doing this. First, it makes your application harder for automated scanners and casual intruders to find, since most of them only probe the standard ports. Second, it prevents conflicts with other services running on those ports. Keep in mind that this is obscurity, not protection: changing your default ports doesn’t mean you can skip other security measures; they should still be implemented on your non-standard HTTP/HTTPS channels, and legitimate users will need to include the port in the URL.&lt;/p&gt;
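
&lt;p&gt;On Apache, for instance, moving to non-standard ports is mostly a matter of changing the Listen directives (the port numbers below are just the illustrative values from above; virtual hosts must be updated to match):&lt;/p&gt;

```apache
# ports.conf: serve HTTP and HTTPS on non-standard ports
Listen 1142
Listen 1141 https
```

&lt;p&gt;Clients must then include the port explicitly, e.g. https://example.com:1141/.&lt;/p&gt;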

&lt;h2&gt;
  
  
  2) Use HTTPS at All Times
&lt;/h2&gt;

&lt;p&gt;Many websites aren’t using HTTPS, even though they should be. It is important that your website be secure at all times by making sure you use HTTPS instead of HTTP. This will protect your data and personal information by encrypting it so that it cannot be read by others over an unsecured connection. HTTPS is especially important for sites that handle sensitive data like credit card numbers or health information; if someone intercepts your traffic, your data can easily become compromised.&lt;/p&gt;
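
&lt;p&gt;If your site runs behind nginx, a common way to enforce HTTPS is a permanent redirect of all plain-HTTP traffic (the server name is a placeholder; a separate server block handles the TLS side):&lt;/p&gt;

```nginx
# Redirect every HTTP request to its HTTPS equivalent
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```

&lt;p&gt;Pairing this with the Strict-Transport-Security response header keeps returning browsers on HTTPS even if a user types a plain http:// URL.&lt;/p&gt;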

&lt;h2&gt;
  
  
  3) Use Strong Passwords
&lt;/h2&gt;

&lt;p&gt;There’s no reason not to use strong passwords that contain a mix of numbers, letters, and special characters. A great password should be long (ideally more than 12 characters) and contain at least one number and one symbol; a long passphrase works well too. Make sure you use different passwords for your various logins; if someone finds out your login information for one site, they can otherwise compromise your entire digital life with just a few clicks. Consider password managers like LastPass or KeePass, which store all of your passwords in an encrypted database that syncs across all of your devices.&lt;/p&gt;
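
&lt;p&gt;If you need a strong random password on the spot and have the openssl CLI installed (an assumption; any good password manager can do the same), one quick way to generate one is:&lt;/p&gt;

```shell
# 18 random bytes base64-encode to exactly 24 characters,
# giving 144 bits of entropy with letters, digits, and symbols.
PASSWORD=$(openssl rand -base64 18)
echo "$PASSWORD"
```

&lt;p&gt;Store the result in your password manager rather than reusing it across sites.&lt;/p&gt;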

&lt;h2&gt;
  
  
  4) Disable Remote Code Execution
&lt;/h2&gt;

&lt;p&gt;If remote code execution is enabled, attackers can execute arbitrary code on your server. This should be disabled at all times. Because of its potential consequences, remote code execution should be one of your top priorities when securing a web application, so ensure that your settings are properly configured to disable it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disable it for all files.&lt;/strong&gt; If possible, don’t just disable it for .php files; disable it for all file types so that attackers can’t use other kinds of files as backdoors into your site. As an added bonus, turning off remote code execution will reduce your exposure to drive-by downloads and other kinds of malicious scripts loaded from untrusted sources on your website.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluate third-party plugins.&lt;/strong&gt; Third-party plugins are notorious for security vulnerabilities because they’re not subject to your usual quality-control measures. Consequently, if you don’t develop custom functionality yourself, third-party plugins are very likely to contain mistakes that could cause security problems in your site. Consider using only high-quality modules with a track record of maintaining secure code, or proprietary code developed in house by developers with expertise in web application security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check web server configuration options.&lt;/strong&gt; Although they aren’t necessarily directly related to PHP vulnerabilities, certain configuration options in Apache are good general security practice because they restrict what the various processes running on your system are allowed to do.&lt;/p&gt;
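
&lt;p&gt;In php.ini, the directives below shrink the remote-execution surface; the disable_functions list is an example and should be tuned to the functions your application genuinely never uses:&lt;/p&gt;

```ini
; Disallow including or opening remote URLs as code or data sources
allow_url_include = Off
allow_url_fopen = Off
; Block shell-execution primitives the application does not need
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
; Do not advertise the PHP version in response headers
expose_php = Off
```

&lt;p&gt;Restart PHP-FPM or the web server after changing these for the settings to take effect.&lt;/p&gt;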

&lt;h2&gt;
  
  
  5) Enable ModSecurity or Other WAF
&lt;/h2&gt;

&lt;p&gt;Even if you’re not a web application developer, one of your biggest responsibilities as an information security professional is helping developers implement secure coding practices. This is why it’s so important that you become familiar with WAFs, or web application firewalls, and learn how to enable them for your organization. Even a little bit of knowledge can go a long way towards making your organization more secure. For example, by enabling ModSecurity on all of your web servers (and Apache load balancers), you dramatically increase security—and peace of mind—for everyone involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  6) Rely on Built-in Security Features
&lt;/h2&gt;

&lt;p&gt;From open_basedir to disable_functions, there are lots of ways to improve your site’s security by taking advantage of built-in features (the old safe_mode directive was removed in PHP 5.4 and should no longer be relied on). For example, if you’re creating a new file-upload function, sanitize the supplied name with built-ins such as basename() and filter_var() rather than rolling your own checks. When in doubt, see what WordPress or another popular framework is doing. If their code doesn’t have an obvious vulnerability, yours probably won’t either. Don’t reinvent what has already been written. There are so many functions available for every language out there; make sure you learn them and use them! Save time instead of writing something complicated that someone else has already solved for you.&lt;/p&gt;
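
&lt;p&gt;For example, open_basedir confines PHP file access to the directories you list (the paths below are placeholders for your own layout):&lt;/p&gt;

```ini
; PHP may only read and write files under these directories
open_basedir = /var/www/myapp:/tmp
```

&lt;p&gt;With this set, a path-traversal bug in your code can no longer read files like /etc/passwd through PHP’s file functions.&lt;/p&gt;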

&lt;h2&gt;
  
  
  7) Don't Run Admin Tools on Production Servers
&lt;/h2&gt;

&lt;p&gt;Sure, running administrative tools might be handy on your development server, but they’re just asking for trouble on a production server. Tools like phpMyAdmin and Adminer give potential hackers an avenue of attack that, with enough knowledge and persistence, can lead directly to your database. Just don’t do it. If you need remote access to your database for development or debugging purposes, use SSH tunneling instead; if you want a GUI, run it locally over that tunnel rather than exposing it on the production host. Better yet, stick with command-line access so crucial connections stay entirely outside the web environment. And think again before allowing passwordless accounts: it may seem convenient for a developer to set up a root account without a password on a local dev box, but that’s exactly why many security pros warn against doing the same in production. When databases aren’t secured properly, anyone with login information can exploit SQL injection vulnerabilities and run commands as any user they want, even as root. Even worse, those same people could potentially work their way through sudo permissions all the way up to root once access is granted via login credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  8) Check Third Party Libraries Before Use
&lt;/h2&gt;

&lt;p&gt;If you use any third-party libraries in your application, such as those that parse uploaded files, it’s extremely important to check them for known vulnerabilities first. Using a pre-written library is usually easier than writing your own parsing code, but relying on external code can introduce new security vulnerabilities; our application’s security only improves if we vet everything before integrating it into the project. Find out more: how to audit third-party libraries for security flaws.&lt;/p&gt;

&lt;h2&gt;
  
  
  9) Move Configuration Files Out of Web Root
&lt;/h2&gt;

&lt;p&gt;These files can contain all sorts of sensitive data, including MySQL passwords. If a hacker gains access to them through a vulnerability in your application, it’s as easy as modifying a few lines of code and running a single command. Keep configuration files out of the web root by placing them outside your website’s document directory, for example in a directory one level above it that the web server cannot serve. Better yet, put them on another server entirely. If you use continuous integration tools like GitLab CI or GitHub Actions, keep config files in a private repository and don’t push them to public servers, then set up an automated process that updates config files each time you deploy new code.&lt;/p&gt;

&lt;h2&gt;
  
  
  10) Keep PHP Up To Date
&lt;/h2&gt;

&lt;p&gt;Keeping your code and its dependencies up to date is a surefire way to avoid known vulnerabilities. If you’re building something from scratch, try Composer for dependency management; it integrates with GitLab CI for automated deployments. On an existing project, don’t forget about git diff and merge/pull requests; they make it easy for everyone to see exactly what changes when updating their local environments or files. You might also create a process that requires people to review and sign off on any changes. Ultimately, knowing who made changes and when (and how) will help you track down vulnerabilities sooner instead of later. Don’t wait until someone tells you there's a problem; take precautions proactively! And speaking of notifications: never miss them again with GitLab Issue Notifications or Mattermost Notifications.&lt;/p&gt;
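&lt;p&gt;With Composer, keeping dependencies current is a short routine (a sketch; run it from the project root):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List installed packages that have newer versions available.
composer outdated

# Update dependencies within the constraints declared in composer.json,
# then review and commit the resulting composer.lock.
composer update
git diff composer.lock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;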

</description>
      <category>security</category>
      <category>php</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>Node.js Hosting Requirements &amp; Service Provider Selection Tips</title>
      <dc:creator>Matthieu ROBIN</dc:creator>
      <pubDate>Thu, 18 Nov 2021 07:54:25 +0000</pubDate>
      <link>https://forem.com/matthieurobin/nodejs-hosting-requirements-service-provider-selection-tips-lpn</link>
      <guid>https://forem.com/matthieurobin/nodejs-hosting-requirements-service-provider-selection-tips-lpn</guid>
      <description>&lt;p&gt;Having no idea what Node.js hosting requirements should be covered for your app?&lt;/p&gt;

&lt;p&gt;Looking for the best Node.js hosting platform?&lt;/p&gt;

&lt;p&gt;In this post, we’ll take an in-depth look at all the aspects and subtleties you should analyse to choose the best option.&lt;/p&gt;

&lt;p&gt;We’ll also provide a step-by-step tutorial on hosting Node.js applications in the cloud, using the Ghost publishing platform as an example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Developers and Enterprises Prefer Node.js&lt;/strong&gt;&lt;br&gt;
Due to its simplicity, Node.js is gaining more and more popularity among developers all over the world.&lt;/p&gt;

&lt;p&gt;JavaScript code is quite easy to understand even for non-professionals, and the open-source platform facilitates the application development process.&lt;/p&gt;

&lt;p&gt;Moreover, the Node Package Manager includes tons of pre-built modules, which accelerate development speed even more. Scalability, reduced response time and the ability to use the same language on server and client sides are also the proven benefits of Node.js.&lt;/p&gt;

&lt;p&gt;Nowadays, businesses across most industries choose Node.js for building their projects. The runtime environment is a great fit for modern applications because it scales well without additional investment in hardware.&lt;/p&gt;

&lt;p&gt;REST APIs, real-time apps, single-page applications and more can be easily built and executed on almost any known platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choosing the Best Node.js Hosting Provider&lt;/strong&gt;&lt;br&gt;
Usually, once your Node.js application is ready to go live, you start looking for reliable and secure hosting for it.&lt;/p&gt;

&lt;p&gt;In this guidance, we’ll go through the main aspects you should weigh before choosing the right &lt;a href="https://hidora.io"&gt;hosting platform&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Not all providers support an event-driven JavaScript runtime environment, so you should do careful research to find the best Node.js hosting for your app.&lt;/p&gt;

&lt;p&gt;First of all, decide whether you have enough time and skills for the system administration routine. If so, you can get a cloud VM or VPS and install, deploy and manage everything on your own. But if you prefer to focus on application code, a managed cloud hosting platform would be the better choice. Beyond that, weigh the following points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Estimate the expected traffic. The busier your site, the greater the requirements. A VPS or shared server is a good, cheap solution for the start, but your costs will rise along with the traffic.&lt;/li&gt;
&lt;li&gt;Verify the reliability of the services you research: the right cloud hosting platform should run in data centers with proven reliability.&lt;/li&gt;
&lt;li&gt;Check data center locations in order to get the fastest access for your users.&lt;/li&gt;
&lt;li&gt;Mind the support for, and the cost of, vertical and horizontal scaling.&lt;/li&gt;
&lt;li&gt;Ensure clustering is supported to prevent failed transactions, error-filled shopping carts and lost user work.&lt;/li&gt;
&lt;li&gt;Consider technology shifts and other possible changes so your website stays portable, without any lock-in.&lt;/li&gt;
&lt;li&gt;Review the built-in tools and frameworks you may need for site management and monitoring.&lt;/li&gt;
&lt;li&gt;Compare the additional benefits each Node.js hosting provider offers, e.g. SSL certificates, domains, etc.&lt;/li&gt;
&lt;li&gt;Check each provider’s uptime and downtime record.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Good luck ;-)&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>node</category>
      <category>hosting</category>
    </item>
    <item>
      <title>Jelastic and Docker containers: A marriage made in the cloud for developers</title>
      <dc:creator>Matthieu ROBIN</dc:creator>
      <pubDate>Wed, 17 Nov 2021 14:11:41 +0000</pubDate>
      <link>https://forem.com/matthieurobin/jelastic-and-docker-containers-a-marriage-made-in-the-cloud-for-developers-4184</link>
      <guid>https://forem.com/matthieurobin/jelastic-and-docker-containers-a-marriage-made-in-the-cloud-for-developers-4184</guid>
      <description>&lt;p&gt;Into containers? No? Well, you should be! Containers like Docker can help you automate and speed up your development process, but there's more to the story than that. You see, it's not just developers who benefit from containers; businesses that use cloud hosting do too! So let's explore how Jelastic and Docker containers can make your life easier and your costs lower... starting with the basics! (Note: this article assumes familiarity with both Jelastic and Docker.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Choose Jelastic&lt;/strong&gt;&lt;br&gt;
If you're new to cloud hosting, or simply not that familiar with Jelastic, you may be wondering why Jelastic is worth your time. For starters, it offers some of the most fully-featured DevOps capabilities on any public cloud platform. Couple that with its tight integration with Docker (quickly becoming one of today's hottest technologies) and it's easy to see why Jelastic is so attractive to application developers. It also doesn't hurt that Jelastic comes from a company whose history and expertise span more than 10 years; that kind of deep domain knowledge matters. More than anything else, though, what makes Jelastic such a great choice for developers is its usability and ease of management. After all, nobody wants to spend their time managing their host; they want to focus on developing applications that will help their business grow while keeping IT costs low. That's why we built our user interface around our customers: we think we have what many would call a killer UI, which means your developer team can concentrate less on managing hosts and more on building scalable apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Support&lt;/strong&gt;&lt;br&gt;
If your enterprise is already using Docker on-premise or in a public IaaS cloud, you can still leverage Jelastic's powerful platform. With Jelastic's Docker integration, you have seamless access to native containerized applications across multiple environments – both private and public – from a single portal. Since everything runs within a system container, you no longer need a full virtual machine for each application, which drastically reduces the overhead costs associated with VMs and makes maintenance easier. Changes inside one container do not affect anything else running on the same or different hosts: each container gets its own address space and remains completely isolated from the others, even though they share the host's OS resources. And whereas keeping track of version updates across hardware-dependent VMs can be tricky, it is almost effortless with Docker-based applications, because containers share the host OS kernel instead of each booting a full operating system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GZSKJphQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwls7r53tznrlua8y2xs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GZSKJphQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwls7r53tznrlua8y2xs.png" alt="Deploy Docker on Jelastic" width="880" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is Docker?&lt;/strong&gt;&lt;br&gt;
Docker is a technology that allows you to develop, run, test, and deploy applications by using containers. Containers work at a higher level than VMs because they don't have a virtualized operating system. In fact, since each container runs its own application with its own set of libraries, it's almost as if each application has its own instance of an operating system. That helps to keep your data isolated from other applications running on a single host—even if those other applications were written in different languages or come from different sources.&lt;/p&gt;
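&lt;p&gt;For instance, starting an isolated service is a one-liner (assuming Docker is installed; &lt;code&gt;nginx:alpine&lt;/code&gt; is just an example image):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Run an nginx container in the background, mapping host port 8080 to container port 80.
docker run --rm -d --name web -p 8080:80 nginx:alpine

# The container has its own filesystem and process space, yet shares the host kernel.
docker ps
docker stop web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;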

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XhJ3aIF_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vch34g8vaovohiwd819y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XhJ3aIF_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vch34g8vaovohiwd819y.png" alt="Deploy Docker on Hidora" width="880" height="545"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;How Does It Work?&lt;/strong&gt;&lt;br&gt;
Jelastic has support for running Docker containers natively within its platform. It is important to note that Jelastic is not a container management solution as such; rather, it runs Docker inside system-level containers on top of a KVM virtualization engine. The point of doing so is increased performance: these internalized containers share the host OS kernel and therefore consume fewer resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UAgNFSNA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjgbdiiwky3p2034qnhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UAgNFSNA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjgbdiiwky3p2034qnhp.png" alt="Kubernetes as a Service" width="880" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Running Containers on Jelastic Platform&lt;/strong&gt;&lt;br&gt;
• Running Docker Containers on &lt;a href="https://hidora.io"&gt;Hidora&lt;/a&gt; is optimized with vCPU, RAM and Storage resources. &lt;br&gt;
• Jelastic takes care of all hardware failures, allowing them to be removed from developers' focus. &lt;br&gt;
• Developers only need to consider how many instances they want for their applications and can balance their needs with resource availability. This is highly flexible as instances auto-scale up or down based on resource demands. &lt;br&gt;
• Both small and large development teams will enjoy uniform experience across different deployments of applications across various platforms such as .NET Core, NodeJS, Java,  PHP, Python, Ruby and more without any VM sprawl. &lt;br&gt;
• Using a built-in Load Balancer that distributes traffic among all available services allows developers to deploy scalable distributed applications easily without additional efforts. &lt;br&gt;
• Replicated instance groups ensure zero data loss in case of instance failure, making application fault tolerance seamless. &lt;br&gt;
• Auto Scaling groups guarantee efficient usage of server capacity by balancing load-distribution requests among different servers rather than overloading just one system. However, even if auto scaling is not required currently, it is ready when needed. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZJCgcOZ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esu8xxahj73xab9p4w94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZJCgcOZ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esu8xxahj73xab9p4w94.png" alt="Hidora Pay Per Use model" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>jelastic</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Multi-tenant SaaS: Where to Start? </title>
      <dc:creator>Matthieu ROBIN</dc:creator>
      <pubDate>Wed, 27 Oct 2021 15:09:48 +0000</pubDate>
      <link>https://forem.com/matthieurobin/multi-tenant-saas-where-to-start-2lof</link>
      <guid>https://forem.com/matthieurobin/multi-tenant-saas-where-to-start-2lof</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OaOHJIBr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/austin-distel-mpn7xjkq_ns-unsplash-min.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OaOHJIBr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/austin-distel-mpn7xjkq_ns-unsplash-min.jpg" alt="Multi-tenant SaaS: Where to Start?" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Multi-tenant SaaS: Where to Start?
&lt;/h1&gt;

&lt;p&gt;Multi-tenancy is widely used in cloud computing, and it’s a crucial feature when we talk about SaaS solutions. The idea behind a multi-tenant architecture is that one software server, database, storage or network controller can be used by multiple customers while each client’s data is hidden from the others. Single-tenancy is the opposite: one software instance serves a single customer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K25uQDcM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/tenancy-1024x536.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K25uQDcM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/tenancy-1024x536.png" alt="Multi-tenant SaaS: Where to Start?" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros &amp;amp; Cons of Multi-tenancy
&lt;/h2&gt;

&lt;p&gt;The most obvious and most significant benefit of multi-tenancy is cutting hosting expenses through maximally effective use of resources, but there are several more important advantages this architecture can give your business:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Fast and easy scaling, which allows allocating as many resources as needed without downtime or lots of administrative routine.&lt;/li&gt;
&lt;li&gt;  High level of protection from malicious software.&lt;/li&gt;
&lt;li&gt;  Software upgrades and maintenance are handled by SaaS providers.&lt;/li&gt;
&lt;li&gt;  Reduced costs and time for hardware management.&lt;/li&gt;
&lt;li&gt;  Easy integration with third-party software through the use of APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As for potential downsides, it’s worth mentioning limited customization options compared with a single-tenant architecture, some security and compliance issues, and the “noisy neighbor” effect, which may occur when one customer uses an inordinate amount of CPU and slows down the other tenants’ applications.&lt;/p&gt;

&lt;p&gt;To summarize the points above: single-tenant architecture provides a high level of security and customization and is a good choice for large enterprises, while multi-tenancy is a more cost-effective and highly scalable model that suits most businesses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-tenant SaaS Models
&lt;/h2&gt;

&lt;p&gt;The most common multi-tenant SaaS models are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Container-based multi-tenancy:&lt;/strong&gt; as a rule, in a containerized environment each client’s (tenant’s) data is isolated, but one common application server is shared between them. However, it is also possible to run both the app server and the database in a completely isolated per-tenant container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k4chi5KN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/shcema1-1024x536.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k4chi5KN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/shcema1-1024x536.png" alt="Container-based multi-tenancy" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Virtualization-based multi-tenancy:&lt;/strong&gt; such a model is very close to single-tenancy and provides a very high level of security and isolation, because each customer has their own VM with application and database. Despite this, it is rarely used because of poor scalability and high cost.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nQi-aauM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/shcema2-1024x536.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nQi-aauM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/shcema2-1024x536.png" alt="Virtualization-based multi-tenancy" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Database-per-tenant:&lt;/strong&gt; in this case, each tenant has its own database. Only the app server is shared, and it can be scaled vertically or horizontally if needed. This approach works well for a handful of tenants but doesn’t suit apps of unknown scale because of the huge number of databases required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DbPadVTu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/shcema3-1024x536.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DbPadVTu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/shcema3-1024x536.png" alt="Database-per-tenant" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Single multi-tenant database:&lt;/strong&gt; this model is very popular as it keeps a single storage for all users, which can easily be scaled up when necessary. Its main disadvantage is the high risk of the “noisy neighbor” effect mentioned above.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dyaRpR09--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/shcema4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dyaRpR09--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://hidora.io/wp-content/uploads/2021/09/shcema4.png" alt="Single multi-tenant database" width="600" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Sharded multi-tenant databases:&lt;/strong&gt; this pattern stores tenant data across multiple databases. The data is divided into a set of segments (shards), and several tenants can share the same shard; however, the model ensures that the data for any particular tenant is never spread across multiple shards. It is a win-win approach for building a highly scalable application.&lt;/li&gt;
&lt;/ul&gt;
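&lt;p&gt;To make the idea concrete, a stable tenant-to-shard mapping can be as simple as hashing the tenant ID (a hypothetical sketch with four shards):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Deterministically map a tenant ID onto one of 4 shards.
tenant="acme-corp"
shard=$(( $(printf '%s' "$tenant" | cksum | cut -d' ' -f1) % 4 ))
echo "tenant $tenant lives on shard $shard"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because the hash is stable, every lookup for the same tenant lands on the same shard.&lt;/p&gt;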

&lt;h2&gt;
  
  
  Points to Consider When Designing a Multi-tenant Application
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Type of isolation:&lt;/strong&gt; as users share resources in multi-tenant environments, their security and privacy must be assured by the SaaS provider. You can choose between the four most commonly used isolation models: silo, pool, bridge and tier isolation.
– In the silo model, SaaS providers offer a separate isolated cluster for each tenant. This approach is similar to single-tenancy and requires additional spending on infrastructure, development and management, but ensures a high level of privacy protection.
– Pool isolation lets users share the same infrastructure and provides effective resource scaling, but has security weaknesses.
– The bridge model is a mix of silo and pool isolation, with shared and isolated infrastructure at the same time.
– Tier-based isolation applies different types of isolation depending on the subscription plan, e.g.: free-tier tenants use shared infrastructure while premium ones have isolated environments.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Expected resource consumption:&lt;/strong&gt; analyze carefully how many tenants you have to manage, the infrastructure cost per user, storage and CPU usage, and the anticipated profit. Also keep in mind that your multi-tenant SaaS should collect resource consumption metrics carefully and regularly.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Customization options:&lt;/strong&gt; try to examine what level of customization your tenants require to manage their environments and what level of control you need.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Customer support:&lt;/strong&gt; research attentively what type of support your tenants might need for their environments and infrastructure, what content and resources should be shared, and who will manage their additional requests.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Infrastructure limits:&lt;/strong&gt; there should be a clear understanding of how the infrastructure that supports your multi-tenant architecture functions and if there are limitations on the resources you consume.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;SLA (Service Level Agreement):&lt;/strong&gt; this is a very important document that measures client’s expectations and helps to understand their needs. It definitely should contribute to your multi-tenancy model.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Reputable hosting provider:&lt;/strong&gt; finding hosting for your SaaS that is powerful and scalable enough to ensure smooth and secure access to software for clients can be a real challenge. Hidora Cloud covers all these needs and even more: our inspired experts will help to build the infrastructure for your app.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Building the right multi-tenant SaaS architecture is a crucial factor that affects the quality of service you provide and your business overall. Consider all the basic factors mentioned above from the very beginning, so you have a clear understanding of the software you are developing and of your customers’ needs.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>webdev</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Kubernetes-based development with Devspace</title>
      <dc:creator>Matthieu ROBIN</dc:creator>
      <pubDate>Fri, 15 Oct 2021 16:48:36 +0000</pubDate>
      <link>https://forem.com/matthieurobin/kubernetes-based-development-with-devspace-3d77</link>
      <guid>https://forem.com/matthieurobin/kubernetes-based-development-with-devspace-3d77</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes-based development with Devspace
&lt;/h1&gt;

&lt;p&gt;Modern applications are increasingly based on micro-services. Splitting a large application into smaller pieces makes the whole more maintainable and easier to develop. However, instead of developing one big monolith, we now work on a bunch of tiny applications, which makes debugging and deploying the whole system more challenging. Luckily, there are many tools out there to help us. An interesting comparison of some of them can be found &lt;a href="https://kubevious.io/blog/post/kubernetes-development-tools-comparison-skaffold-vs-devSpace-vs-draft-vs-codeready-vs-bridge" rel="noopener noreferrer"&gt;here&lt;/a&gt;. In what follows, we want to see how easy Kubernetes-based development is with &lt;a href="https://devspace.sh" rel="noopener noreferrer"&gt;devspace&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  A micro-services application
&lt;/h2&gt;

&lt;p&gt;Suppose we are developing a micro-services application, for example an e-shop. In essence, our e-shop consists of a frontend application that communicates with a backend through an API. For the sake of simplicity, let's say that our backend looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhidora.io%2Fwp-content%2Fuploads%2F2021%2F05%2Fbaseline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhidora.io%2Fwp-content%2Fuploads%2F2021%2F05%2Fbaseline.png" alt="baseline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;User management is handled by the &lt;code&gt;iam-service&lt;/code&gt;. Orders are processed via the &lt;code&gt;message-queue&lt;/code&gt;. Most of our backend's business logic is packed in serverless functions served by the &lt;code&gt;faas&lt;/code&gt;. Our application's state is held in our &lt;code&gt;database&lt;/code&gt;. Finally, for some good reasons (e.g. the ease of testing setup), we are developing our software in a &lt;a href="https://www.atlassian.com/git/tutorials/monorepos" rel="noopener noreferrer"&gt;monorepo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With time, our micro-services application will necessarily contain a lot of business logic that will be packed in even more micro-service code or serverless functions. For example, we might need a connector service between our &lt;code&gt;message-queue&lt;/code&gt; and our &lt;code&gt;faas&lt;/code&gt;, or an assets service with some logic to add new assets in a controlled way. A very convenient way to host our micro-services is to dockerize them and let Kubernetes orchestrate them.&lt;/p&gt;

&lt;p&gt;Typically, our IAM service is a third-party like &lt;a href="https://www.keycloak.org/" rel="noopener noreferrer"&gt;keycloak&lt;/a&gt; or &lt;a href="https://fusionauth.io/" rel="noopener noreferrer"&gt;fusionauth&lt;/a&gt; which we can easily deploy on Kubernetes by means of a &lt;a href="https://github.com/FusionAuth/charts" rel="noopener noreferrer"&gt;helm chart&lt;/a&gt;. &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; is a very practical package manager for Kubernetes. For example, a typical fusionauth deployment would look like something along these lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add fusionauth https://fusionauth.github.io/charts
helm &lt;span class="nb"&gt;install &lt;/span&gt;fusionauth &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; fusionauth/fusionauth &lt;span class="nt"&gt;--namespace&lt;/span&gt; auth &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; database.protocol&lt;span class="o"&gt;=&lt;/span&gt;postgresql &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; database.user&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;username&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; database.password&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;password&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; database.host&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;&lt;span class="nb"&gt;hostname&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; database.port&lt;span class="o"&gt;=&lt;/span&gt;5432 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; database.name&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;database-name&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; database.root.user&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;root-user&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; database.root.password&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;root-password&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; app.runtimeMode&lt;span class="o"&gt;=&lt;/span&gt;production &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; search.engine&lt;span class="o"&gt;=&lt;/span&gt;database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our message queue is probably &lt;a href="https://redislabs.com/solutions/use-cases/messaging/" rel="noopener noreferrer"&gt;redismq&lt;/a&gt;, &lt;a href="https://www.rabbitmq.com/" rel="noopener noreferrer"&gt;rabbitmq&lt;/a&gt; or &lt;a href="https://kubemq.io/" rel="noopener noreferrer"&gt;kubemq&lt;/a&gt;, for which we also easily find helm charts. &lt;/p&gt;

&lt;p&gt;Then come our own custom services for which we need to write our own Kubernetes resources (deployments, services, ingresses, etc.). Finally, we can write some kind of script to install all the necessary helm charts and apply our Kubernetes resources.&lt;/p&gt;

&lt;p&gt;Because our software deals with sensitive data and is central to our business, we need to be careful when deploying a new release. Therefore, we want to test it somehow before we release it, which is very easy to do on Kubernetes clusters. Indeed, imagine we have two environments, one for testing and one for production. The testing (or staging) environment would be synchronized with our software repository's &lt;code&gt;main&lt;/code&gt; branch, while the production environment would be the pendant of our repo's &lt;code&gt;production&lt;/code&gt; branch. We develop on the &lt;code&gt;main&lt;/code&gt; branch and, as soon as QA is satisfied with the software pushed there, we push it to &lt;code&gt;production&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We are now in the complicated situation where we want to develop our software on a development machine, test it somehow on an almost productive environment, and release it to a production environment. That leads us to three different build and deployment procedures. On a development machine, we surely want to interact with a short-lived database. Moreover, login credentials to our microservices (like the assets service) should be trivial. On staging, we might want to grant unprotected access to some of our services, for the sake of debugging. On production, we want to secure and hide as much as possible. &lt;/p&gt;

&lt;p&gt;Finally, if our development environment were close to the production environment, we would minimize the number of surprises following a deployment to staging or production, which would increase our productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter devspace
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://devspace.sh" rel="noopener noreferrer"&gt;Devspace&lt;/a&gt; is a cli tool that allows automation of both the build and the deployment of container images. In addition, that tool might as well replace our makefile or docker-compose configurations and provides us with the ability to do Kubernetes-based development. Because of the latter ability, let's assume we have set up a small cluster on our development machine. In one click, you can have Jelastic set up that development cluster for you&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhidora.io%2Fwp-content%2Fuploads%2F2021%2F05%2Fmarketplace.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhidora.io%2Fwp-content%2Fuploads%2F2021%2F05%2Fmarketplace.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;through a very simple interface&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhidora.io%2Fwp-content%2Fuploads%2F2021%2F05%2Fk8s-jelastic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhidora.io%2Fwp-content%2Fuploads%2F2021%2F05%2Fk8s-jelastic.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you can manually set up your own &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt;, &lt;a href="https://minikube.sigs.k8s.io/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt;, or &lt;a href="https://docs.docker.com/desktop/kubernetes/" rel="noopener noreferrer"&gt;docker for desktop&lt;/a&gt; cluster.&lt;/p&gt;

&lt;p&gt;The easiest way to install devspace (not on your Kubernetes cluster, but on the machine from which you develop your code!) is to run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; devspace 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, depending on our use case, we might run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;devspace init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and follow the instructions. In our particular case, we want to build &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;our API&lt;/li&gt;
&lt;li&gt;a bunch of custom micro-services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We do that with the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1beta10&lt;/span&gt;
&lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SOME_IMPORTANT_VARIABLE&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;the-important-value&lt;/span&gt;
&lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;my-custom-service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-repo/my-custom-service&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${DEVSPACE_RANDOM}&lt;/span&gt;
    &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./my-custom-service/Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
          &lt;span class="na"&gt;buildArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;SOME_IMPORTANT_VARIABLE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${SOME_IMPORTANT_VARIABLE}&lt;/span&gt;
  &lt;span class="na"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-repo/api&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${DEVSPACE_RANDOM}&lt;/span&gt;
    &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./api/Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above configuration defines how to build our API and our microservices. When pushed to their Docker registry, both Docker images will bear the same random tag (defined by the built-in variable &lt;code&gt;DEVSPACE_RANDOM&lt;/code&gt;). Instead of using the Docker daemon, we can also choose custom build commands or &lt;a href="https://github.com/GoogleContainerTools/kaniko" rel="noopener noreferrer"&gt;kaniko&lt;/a&gt;. We can use environment variables, like &lt;code&gt;SOME_IMPORTANT_VARIABLE&lt;/code&gt;, and provide the usual options for building Docker images. &lt;/p&gt;
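&lt;p&gt;For example, switching an image to kaniko-based in-cluster builds is a matter of swapping its &lt;code&gt;build&lt;/code&gt; section; the snippet below is a sketch and only shows a subset of the available options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;images:
  my-custom-service:
    image: my-repo/my-custom-service
    dockerfile: ./my-custom-service/Dockerfile
    build:
      # build inside the cluster instead of via the Docker daemon
      kaniko:
        cache: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;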

&lt;p&gt;Next, we want to deploy&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;our API&lt;/li&gt;
&lt;li&gt;our custom micro-services&lt;/li&gt;
&lt;li&gt;various third-party services (iam, message queue, faas, assets)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In order to take care of that, we complete the previous configuration with the following snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;deployments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# for the custom service, we have regular k8s manifests&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-custom-service&lt;/span&gt;
  &lt;span class="na"&gt;kubectl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;manifests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my-custom-service/manifest.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# for the api, we have written a helm chart&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
  &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api/chart&lt;/span&gt;
    &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-repo/api&lt;/span&gt;
      &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-database&lt;/span&gt;
        &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
        &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-username&lt;/span&gt;
        &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-password&lt;/span&gt;
&lt;span class="c1"&gt;# the database service is a 3rd party&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
      &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://charts.bitnami.com/bitnami&lt;/span&gt;
    &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;postgresqlDatabase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-database&lt;/span&gt;
      &lt;span class="na"&gt;postgresqlUsername&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-username&lt;/span&gt;
      &lt;span class="na"&gt;postgresqlPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-password&lt;/span&gt;
&lt;span class="c1"&gt;# the iam service is a 3rd party&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;iam-service&lt;/span&gt;
  &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fusionauth/fusionauth&lt;/span&gt;
    &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
        &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;iam-user&lt;/span&gt;
        &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;iam-password&lt;/span&gt;
        &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;iam-database&lt;/span&gt;
          &lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root-db-username&lt;/span&gt;
          &lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root-db-password&lt;/span&gt;
      &lt;span class="na"&gt;search&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first deployment, &lt;code&gt;my-custom-service&lt;/code&gt;, amounts to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; my-custom-service/manifest.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second deployment, &lt;code&gt;api&lt;/code&gt;, is a regular helm installation. Instead of writing our own helm chart, we could have used the built-in &lt;a href="https://devspace.sh/component-chart/docs/introduction" rel="noopener noreferrer"&gt;component charts&lt;/a&gt;, which offer a compromise between writing our own helm charts and keeping our Kubernetes resource configuration simple. With our current devspace configuration in place, we can start our development environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;devspace dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That command builds our Docker images and deploys our software to the &lt;code&gt;default&lt;/code&gt; namespace of our development Kubernetes cluster. We are now in a situation where we can develop our code on our development machine and push it to our development Kubernetes cluster. With either &lt;a href="https://devspace.sh/cli/docs/configuration/development/file-synchronization" rel="noopener noreferrer"&gt;hot reloading&lt;/a&gt; or &lt;a href="https://devspace.sh/cli/docs/configuration/development/auto-reloading" rel="noopener noreferrer"&gt;auto-reloading&lt;/a&gt;, we can even fix our code and have the changes automatically propagated to our cluster.&lt;/p&gt;
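&lt;p&gt;For instance, file synchronization for our custom service can be configured with a &lt;code&gt;dev&lt;/code&gt; section along these lines (paths and excludes are illustrative; see the file-synchronization docs for all options):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;dev:
  sync:
  # keep local sources and the running container in sync
  - imageName: my-custom-service
    localSubPath: ./my-custom-service
    containerPath: /app
    excludePaths:
    - node_modules/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;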

&lt;h2&gt;
  
  
  Deploy to multiple environments
&lt;/h2&gt;

&lt;p&gt;Now we have a setup that works for development, and we are not far from our staging environment setup. First, our Docker images need to be tagged following the pattern &lt;code&gt;&amp;lt;my-repo&amp;gt;/&amp;lt;my-service&amp;gt;:staging-&amp;lt;commit-short-sha&amp;gt;&lt;/code&gt;. Second, our staging environment relies on external database and IAM services. Consequently, we don't want to deploy those on staging, and we need to adapt the services that depend on them. In devspace, we can define &lt;a href="https://devspace.sh/cli/docs/configuration/profiles/basics" rel="noopener noreferrer"&gt;profiles&lt;/a&gt;. Until now, our configuration has had no reference to any profile; it therefore constitutes the &lt;code&gt;development&lt;/code&gt; profile. We can define a &lt;code&gt;staging&lt;/code&gt; profile, base it on the &lt;code&gt;development&lt;/code&gt; profile, and adapt it as just described. To do that, let's add the following configuration to our &lt;code&gt;devspace.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;profiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;staging&lt;/span&gt;
  &lt;span class="na"&gt;patches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# images -&amp;gt; adapt tag&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/images/0=${DEVSPACE_RANDOM}&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;staging-${DEVSPACE_GIT_COMMIT}&lt;/span&gt;
  &lt;span class="c1"&gt;# postgres -&amp;gt; remove, we have an external database&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;remove&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/deployments/name=postgres&lt;/span&gt;
  &lt;span class="c1"&gt;# iam service -&amp;gt; remove, we have an external iam service&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;remove&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/deployments/name=iam-service&lt;/span&gt;
  &lt;span class="c1"&gt;# api &lt;/span&gt;
  &lt;span class="c1"&gt;# -&amp;gt; we need an ingress&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/deployments/name=api/helm/values/ingress&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-cert&lt;/span&gt;
        &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-staging.my-staging-domain.com&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
      &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-tls&lt;/span&gt;
        &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;api-staging.my-staging-domain.com&lt;/span&gt;
  &lt;span class="c1"&gt;# -&amp;gt; we need up-to-date database accesses&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/deployments/name=api/helm/values/postgres&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-external-database&lt;/span&gt;
        &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-external-database-hostname&lt;/span&gt;
        &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-external-username&lt;/span&gt;
        &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-external-password&lt;/span&gt;
  &lt;span class="c1"&gt;# my-custom-service -&amp;gt; nothing to do&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can of course follow the same philosophy coupled with the concept of &lt;a href="https://devspace.sh/cli/docs/configuration/profiles/parents" rel="noopener noreferrer"&gt;parent profiles&lt;/a&gt; to define our &lt;code&gt;production&lt;/code&gt; profile. Then, building and deploying to staging or production is as simple as&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;devspace deploy -p staging&lt;/span&gt;
&lt;span class="s"&gt;devspace deploy -p production&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
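&lt;p&gt;Based on the staging profile above, a &lt;code&gt;production&lt;/code&gt; profile might look roughly like the following sketch (the patch paths mirror the staging ones; the domain is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;profiles:
- name: production
  parent: staging
  patches:
  # images -&gt; tag with the production prefix instead
  - op: replace
    path: /images/0=staging-${DEVSPACE_GIT_COMMIT}
    value:
    - production-${DEVSPACE_GIT_COMMIT}
  # api -&gt; serve under the production domain
  - op: replace
    path: /deployments/name=api/helm/values/ingress/hosts
    value:
    - host: api.my-production-domain.com
      paths:
      - /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;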



&lt;p&gt;Obviously, remotely debugging those profiles is also possible. &lt;/p&gt;

&lt;h2&gt;
  
  
  We've only scratched the surface...
&lt;/h2&gt;

&lt;p&gt;Many more features are available, like custom command definitions, port (reverse) forwarding, file synchronization, container log streaming, etc., which you can read about &lt;a href="https://devspace.sh/cli/docs/configuration/development/basics" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Used wisely in CI/CD pipelines, devspace can drastically simplify the way you release your software. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devspace</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
