<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ahmed Atef</title>
    <description>The latest articles on Forem by Ahmed Atef (@ahmedat71538826).</description>
    <link>https://forem.com/ahmedat71538826</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F208582%2Feeacaab7-706f-4fe8-9ce8-b827e964e3a5.jpg</url>
      <title>Forem: Ahmed Atef</title>
      <link>https://forem.com/ahmedat71538826</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ahmedat71538826"/>
    <language>en</language>
    <item>
      <title>The Best Kubernetes Tutorials</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Mon, 30 Dec 2019 14:13:32 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/the-best-kubernetes-tutorials-380h</link>
      <guid>https://forem.com/ahmedat71538826/the-best-kubernetes-tutorials-380h</guid>
      <description>&lt;p&gt;We have been looking for the best Kubernetes tutorials out there and thought of sharing some of what we found interesting to get started with Kubernetes.&lt;/p&gt;

&lt;p&gt;The Official Kubernetes.io Tutorials&lt;br&gt;
This is more of a collection of the existing content on Kubernetes.io. It focuses on introducing the general concepts and constructs of Kubernetes, but it doesn’t provide lessons that build upon each other. Covered topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Basics.&lt;/li&gt;
&lt;li&gt;Configuring Kubernetes.&lt;/li&gt;
&lt;li&gt;Stateless Applications.&lt;/li&gt;
&lt;li&gt;Stateful Applications.&lt;/li&gt;
&lt;li&gt;CI/CD Pipeline.&lt;/li&gt;
&lt;li&gt;Managing Kubernetes Clusters.&lt;/li&gt;
&lt;li&gt;Services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DigitalOcean Tutorials&lt;br&gt;
This is a collection of articles that are nicely written and well organized. They sometimes focus on running Kubernetes on top of DigitalOcean, but you are still going to learn a lot of Kubernetes basics that apply to any other infrastructure. Some of the notable topics are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Introduction to Kubernetes&lt;/li&gt;
&lt;li&gt;An introduction to Kubernetes DNS Services&lt;/li&gt;
&lt;li&gt;An introduction to Helm, the package manager for Kubernetes&lt;/li&gt;
&lt;li&gt;Modernizing Applications for Kubernetes&lt;/li&gt;
&lt;li&gt;Building Optimized Containers for Kubernetes&lt;/li&gt;
&lt;li&gt;Kubernetes Networking Under the Hood&lt;/li&gt;
&lt;li&gt;Architecting Applications for Kubernetes&lt;/li&gt;
&lt;li&gt;Building Blocks for Doing CI/CD with Kubernetes&lt;/li&gt;
&lt;li&gt;How to Back Up and Restore a Kubernetes Cluster on DigitalOcean Using Heptio Ark&lt;/li&gt;
&lt;li&gt;How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes&lt;/li&gt;
&lt;li&gt;How to Inspect Kubernetes Networking&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>aws</category>
      <category>gcp</category>
    </item>
    <item>
      <title>Defining Cloud-Native Apps (And Why You Should Care)</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Wed, 18 Dec 2019 15:02:33 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/defining-cloud-native-apps-and-why-you-should-care-36n</link>
      <guid>https://forem.com/ahmedat71538826/defining-cloud-native-apps-and-why-you-should-care-36n</guid>
<description>&lt;p&gt;We all know what Cloud Computing means, but what about “native”?&lt;/p&gt;

&lt;p&gt;According to Merriam-Webster, “native” can be defined as “inborn, innate”. So, cloud-native apps can roughly be identified as software that was born in the cloud; applications that were designed from the very beginning to live on the cloud. But this raises an expected question: what does a cloud-native application do differently from a traditional, non-cloud-native one? To answer this question, you need to be aware that running a traditional application on infrastructure that you don’t own is risky.&lt;/p&gt;

&lt;p&gt;Why Is Running Non-Cloud-Native Applications On The Cloud Risky?&lt;br&gt;
By not “owning” the infrastructure, we mean you don’t have access to the data centers where the machines are hosted, you cannot decide which hardware your application is physically using, whether there are hardware issues and how they are being managed, and so on. The cloud provider does all the heavy lifting for you with a promise that your application will remain online even if an outage occurs on the provider’s side. This promise is formally referred to as a Service Level Agreement (SLA). With an SLA asserting 99.95% availability, the provider guarantees that your application will be down due to an outage on its side no more than 0.05% of the time. Translating that percentage into an actual number reveals that you can expect your business to be offline for as much as 4 hours and 22 minutes per year.&lt;/p&gt;

&lt;p&gt;If your application is mission-critical, the above may entail thousands of dollars in losses, a harmed company reputation and, in extreme cases, lawsuits raised against you. It seems that 99.95% is not so relieving a percentage after all. You might think it’s all the cloud provider’s responsibility, and that they should take more measures to give you higher availability levels. Experiencing unplanned downtime due to cloud infrastructure issues is not your fault. Or is it?&lt;/p&gt;

&lt;p&gt;Let’s see how Netflix was able to survive a major outage that occurred on AWS (Amazon Web Services), Netflix’s cloud provider. &lt;br&gt;
To learn more about cloud-native applications, visit: &lt;a href="https://www.magalix.com/blog/defining-cloud-native-apps-and-why-you-should-care"&gt;https://www.magalix.com/blog/defining-cloud-native-apps-and-why-you-should-care&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>native</category>
      <category>docker</category>
    </item>
    <item>
      <title>Kubernetes Automatic Scaling</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Tue, 17 Dec 2019 13:38:31 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/kubernetes-automatic-scaling-1onj</link>
      <guid>https://forem.com/ahmedat71538826/kubernetes-automatic-scaling-1onj</guid>
<description>&lt;p&gt;What is Scaling?&lt;br&gt;
Scaling is the practice of adapting your infrastructure to new load conditions. If you have more load, you scale up to enable the environment to respond swiftly and avoid crashed nodes. When things cool down and there isn’t much load, you scale down to optimize your costs. Scaling can be thought of in two ways:&lt;/p&gt;

&lt;p&gt;Vertical scaling: this is when you increase your resources. For example, more memory, more CPU cores, faster disks, etc.&lt;br&gt;
Horizontal scaling: this is when you add more instances with the same hardware specs to the environment. For example, a web application can have two instances at normal times and four at busy ones.&lt;br&gt;
Notice that, depending on your scenario, you can use either or both approaches.&lt;/p&gt;
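
&lt;p&gt;In Kubernetes terms, the two approaches map to different fields of a Deployment. A minimal sketch, with illustrative names and values:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 4               # horizontal scaling: more identical instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        resources:
          requests:
            cpu: 250m       # vertical scaling: raise these values
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
```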

&lt;p&gt;However, sometimes the problem is when to scale. Traditionally, how many resources the cluster should have, or how many nodes should be spawned, were design-time decisions reached through lots of trial and error. Once the application was launched, a human operator would watch the different metrics, particularly the CPU, to decide whether a scaling action was required. With the advent of cloud computing, scaling became as easy as a mouse click or a command, but it still had to be done manually. Kubernetes is capable of automatically scaling up or down based on CPU utilization as well as other custom application metrics that you can define. In this article, we discuss how you can optimize your application for autoscaling using the Horizontal Pod Autoscaler, and how you can use Kubernetes on a cloud provider to increase the number of worker nodes when necessary.&lt;/p&gt;

&lt;p&gt;How Horizontal Pod Autoscaling (HPA) Works&lt;br&gt;
Controllers like Deployments and ReplicaSets allow you to have more than one replica of the Pods they manage. This number can be managed automatically by the horizontal pod autoscaler controller, which you enable through the HorizontalPodAutoscaler resource. Like other controllers, the HPA periodically scans the Pod metrics and the current number of replicas. If more Pods are needed, it increases the number of replicas on the target controller (Deployment, ReplicaSet, or StatefulSet). Let’s discuss this operation in a little more detail.&lt;/p&gt;
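
&lt;p&gt;A minimal HorizontalPodAutoscaler definition, assuming an existing Deployment named web (the name and thresholds are illustrative):&lt;/p&gt;

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # illustrative name
spec:
  scaleTargetRef:          # the controller whose replica count the HPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # add Pods when average CPU exceeds 70%
```

&lt;p&gt;With this in place, the autoscaler keeps between 2 and 10 replicas, adding Pods whenever average CPU utilization across them exceeds 70%.&lt;/p&gt;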

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>autoscaling</category>
    </item>
    <item>
      <title>Kubernetes 1.17 What's New?</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Sun, 08 Dec 2019 15:02:33 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/kubernetes-1-17-what-s-new-2p5j</link>
      <guid>https://forem.com/ahmedat71538826/kubernetes-1-17-what-s-new-2p5j</guid>
      <description>&lt;p&gt;The newest version of Kubernetes is about to get released. The question is what to expect from version 1.17. In this article, we have a brief overview of some of what Kubernetes 1.17 brings with it.&lt;/p&gt;

&lt;p&gt;Feature 1053: Structured Output For The Kubeadm Command&lt;br&gt;
The kubeadm tool is one of the ways you can set up a Kubernetes cluster on your own. Some higher-level tools, such as Terraform, may also use kubeadm behind the scenes. Sometimes, those tools need to parse and process the output produced by the kubeadm command, and any slight change to that output may break the chain. This feature allows kubeadm to generate structured output that can be consumed by other tools. For example, appending -o json to a supported kubeadm command produces the output in JSON format. This feature is in the alpha stage.&lt;/p&gt;

&lt;p&gt;Feature 382: Allow Nodes To Be Tainted On Condition&lt;br&gt;
This feature has been in Kubernetes since version 1.12. In this release, it finally graduates to the stable stage. The feature basically allows the node controller to taint a node based on some predefined conditions that it observes. As usual, the user can opt to ignore those taints by adding the appropriate tolerations to the Pods.&lt;/p&gt;

&lt;p&gt;Feature 548: The kube-scheduler Is Responsible For Scheduling DaemonSet Pods&lt;br&gt;
Another feature finding its way to stable. Like #382, this feature has been in Kubernetes since version 1.12, but in earlier stages of development. Through this change, DaemonSet Pods are scheduled by the kube-scheduler just like other Pods, instead of being scheduled by the DaemonSet controller. The advantage is that DaemonSet Pods are treated the same way as other Pods, honoring pod priority and preemption.&lt;/p&gt;

&lt;p&gt;Feature 563: IPv6 Support&lt;br&gt;
Now you can assign both IPv4 and IPv6 addresses to Pods. This feature is in the alpha stage and is under heavy development, so expect a lot of changes in this and upcoming releases.&lt;/p&gt;

&lt;p&gt;Feature 980: Ensure That Service LoadBalancers Are Deleted When Their Parent Services Are&lt;br&gt;
By default, when a Service of type LoadBalancer is deleted, the underlying LoadBalancer resource should be deleted as well. However, in some cases the LoadBalancer was not deleted after the Service was destroyed. This feature ensures that the LoadBalancer is removed when the Service is deleted; the deletion process is blocked until the LoadBalancer is totally removed.&lt;/p&gt;

&lt;p&gt;Feature 177: Support For Volume Snapshots&lt;br&gt;
This feature has been in Kubernetes since 1.12. In this release, it graduates to beta. You can use the VolumeSnapshot and VolumeSnapshotContent resources to create and use volume snapshots.&lt;/p&gt;

&lt;p&gt;In this article, we discussed some of the most notable features of Kubernetes 1.17.&lt;br&gt;
&lt;a href="https://www.magalix.com/blog/kubernetes-1.17-whats-new"&gt;https://www.magalix.com/blog/kubernetes-1.17-whats-new&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>release</category>
    </item>
    <item>
      <title>Extending the Kubernetes Controller</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Thu, 05 Dec 2019 13:07:10 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/extending-the-kubernetes-controller-1dbj</link>
      <guid>https://forem.com/ahmedat71538826/extending-the-kubernetes-controller-1dbj</guid>
<description>&lt;p&gt;Kubernetes Controllers Overview&lt;br&gt;
At the core of Kubernetes itself, there is a large set of controllers. A controller ensures that a specific resource is (and remains) at the desired state dictated by a declared definition. If a resource deviates from the desired state, the controller is triggered to take the necessary actions to bring the resource back to where it should be. But how do controllers “know” that a change happened? For example, when you scale up a Deployment, you actually send a request to the API server with the new desired configuration. The API server in return publishes the change to all its event subscribers (any component that listens for changes in the API server). Thus, the Deployment controller creates one or more Pods to conform to the new definition. A new Pod creation is, in itself, a new change that the API server also broadcasts to the event listeners. So, if there are any actions that should get triggered on new Pod creation, they kick in automatically. Notice that Kubernetes uses the declarative programming methodology, not the imperative one. This means that the API server only publishes the new definition; it does not instruct the controller or any event listener about how it should act. The implementation is left to the controller.&lt;/p&gt;

&lt;p&gt;While native Kubernetes controllers like Deployments, StatefulSets, Services, Jobs, etc. are enough on their own to handle most application needs, sometimes you want to implement your own custom controller. Kubernetes allows you to extend and build upon the existing functionality without having to break or change Kubernetes’s source code. In this article, we discuss how we can do this and the best practices.&lt;/p&gt;

&lt;p&gt;Custom Controller Or Operator?&lt;br&gt;
From the early days of Kubernetes, controllers were thought of as the way developers can extend Kubernetes functionality by providing new behavior. Because of the many phases that extended controllers have passed through, we can roughly classify them into two main categories:&lt;/p&gt;

&lt;p&gt;Custom Controllers: these are controllers that act upon the standard Kubernetes resources. They are used to enhance the platform and add new features.&lt;br&gt;
Operators: at their heart, they are custom controllers. However, instead of using the standard K8s resources, they act upon custom resource definitions (CRDs), resources that were created specifically for the operator. Together, an operator and its CRD can handle complex business logic that a native or an extended controller cannot.&lt;br&gt;
The above classification is only used to differentiate between the concepts that you need to understand in each model. In the end, the idea stays the same: we are extending Kubernetes by creating a new controller. In this article, we are interested in the first type: custom controllers.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>The ConfigMap Pattern</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Mon, 25 Nov 2019 14:23:20 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/the-configmap-pattern-3215</link>
      <guid>https://forem.com/ahmedat71538826/the-configmap-pattern-3215</guid>
<description>&lt;p&gt;In one of our articles, we discussed the environment variable pattern: how to define environment variables, where to use them, and their potential drawbacks. We briefly touched on the ConfigMap and Secret resources as a means of injecting external configuration settings into the Pod (and its running containers). In this article, we take a deeper look at ConfigMap and Secret usage patterns and best practices.&lt;/p&gt;

&lt;p&gt;What’s The Problem With Using Environment Variables?&lt;br&gt;
There are several considerations that you must take into account when deciding to use environment variables for all your configuration needs:&lt;/p&gt;

&lt;p&gt;They are honored in multiple layers. For example, an environment variable that’s set in the bash_profile of an image will be overridden if the same variable name is set in the Dockerfile. Even further, the same variable can be overridden in the Pod definition. Such behavior can cause hard-to-detect bugs when you’re not certain that your environment variable won’t get overridden by mistake.&lt;br&gt;
Environment variables cannot be changed once the application is launched. Modifying an environment variable requires restarting the container to apply the new data; depending on your deployment pattern, this may or may not be desired.&lt;br&gt;
More often than not, the external configuration is not limited to a bunch of variables; it spans whole configuration files. Think of php.ini, config.json, or package.json. Those files need to be made available to the running container, and the application expects to find its configuration file in a specific location rather than as a set of environment variables.&lt;br&gt;
Using environment variables for storing sensitive data is a security risk.&lt;/p&gt;
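
&lt;p&gt;As a sketch of the file-based alternative, a ConfigMap can carry a whole config.json and be mounted into the container as a file (all names here are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # illustrative name
data:
  config.json: |
    {
      "logLevel": "info"
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.17
    volumeMounts:
    - name: config
      mountPath: /etc/app   # the app reads /etc/app/config.json
  volumes:
  - name: config
    configMap:
      name: app-config
```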

&lt;p&gt;For more information: &lt;a href="https://www.magalix.com/blog/the-configmap-pattern"&gt;https://www.magalix.com/blog/the-configmap-pattern&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>pattern</category>
    </item>
    <item>
      <title>Kubernetes Service Catalog 101</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Thu, 21 Nov 2019 23:17:12 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/kubernetes-service-catalog-101-5g15</link>
      <guid>https://forem.com/ahmedat71538826/kubernetes-service-catalog-101-5g15</guid>
<description>&lt;p&gt;What is a Service Catalog, and why might you need to use it?&lt;br&gt;
As a Kubernetes user/operator, you’ve dealt with a lot of resources to provision different components of your infrastructure. You’ve used resources like Services, ConfigMaps, and Secrets. But sometimes you may need to use an external service like the ones typically offered by a cloud provider. Take AWS for example; it provides the RDS service, an abstraction layer that lets you gain access to a relational database (MySQL, Postgres, etc.) as a service. When you want to integrate RDS into your existing Kubernetes cluster, you need to be able to deal with it the same way you deal with any other Kubernetes resource. Take authentication as an example. If you want to give your cluster applications access to the RDS database, you will need to do a lot of manual work (and workarounds) to make things work as expected.&lt;/p&gt;

&lt;p&gt;To address this need, Kubernetes was extended to include the Kubernetes Service Catalog.&lt;/p&gt;

&lt;p&gt;What is the Kubernetes Service Catalog?&lt;br&gt;
In a nutshell, the Kubernetes Service Catalog is an extension API that enables applications running inside the cluster to access applications and services provided by external sources, typically the cloud provider. Prominent examples of this pattern include provisioning databases, message-queuing applications, and object storage services, among others. Gaining systematic access to external resources is possible when the client consumes service brokers that implement the Open Service Broker API specification.&lt;br&gt;
&lt;a href="https://www.magalix.com/blog/kubernetes-service-catalog-101"&gt;https://www.magalix.com/blog/kubernetes-service-catalog-101&lt;/a&gt;&lt;/p&gt;
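
&lt;p&gt;As a rough sketch of the pattern, provisioning and binding an external service goes through ServiceInstance and ServiceBinding resources. The class, plan, and names below are illustrative and depend on what your broker actually offers:&lt;/p&gt;

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-database                       # illustrative names throughout
  namespace: default
spec:
  clusterServiceClassExternalName: example-db   # a class advertised by the broker
  clusterServicePlanExternalName: small
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-database-binding
  namespace: default
spec:
  instanceRef:
    name: my-database
  secretName: my-database-credentials     # credentials land in this Secret
```

&lt;p&gt;Once the binding is ready, the application consumes the generated Secret like any other, without knowing anything about the external provider.&lt;/p&gt;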

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Kubernetes RBAC 101</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Mon, 18 Nov 2019 12:58:33 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/kubernetes-rbac-101-5h5a</link>
      <guid>https://forem.com/ahmedat71538826/kubernetes-rbac-101-5h5a</guid>
<description>&lt;p&gt;Role-Based Access Control (RBAC) Overview&lt;br&gt;
RBAC is a security design that restricts access to valuable resources based on the role the user holds, hence the name role-based. To understand the importance of having RBAC policies in place, let’s consider a system that doesn’t use them. Let’s say that you have an HR management solution, but the only security measure used is that users must authenticate themselves with a username and a password. Having provided their credentials, users gain full access to every module in the system (recruitment, training, staff performance, salaries, etc.). A slightly more secure system will differentiate between regular user access and “admin” access, with the latter providing potentially destructive privileges. For example, ordinary users cannot delete a module from the system, whereas an administrator can. But still, users without admin access can read and modify any module’s data regardless of whether their current job requires it.&lt;/p&gt;

&lt;p&gt;If you have worked as a Linux administrator for any length of time, you will appreciate the importance of having a security system that implements a matrix of access and authority. In the old days of Linux and UNIX, you could either be a “normal” user with minimal access to the system resources, or you could have “root” access. Root access gives you such complete control over the machine that you can accidentally bring the whole system down. Needless to say, if an intruder gains access to this root account, your entire system is at high risk. Accordingly, RBAC systems were introduced.&lt;/p&gt;

&lt;p&gt;In a system that uses RBAC, there is minimal mention of the “superuser” or the administrator who has access to everything. Instead, there’s more reference to the access level, the role, and the privilege. Even administrators can be categorized based on their job requirements. So, backup administrators should have full access to the tools they use to do full, incremental, and differential backups, but they shouldn’t be able to stop the web server or change the system’s date and time.&lt;br&gt;
&lt;a href="https://www.magalix.com/blog/kubernetes-rbac-101"&gt;https://www.magalix.com/blog/kubernetes-rbac-101&lt;/a&gt;&lt;/p&gt;
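
&lt;p&gt;As a sketch of how the backup-administrator example could look in Kubernetes RBAC terms (the role, resources, and user are illustrative):&lt;/p&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: backup-operator           # illustrative role name
rules:
- apiGroups: [""]
  resources: ["pods", "persistentvolumeclaims"]
  verbs: ["get", "list", "create"]   # enough for backups, nothing destructive
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: backup-operator-binding
subjects:
- kind: User
  name: jane                      # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: backup-operator
  apiGroup: rbac.authorization.k8s.io
```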

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>rbac</category>
    </item>
    <item>
      <title>Kubernetes Secrets 101</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Wed, 13 Nov 2019 12:33:27 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/kubernetes-secrets-101-4h86</link>
      <guid>https://forem.com/ahmedat71538826/kubernetes-secrets-101-4h86</guid>
<description>&lt;p&gt;What is a Kubernetes Secret?&lt;br&gt;
There are many times when a Kubernetes Pod needs to use sensitive data. Think, for example, of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH keys.&lt;/li&gt;
&lt;li&gt;Database passwords.&lt;/li&gt;
&lt;li&gt;OAuth tokens.&lt;/li&gt;
&lt;li&gt;API keys.&lt;/li&gt;
&lt;li&gt;Image registry keys.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kubernetes is designed to have a declarative syntax. Object definitions are stored in YAML (or JSON) files and - typically - placed under version control. Adding confidential information to a version-controlled file (that anyone can view) is against any security best practice. For that reason, Kubernetes includes Secrets.&lt;/p&gt;

&lt;p&gt;A Secret is just another Kubernetes object that stores restricted data so that it can be used without being revealed. Kubernetes users can create Secrets, and the system itself also creates and uses them.&lt;/p&gt;

&lt;p&gt;Secrets are typically exposed to a Pod as files mounted through a volume. The kubelet also makes use of Secrets when it needs to pull an image from an image registry that requires authentication (for example, a private Docker Hub account, AWS ECR, or Google GCR). Additionally, Kubernetes makes use of Secrets internally to enable Pods to access and communicate with the API server; the system automatically manages API tokens through Secrets attached to the Pods.&lt;br&gt;
&lt;a href="https://www.magalix.com/blog/kubernetes-secrets-101"&gt;https://www.magalix.com/blog/kubernetes-secrets-101&lt;/a&gt;&lt;/p&gt;
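
&lt;p&gt;A minimal sketch of a Secret mounted into a Pod as files (names and values are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # illustrative name
type: Opaque
stringData:                   # plain-text here; stored base64-encoded
  username: app_user
  password: s3cr3t            # example value only
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.17
    volumeMounts:
    - name: creds
      mountPath: /etc/creds   # each Secret key becomes a file here
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials
```

&lt;p&gt;Inside the container, the application reads /etc/creds/username and /etc/creds/password without the values ever appearing in the Pod definition.&lt;/p&gt;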

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>How to Save Up To 80% on Google Kubernetes Engine Using Magalix KubeAdvisor</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Sun, 03 Nov 2019 16:10:35 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/how-to-save-up-to-80-on-google-kubernetes-engine-using-magalix-kubeadvisor-2c79</link>
      <guid>https://forem.com/ahmedat71538826/how-to-save-up-to-80-on-google-kubernetes-engine-using-magalix-kubeadvisor-2c79</guid>
      <description>&lt;p&gt;Google Cloud Platform (GCP) recently announced the beta launch of Google Cloud Recommenders, and the features and functionality are pretty exciting. The short of it is that with Recommenders in GCP, you can now:&lt;/p&gt;

&lt;p&gt;Automatically get analysis of usage patterns to help you determine whether resources and policies within Google Cloud are optimally configured&lt;br&gt;
Automatically detect overly permissive access policies and adjust them based on the access patterns of similar users in your organization&lt;br&gt;
Choose the optimal virtual machine size for your workload, because most GCP customers initially provision machines that are too small or too large&lt;br&gt;
This is all pretty awesome, but in actuality, this is old news.&lt;/p&gt;

&lt;p&gt;The Magalix Agent (which can be found in the Google Cloud Marketplace) has been doing this since day one. Just last week, with the release of KubeAdvisor, we began enabling developers and DevOps engineers to get continuously generated recommendations to save money on any cloud provider (including GCP), enact best practices for cluster configuration, and balance resource usage with application performance.&lt;/p&gt;

&lt;p&gt;KubeAdvisor helps you select the right VM type, the right capacity limits and resources, and can even suggest how to optimize your K8s clusters based on the billing model of your cloud provider.&lt;/p&gt;

&lt;p&gt;Just like Google Cloud, we’ve found that Kubernetes users are often way over- or under-provisioned and can save as much as 80% on their cloud costs by using KubeAdvisor. In this article, we’ll learn how KubeAdvisor works and how it can help save on cloud costs while optimizing app performance on Kubernetes.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Patterns : The Ambassador Pattern</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Thu, 31 Oct 2019 12:52:55 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/kubernetes-patterns-the-ambassador-pattern-46n6</link>
      <guid>https://forem.com/ahmedat71538826/kubernetes-patterns-the-ambassador-pattern-46n6</guid>
<description>&lt;p&gt;What is an Ambassador container?&lt;br&gt;
An Ambassador container is a sidecar container that is in charge of proxying connections from the application container to other services. However, while the Adapter container acts as a reverse proxy, the Ambassador container acts as a client proxy. You might be wondering: why do we need to proxy the application’s connection requests? Because we need to follow the separation of concerns principle. Each container should do its task and do it well. If there are other tasks that the application requires in order to work correctly, we can hand those tasks to a sidecar container.&lt;/p&gt;

&lt;p&gt;For example, almost all applications need a database connection at some phase. In a multi-environment setup, there would be a test database, a staging database, and a production database. When writing the Pod definition for their application’s container, developers must pay attention to which database they’ll be connecting to. A database connection string can be easily changed through an environment variable or a ConfigMap, but we could also use a sidecar that proxies DB connections to the appropriate server depending on where it runs. Developers needn’t change the connection string; they can leave the DB server at localhost as usual. When deployed to a different environment, the Ambassador container detects which environment it is running in (possibly through the Reflection pattern) and connects to the correct server.&lt;/p&gt;

&lt;p&gt;Another well-known use case for the Ambassador container is when your application needs to connect to a caching server like Memcached or Redis. Let’s use a Redis example scenario to demonstrate this pattern.&lt;/p&gt;
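
&lt;p&gt;A rough sketch of the Pod layout for such a scenario; the images below are hypothetical placeholders for an application and a Redis client proxy:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador     # illustrative names throughout
spec:
  containers:
  - name: app
    image: example/app:1.0      # the app simply connects to localhost:6379
  - name: redis-ambassador
    image: example/redis-proxy:1.0   # forwards localhost:6379 to the right Redis server
    ports:
    - containerPort: 6379
```

&lt;p&gt;Because containers in a Pod share the network namespace, the application reaches the Ambassador at localhost, and only the sidecar knows which Redis server to talk to in each environment.&lt;/p&gt;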

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>patterns</category>
    </item>
    <item>
      <title>Kubernetes Patterns : The Reflection Pattern</title>
      <dc:creator>Ahmed Atef</dc:creator>
      <pubDate>Wed, 23 Oct 2019 13:41:35 +0000</pubDate>
      <link>https://forem.com/ahmedat71538826/kubernetes-patterns-the-reflection-pattern-5dc4</link>
      <guid>https://forem.com/ahmedat71538826/kubernetes-patterns-the-reflection-pattern-5dc4</guid>
<description>&lt;p&gt;What is “Reflection”?&lt;br&gt;
Reflection is a concept that is available in most (if not all) programming languages. It simply refers to the ability of an object of some type to reveal important information about itself: for example, its name, its parent class, and any metadata that it happens to contain. In the cloud and DevOps arenas, the same concept holds. For example, if you are logged into an AWS EC2 instance, you can easily get a wealth of information about that particular instance (its reflection) by issuing a GET request to &lt;a href="http://169.254.169.254/latest/meta-data/"&gt;http://169.254.169.254/latest/meta-data/&lt;/a&gt; from within the instance itself.&lt;/p&gt;

&lt;p&gt;Why Do We Need an Object’s Reflection?&lt;br&gt;
An object here is a generic term referring to the unit of work. So, in a programming language, an object is an instance of a class; in your on-prem infrastructure, it may be a physical or virtual host; in a cloud environment, it is the instance; and in Kubernetes, it’s the Pod.&lt;/p&gt;

&lt;p&gt;In this article, we are interested in Kubernetes, so Pod and object may be used interchangeably.&lt;/p&gt;

&lt;p&gt;There are many use cases where you need the metadata of a Pod, especially if that Pod is part of a stateless application where Pods are dynamic by nature. Let’s see some possible scenarios:&lt;/p&gt;

&lt;p&gt;You need the IP address of the Pod to identify whether or not it was the source of suspicious traffic that was detected on your network.&lt;br&gt;
The application running inside the container needs to know the namespace in which the Pod is running, perhaps because it is programmed to behave differently depending on the environment where it is running, conveyed by the namespace.&lt;br&gt;
You need to know the current resource limit (CPU and memory) imposed on the container. You can further use this data to automatically adjust the heap size of a Java application when it starts, for example.&lt;/p&gt;
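
&lt;p&gt;In Kubernetes, this metadata is exposed through the Downward API. A minimal sketch injecting the Pod’s IP, namespace, and CPU limit as environment variables (names are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reflect               # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.17
    resources:
      limits:
        cpu: 500m
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: app
          resource: limits.cpu
```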

&lt;p&gt;For more information, visit: &lt;a href="https://www.magalix.com/blog/kubernetes-patterns-the-reflection-pattern"&gt;https://www.magalix.com/blog/kubernetes-patterns-the-reflection-pattern&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>patterns</category>
    </item>
  </channel>
</rss>
