<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: ATIXAG</title>
    <description>The latest articles on Forem by ATIXAG (@atixag).</description>
    <link>https://forem.com/atixag</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1284761%2F5f897bf8-1dd6-467c-ad2c-ec594328f246.jpg</url>
      <title>Forem: ATIXAG</title>
      <link>https://forem.com/atixag</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/atixag"/>
    <language>en</language>
    <item>
      <title>Ansible automation with AWX: an overview and how to get started</title>
      <dc:creator>ATIXAG</dc:creator>
      <pubDate>Fri, 10 Oct 2025 08:59:10 +0000</pubDate>
      <link>https://forem.com/atixag/ansible-automation-with-awx-an-overview-and-how-to-get-started-3m7n</link>
      <guid>https://forem.com/atixag/ansible-automation-with-awx-an-overview-and-how-to-get-started-3m7n</guid>
      <description>&lt;p&gt;&lt;strong&gt;Automation is a key pillar of modern IT strategies. AWX extends Ansible with an open-source platform that makes automation more efficient and transparent through an intuitive web interface, job scheduling, and role-based access control.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWX?
&lt;/h2&gt;

&lt;p&gt;AWX is a powerful automation platform that centralizes the planning, execution and management of Ansible automation tasks. Compared to using Ansible purely on the command line, AWX offers numerous additional functions such as a modern web user interface, job scheduling, Role-Based Access Control (RBAC), central management of credentials, support for containerized execution environments, central logging and options for high availability in production use.&lt;/p&gt;

&lt;p&gt;It serves as the upstream open-source project of Red Hat’s Ansible Automation Platform, making it a great choice for teams looking for a flexible, community-driven open-source alternative.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why choose AWX for automation?
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Who benefits from AWX?
&lt;/h2&gt;

&lt;p&gt;AWX is ideal for organizations where multiple users need to collaborate and share Ansible code in a structured and secure way. It’s especially useful for teams that prefer working with a graphical dashboard rather than relying solely on the command line, making it accessible to users who may not be fully confident with CLI tools. The centralized interface also provides valuable features like job scheduling, log storage, and Role-Based Access Control, which streamline automation tasks and improve visibility. However, for a single user who is already highly proficient with the Ansible CLI, AWX may offer limited added value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open Source and Community Support
&lt;/h2&gt;

&lt;p&gt;AWX is an open-source project, which means it is not only free to use but also constantly evolving, often at the bleeding edge of automation technology. Being open source gives users the unique opportunity to directly influence the direction of the project, whether through contributing code, reporting issues, or participating in discussions within the community. This collaborative environment ensures that AWX continues to meet the needs of a diverse user base, with ongoing support and valuable insights from its global community. However, it’s important to note that AWX is currently undergoing a significant code refactoring project, which has led to a temporary pause in new releases. While this may slow down new feature availability in the short term, the refactor is aimed at making AWX more scalable and maintainable for future improvements, so the long-term benefits will be substantial. Engaging with the community during this process can provide an opportunity to shape the future of the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits of AWX for Enterprise Automation Teams
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Comfortable user interface
&lt;/h2&gt;

&lt;p&gt;AWX comes with an intuitive web interface. It provides a central dashboard summarizing failed and successful jobs, so that users can assess the current health of their configuration management at a glance. For more detailed analysis, the web interface also stores the log output of all past jobs and can thus serve as a powerful configuration auditing tool. These features support automation workflows across teams and improve traceability.&lt;/p&gt;

&lt;p&gt;Compared to running Ansible from the command line, AWX requires far less technical knowledge to use. The web UI’s intuitiveness makes it possible to decouple operation from the development of Ansible code: one team can develop the Ansible code while another executes it comfortably within AWX.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schedules, Workflows and Surveys
&lt;/h2&gt;

&lt;p&gt;In AWX, features like schedules, workflows, and surveys provide a more user-friendly and streamlined approach compared to the command-line interface (CLI). While you could schedule jobs from the CLI via cron, the AWX UI offers a more intuitive method for setting up scheduled jobs, making it easier to automate tasks. Workflows in AWX enable users to concatenate multiple jobs into a sequence, providing greater flexibility and control over job execution. Administrators can insert approval steps within the workflow, ensuring that critical tasks are reviewed before proceeding, or add remediation steps to automatically address failures. Additionally, surveys in AWX allow users to gather input before running a job, ensuring the job is tailored to specific needs and reducing errors.&lt;/p&gt;
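
&lt;p&gt;As a small illustration, a survey in AWX is defined by a survey spec attached to a job template. The sketch below follows the general shape of an AWX survey spec; the question, variable name, and choices are made-up examples:&lt;/p&gt;

```json
{
  "name": "Deployment survey",
  "description": "Asked before each run (example)",
  "spec": [
    {
      "question_name": "Which environment should be targeted?",
      "variable": "target_env",
      "type": "multiplechoice",
      "choices": ["staging", "production"],
      "required": true
    }
  ]
}
```

&lt;p&gt;The answer is injected as the extra variable target_env when the job launches, so the same playbook can serve several environments without editing any code.&lt;/p&gt;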

&lt;h2&gt;
  
  
  High availability
&lt;/h2&gt;

&lt;p&gt;AWX offers a high-availability (HA) architecture that can be tailored to your specific needs. It must be deployed on a Kubernetes cluster. This setup simplifies tasks like backups, restores, and scaling, providing flexibility in managing resources as demand grows. For users who don’t require full HA, a single-node installation is a straightforward option, offering a simpler, less resource-intensive deployment. However, for more robust high-availability setups, AWX can be deployed across a production-ready Kubernetes cluster, with execution nodes distributed across different segments to ensure fault tolerance and reliability. The topology can be adjusted to match the desired level of availability, whether it is for smaller environments or large-scale enterprise use cases.&lt;/p&gt;
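
&lt;p&gt;To make this concrete, AWX is typically installed through the AWX Operator on Kubernetes. A minimal single-node sketch might look like the following (the version tag, instance name, and namespace are example values; check the AWX Operator documentation for current releases):&lt;/p&gt;

```yaml
# kustomization.yaml -- installs the AWX Operator
# (the ref below is an example version; pick a current release)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.19.1
  - awx-demo.yaml
namespace: awx
---
# awx-demo.yaml -- a minimal AWX instance exposed via NodePort
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  namespace: awx
spec:
  service_type: nodeport
```

&lt;p&gt;Applying this with kubectl apply -k . yields a simple single-node instance; the same custom resource can later be deployed on a production-grade cluster when high availability is required.&lt;/p&gt;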

&lt;h2&gt;
  
  
  Role-based access control
&lt;/h2&gt;

&lt;p&gt;Role-Based Access Control (RBAC) in AWX is designed to ensure that users have access only to the resources they need, enhancing security and minimizing the risk of unauthorized actions. The structure is organized around organizations, teams, and users, with each user being assigned specific roles that dictate their level of access. RBAC in AWX is highly granular, allowing for precise control over permissions such as read, write, and use access for various resources within the platform. This means you can define roles with specific capabilities, ensuring that users can only interact with resources that are relevant to their tasks. Additionally, AWX supports integration with LDAP, making it easier to manage and synchronize user roles and permissions with existing enterprise directory services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Credentials management
&lt;/h2&gt;

&lt;p&gt;AWX offers built-in credentials management, allowing you to securely store and manage sensitive information such as API keys, passwords, and certificates without relying on external tools like ansible-vault. Furthermore, AWX also supports integration with external secret management systems, such as HashiCorp Vault, providing flexibility for organizations with existing secret management practices. Role-Based Access Control (RBAC) extends to credentials as well, ensuring that only authorized users can access sensitive data. It is worth noting, though, that users with write and execute permissions on playbooks can effectively read the credentials, for example by modifying a playbook to print them, which is an important consideration when designing security policies. This makes AWX a powerful tool for securely managing credentials while providing flexibility for a range of use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to get started with AWX in your organization
&lt;/h2&gt;

&lt;p&gt;Introducing AWX into an organization starts with a solid foundation in Ansible itself, as fundamental knowledge of Ansible playbooks, roles, and modules is crucial for effectively leveraging AWX. To ensure smooth integration, it is essential that Ansible code is well-structured and stored in Git, making it easy to version, collaborate on, and scale as the automation needs grow. A great way to begin is by setting up a single-node AWX instance, which allows you to familiarize yourself with the platform before transitioning to a full-blown cluster as your requirements expand. This phased approach helps organizations evaluate AWX as an automation platform and align its use with long-term DevOps objectives. Additionally, understanding the users and roles concept in AWX is critical for effective Role-Based Access Control and ensuring that only the right people have access to the right resources. To support the successful deployment and adoption of AWX, we offer training courses and workshops that can guide your team through the process, helping them get up to speed with the platform and make the most of its powerful automation capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AWX Fits Into Modern IT Automation Strategies
&lt;/h2&gt;

&lt;p&gt;As organizations scale their digital operations, IT automation has become essential to maintaining agility, reducing human error, and improving time-to-value. AWX fits seamlessly into this shift by providing a flexible, open-source automation platform that supports collaborative, repeatable infrastructure management.&lt;/p&gt;

&lt;p&gt;By building upon Ansible, AWX enables teams to visualize and manage automation tasks through an intuitive web interface, bridging the gap between traditional DevOps engineers and IT operations teams. With features like job scheduling, credential management, RBAC, and support for high availability, AWX helps organizations standardize workflows, ensure compliance, and accelerate deployment cycles.&lt;/p&gt;

&lt;p&gt;AWX also aligns with modern CI/CD pipelines and hybrid cloud strategies. Its container-native architecture allows it to run effectively on Kubernetes, making it ideal for cloud-native environments. For organizations using GitOps workflows, AWX’s integration with source control and project-based configurations makes it a powerful tool for managing infrastructure as code across teams and environments.&lt;/p&gt;

&lt;p&gt;In short, AWX empowers businesses to scale their automation strategies without locking into proprietary platforms—making it a cost-effective, community-driven, and future-ready solution for modern enterprise IT.&lt;/p&gt;

</description>
      <category>awx</category>
      <category>ansible</category>
      <category>beginners</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Managing Large Debian Repositories with Pulp</title>
      <dc:creator>ATIXAG</dc:creator>
      <pubDate>Fri, 22 Nov 2024 07:46:39 +0000</pubDate>
      <link>https://forem.com/atixag/managing-large-debian-repositories-with-pulp-1jmo</link>
      <guid>https://forem.com/atixag/managing-large-debian-repositories-with-pulp-1jmo</guid>
      <description>&lt;p&gt;Pulp is a free, open-source platform for software repository management. You can fetch, upload, and distribute content from various sources. Repository versioning makes sure that nothing is lost as you can always roll back to previous versions. The pulp_deb plugin adds APT repository support.&lt;/p&gt;

&lt;p&gt;There is such a thing as Pulp Debian support, and it has been around for a while. It was expanded by ATIX for use with Katello a few years ago. It works great for small to medium-sized repositories. However, performance is not ideal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenge
&lt;/h2&gt;

&lt;p&gt;Around 2019, ATIX consultants wanted to synchronize all of Debian Stretch and Ubuntu Xenial for a demo. Unfortunately, the sync would run for about five hours, only to fail with a “Cannot allocate memory” error. What was going on?&lt;/p&gt;

&lt;p&gt;To answer this question, they needed to take a closer look at the pulp_deb implementation. Code is organized into several steps. The implementation relies heavily on the python-debpkgr dependency, which in turn relies on deb822 from the python-debian library. python-debpkgr is mainly designed to take a pile of Debian packages and organize them into an APT repository. The structure of Debian repositories looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/dists/ stretch / Release
/dists/ stretch /main/binary -amd64/ Packages
/dists/ stretch / contrib /binary -amd64/ Packages
/dists/ stretch /non -free/binary -amd64/ Packages
/pool/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During a sync, we have the “MetadataStep,” which is provided with a list of releases, components, and packages (with metadata) from the Mongo DB. It then applies the following logic: for every combination of architecture, component, and release, a list of packages is generated. These lists contain the paths to the actual .deb package files on the disk. Finally, each list is passed to a debpkgr call as an argument.&lt;/p&gt;

&lt;p&gt;debpkgr is mainly designed to take a pile of Debian packages and turn them into a repo. So, it does just that: each .deb file is accessed on disk to extract the metadata debpkgr needs. Due to the way the package lists overlap for different architectures, many of these .deb files are actually parsed multiple times.&lt;/p&gt;

&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;

&lt;p&gt;Our experts’ first thought was: maybe there’s a quick-and-dirty fix? However, they also considered a complete redesign of the way debpkgr works. Another alternative might be dropping debpkgr (from the MetadataStep) and implementing everything themselves.&lt;/p&gt;

&lt;p&gt;The basic idea was to exclusively use information from the Mongo DB to create the repository structure. The old implementation already had to parse the meta data from the Mongo DB in order to generate the lists that were then passed to debpkgr. This essentially remained unchanged. Our experts had to create the desired directory structure themselves. They also had to build the symlinks to the actual .deb files themselves. They then needed the ability to write Packages and Release files. As one always does, they happened upon a few stumbling blocks:&lt;/p&gt;
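
&lt;p&gt;To illustrate the idea, here is a heavily simplified sketch (not actual pulp_deb code; all names are invented) of how Packages indices can be generated purely from database metadata, without ever opening a .deb file on disk:&lt;/p&gt;

```python
# Illustrative sketch only -- hypothetical names, not the real pulp_deb API.
from collections import defaultdict

def build_package_indices(packages):
    """Group package metadata records by (release, component, architecture).

    Each record already contains everything an APT index needs, so no
    .deb file ever has to be parsed again.
    """
    indices = defaultdict(list)
    for pkg in packages:
        for release in pkg["releases"]:
            key = (release, pkg["component"], pkg["architecture"])
            indices[key].append(pkg)
    return indices

def render_packages_file(records):
    """Render one Packages file in Debian control format."""
    paragraphs = []
    for pkg in records:
        paragraphs.append(
            "Package: {name}\nVersion: {version}\n"
            "Architecture: {architecture}\nFilename: {filename}\n"
            "SHA256: {sha256}\n".format(**pkg)
        )
    return "\n".join(paragraphs)

# Example: one package appearing in a single release/component.
pkgs = [{
    "name": "hello", "version": "2.10-2", "architecture": "amd64",
    "component": "main", "releases": ["stretch"],
    "filename": "pool/main/h/hello/hello_2.10-2_amd64.deb",
    "sha256": "0" * 64,
}]
indices = build_package_indices(pkgs)
print(sorted(indices))  # [('stretch', 'main', 'amd64')]
text = render_packages_file(indices[("stretch", "main", "amd64")])
```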

&lt;p&gt;debpkgr generates md5sum, sha1, and sha256 hashes for the metadata, whereas the existing database model only stored sha256 hashes. Actually using the metadata from the database also revealed a bug: user-defined metadata fields were not stored in the existing database model.&lt;/p&gt;

&lt;p&gt;Our consultants came up with the following results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Two major pull requests:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;1. Ensure the db is used consistently by quba42 · Pull Request #61 · pulp/pulp_deb&lt;/p&gt;

&lt;p&gt;2. MetadataStep performance by quba42 · Pull Request #57 · pulp/pulp_deb&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An end to our memory problems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Syncs for medium-sized repositories (1500 packages) that are more than twice as fast&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Syncing Ubuntu Xenial (main, restricted, universe, multiverse) for amd64 (53,837 packages) within 3h36m on the test system&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What did everyone learn? It is important to know your tools! Furthermore, you have to take your time to plan the architecture and gain the required domain knowledge.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>debian</category>
      <category>tutorial</category>
      <category>programming</category>
    </item>
    <item>
      <title>The future of software architecture: focus on event-driven architecture</title>
      <dc:creator>ATIXAG</dc:creator>
      <pubDate>Wed, 20 Nov 2024 08:17:39 +0000</pubDate>
      <link>https://forem.com/atixag/the-future-of-software-architecture-focus-on-event-driven-architecture-5amf</link>
      <guid>https://forem.com/atixag/the-future-of-software-architecture-focus-on-event-driven-architecture-5amf</guid>
      <description>&lt;p&gt;Software architecture plays a decisive role in the design of modern applications. Event-Driven Architecture (EDA) is a promising approach. This article highlights the advantages and functionality of EDA. You will also get a closer look at how it reacts to events in real time and why it is an attractive option for developing flexible, scalable and responsive systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The importance of software architecture
&lt;/h2&gt;


&lt;p&gt;The architecture on which an application is based plays a decisive role in the design of flexible and scalable systems. One of the most modern and promising architectures is Event-Driven Architecture, EDA. The aim of this approach is to be able to react to events or changes in the system in real time. This also provides a significant performance bonus compared to conventional REST APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an event in the context of event-driven architecture?
&lt;/h2&gt;

&lt;p&gt;An event describes a specific action or a change to the state of the system. This can be, for example, the creation of a new user, the completion of a transaction or the updating of an inventory. These events are recorded and transported to the respective target in real time. The target component reacts to the incoming event, for example by updating a database or sending a notification. It can therefore be said that events are the central triggers for actions and interactions in the EDA.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does event-driven architecture mean?
&lt;/h2&gt;

&lt;p&gt;EDA is a software design paradigm that focuses on the capture and processing of events. In contrast to traditional, process-driven architectures, in which the process flow is defined by fixed procedures and processes, EDA is based on the occurrence of and reaction to events. This architecture enables systems to react to changes in real time and respond flexibly to new requirements.&lt;/p&gt;

&lt;p&gt;In principle, events are published by producers in a broker in EDA. One or more consumers can then react to these events. In the simplest case, with only one producer and consumer communicating directly with each other, it would not be necessary to use a broker. However, most companies have several producers that produce events. Therefore, brokers or broker networks (event meshes) make perfect sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Event-Driven Architecture work?
&lt;/h2&gt;

&lt;p&gt;EDA aims to improve the responsiveness and flexibility of systems through the use of event-driven mechanisms.&lt;/p&gt;

&lt;p&gt;The core idea behind EDA is to create loosely coupled systems in which different components can work independently of each other and still be informed about relevant events. Events act as a central means of communication between the different parts of the system.&lt;/p&gt;

&lt;p&gt;The functionality of EDA is based on several key elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Event: These represent relevant actions or state changes in the system, such as adding a product to the shopping cart in an e-commerce system or triggering a payment transaction.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Producer: Components or services in the system that generate and publish events are referred to as producers. They identify relevant events and send them to the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consumer: These components listen to specific events and carry out corresponding actions. They can perform tasks ranging from updating databases and triggering workflows to notifying users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Broker: A central component of EDA is the broker. It serves as an intermediary between producer and consumer. The broker receives events, routes them to interested recipients and ensures the reliability of message transmission.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By implementing EDA, systems can be made more agile and scalable as they allow components to be developed, changed and scaled independently of each other. This promotes flexibility, increases the reusability of components and enables a faster response to business requirements and events in real time.&lt;/p&gt;
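
&lt;p&gt;The roles above can be sketched in a few lines of Python. This toy in-memory broker is only meant to make the terminology concrete; real deployments use dedicated brokers such as Apache Kafka or RabbitMQ:&lt;/p&gt;

```python
# Toy illustration of producer / broker / consumer -- not a real broker.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Route the event to every consumer registered for the topic."""
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()
audit_log = []

# Two independent consumers react to the same event.
broker.subscribe("user.created", lambda e: audit_log.append(e["id"]))
broker.subscribe("user.created", lambda e: print(f"send welcome mail to user {e['id']}"))

# The producer only knows the broker, not the consumers.
broker.publish("user.created", {"id": 42})
print(audit_log)  # [42]
```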

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7vsutk1wxwxswvslrb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7vsutk1wxwxswvslrb7.png" alt=" " width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Design patterns
&lt;/h2&gt;

&lt;p&gt;When implementing EDA, there are several proven design patterns that can be applied depending on the requirements and complexity of the system. These patterns help to design the interaction between the different components of a system and to take full advantage of EDA:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Publish-Subscribe (Pub/Sub): This pattern enables a loose coupling between producer and consumer. Producers publish events via a central intermediary (the broker), which distributes the messages to all registered consumers. Consumers can react selectively to events that are relevant to them without being directly dependent on the producers. This achieves scalability and flexibility in the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Event Sourcing: In this pattern, the state of a system is represented as a sequence of events that are published and saved. Instead of storing the current state directly, systems can reconstruct the state by retrieving all relevant events and applying them in sequence. This makes it possible to track the progress of data changes, make revisions and ensure reliable auditing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
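
&lt;p&gt;A minimal illustration of event sourcing (a toy example, not tied to any particular framework): the balance of an account is never stored directly, but reconstructed by replaying the stored events in order.&lt;/p&gt;

```python
# The event log is the single source of truth; state is derived from it.
events = [
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
    {"type": "Deposited", "amount": 5},
]

def replay(events):
    """Rebuild the current balance by applying all events in sequence."""
    balance = 0
    for e in events:
        if e["type"] == "Deposited":
            balance += e["amount"]
        elif e["type"] == "Withdrawn":
            balance -= e["amount"]
    return balance

print(replay(events))      # 75
print(replay(events[:1]))  # 100 -- any past state can be reconstructed
```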

&lt;p&gt;These patterns are just a few examples of the variety of approaches that can be used when implementing EDA. The choice of the appropriate pattern depends on factors such as the system requirements, the complexity of the business logic and the performance objectives. By applying these patterns in a targeted manner, EDA can be used effectively to create flexible, scalable and responsive systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of an event-driven architecture
&lt;/h2&gt;

&lt;p&gt;EDA is becoming increasingly important, especially in the development of modern, distributed systems. By utilizing event-driven mechanisms, EDA can offer many advantages that make it a preferred choice for many software developers and companies. Here are five key benefits of EDA:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Loose coupling: EDA allows system components to be loosely coupled. This means that each component can be developed, deployed and scaled independently. Changes to one component do not require adjustments to others, which makes it much easier to maintain and expand the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: The asynchronous communication in EDA systems enables easy scalability of the components. As the components work independently of each other, they can be scaled individually as loads increase without affecting the entire system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Responsiveness: EDA systems can respond to events in real time, enabling rapid adjustments to changing conditions. This real-time responsiveness capability significantly improves the user experience as users receive immediate feedback on their actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flexibility: The architecture of EDA systems makes it easy to add new functions and services. By integrating new producers or consumers, the system can be easily expanded without affecting existing components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fault isolation: Faults in one component of an EDA system do not affect the entire system. This isolation improves the reliability and availability of the system as problems can be quickly identified and rectified without disrupting other parts of the system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, EDA provides a robust and flexible basis for the development of modern software systems. Its loose coupling, high scalability, real-time responsiveness, flexibility and fault isolation allow companies to develop systems that not only meet current requirements but can also be easily adapted to future challenges. These advantages make EDA an extremely attractive option for the development of scalable and reliable applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;EDA is a powerful paradigm that is revolutionizing modern software development. By making systems flexible, scalable and responsive, EDA enables rapid adaptation to change and increased efficiency. The loose coupling of components and real-time responsiveness offer significant advantages for companies that depend on agility and reliability. EDA is therefore not just an architectural choice, but a strategic approach to developing future-proof and resilient applications.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>Scaling Applications to Zero with Kubernetes and KEDA</title>
      <dc:creator>ATIXAG</dc:creator>
      <pubDate>Mon, 18 Nov 2024 15:11:42 +0000</pubDate>
      <link>https://forem.com/atixag/scaling-applications-to-zero-with-kubernetes-and-keda-28ab</link>
      <guid>https://forem.com/atixag/scaling-applications-to-zero-with-kubernetes-and-keda-28ab</guid>
      <description>&lt;p&gt;For cost reasons, it is often neither feasible nor desirable to assign enough resources to a deployment for it to be able to handle peak loads at all times. Therefore, we typically scale applications up and down based on the load they are currently facing. This usually involves a minimum number of instances deployed at any time, even if there is no load. This minimum can force us to keep more worker nodes in our Kubernetes cluster than necessary as the instances have an assigned resource budget.&lt;/p&gt;

&lt;p&gt;In this blog post, we will take a look at how to reduce the minimum amount of deployed instances to zero and discuss which kinds of applications benefit from that the most.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;When you scale applications to zero, you have to bear a few things in mind. First, as the inbuilt “HorizontalPodAutoscaler” can only scale to a minimum of one replica, another scaler is required. This adds a bit of overhead to the cluster. In this post, we will focus on using the Kubernetes Event-Driven Autoscaler (KEDA) for that purpose.&lt;/p&gt;

&lt;p&gt;Second, we often assume that there is at least one replica when we design our systems. This means that, typically, certain metrics are not collected when there are no instances. We need a metric which is available to the autoscaler while the deployment is scaled to zero, otherwise we will not be able to scale back up. Also, how are incoming connections going to be handled when there is no instance running?&lt;/p&gt;

&lt;p&gt;For some cases, this is easy to solve. Applications that follow a producer-consumer pattern make it easy to scale the consumer to zero. For example, when an application is consuming messages from a message queue, we are able to take the current length of the queue as a metric for our scaling. If the queue is empty, there is no need to have a worker consuming the queue, and as soon as there is at least one entry in the queue, we can deploy a consumer.&lt;/p&gt;
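
&lt;p&gt;As a sketch, such a queue-based setup can be expressed with a KEDA ScaledObject; the deployment name, queue name, and environment variable below are placeholders:&lt;/p&gt;

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-worker-scaler
spec:
  scaleTargetRef:
    name: queue-worker        # the consumer deployment
  minReplicaCount: 0          # scale all the way down when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq
      metadata:
        queueName: tasks
        mode: QueueLength     # scale on the number of messages in the queue
        value: "5"            # target messages per replica
        hostFromEnv: RABBITMQ_URL
```

&lt;p&gt;As soon as a message arrives in the queue, KEDA deploys a consumer; once the queue has been empty for the configured cooldown period, the worker is scaled back to zero.&lt;/p&gt;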

&lt;p&gt;Other cases become a little harder to solve. With a web application, we don’t have a good metric available when the application is scaled to zero. Usually, we would want to base the scaling on the number of requests in a given time frame or the average response time.&lt;br&gt;
We could use a metric from an external source if available or implement another component to monitor incoming requests.&lt;/p&gt;

&lt;p&gt;The need for such a component becomes even more apparent when we consider the requests themselves. We don’t want to miss requests when the application is scaled to zero. Therefore, we have to keep track of incoming connections and, when a request arrives while the application is at zero replicas, trigger a scale-up so the request can still be served.&lt;/p&gt;

&lt;p&gt;The KEDA HTTP add-on is a component of KEDA that allows us to base scaling on the number of incoming HTTP requests. It contains a proxy that is inserted between ingress controller and application in Kubernetes. This proxy buffers incoming requests and reports the number of pending requests as a metric to KEDA.&lt;/p&gt;

&lt;p&gt;An example configuration for KEDA with the HTTP add-on may look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: frontend-scaler
  namespace: demoapplication
  labels:
    deploymentName: frontend
spec:
  host: "demo.example.com"
  scaleTargetRef:
    deployment: frontend
    service: frontend
    port: 8080
  targetPendingRequests: 10
  replicas:
    min: 0
    max: 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration will scale the frontend deployment to zero when there are no pending requests, and scale up to a maximum of 100 instances if required to keep the queue at 10 pending requests. While this works well for applications with a gradually increasing load, you might run into difficulties when applying this to applications with a sharp increase in requests. In this case, it is possible that the proxy turns into a bottleneck.&lt;/p&gt;

&lt;p&gt;Another class of applications that is easy to scale to zero are those that are only used during certain time periods. For example, a learning platform used by a school has far more traffic during the daytime on school days than at night. Furthermore, such applications are known to have a non-negligible start-up time, which could lead to request time-outs when scaling up from zero on demand.&lt;br&gt;
The time dependency can be used to preemptively scale up the application shortly before heavy traffic is expected, preventing users from running into an unreachable application during normal hours. At night, the application could be scaled to zero and scaled back up via the proxy if demand arises. This can lead to a time-out for the first request, but the savings may be worth it, especially if the platform were hosted by a service provider for multiple schools.&lt;/p&gt;
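
&lt;p&gt;KEDA’s cron scaler can express such time-based scaling. In this sketch, the application name and schedule are made-up examples:&lt;/p&gt;

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: learning-platform-scaler
spec:
  scaleTargetRef:
    name: learning-platform
  minReplicaCount: 0          # allow scaling to zero outside the window
  triggers:
    - type: cron
      metadata:
        timezone: Europe/Berlin
        start: 30 6 * * 1-5   # scale up before the school day begins
        end: 0 22 * * 1-5     # release the replicas at night
        desiredReplicas: "2"
```

&lt;p&gt;Within the start/end window KEDA keeps at least the desired number of replicas running; outside it, the deployment may drop to zero and rely on request-driven scaling.&lt;/p&gt;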

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Of course, scaling to zero is not always a viable option as some applications don’t benefit from it. This includes applications with a constant base load or only short time frames without traffic. In these cases, the overhead of scaling from one to zero and back again might easily outweigh the benefits of scaling to zero.&lt;/p&gt;

&lt;p&gt;In short, most applications running on Kubernetes can be scaled to zero with a bit of effort, which can reduce both your infrastructure bill and your carbon footprint.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>aws</category>
      <category>discuss</category>
    </item>
    <item>
      <title>DevOps vs. Platform Engineering: Another Trend or the Next Big Thing?</title>
      <dc:creator>ATIXAG</dc:creator>
      <pubDate>Thu, 14 Nov 2024 14:40:44 +0000</pubDate>
      <link>https://forem.com/atixag/devops-vs-platform-engineering-another-trend-or-the-next-big-thing-592n</link>
      <guid>https://forem.com/atixag/devops-vs-platform-engineering-another-trend-or-the-next-big-thing-592n</guid>
      <description>&lt;p&gt;Platform engineering is not just seen as the next trend “after” DevOps. Rather, it is the development or deployment of internal platforms that can be used by teams to improve the efficiency of application development and deployment. Developers should be excited about this because they are at the center of technical change and their needs are at the center of decisions. Project managers and decision makers should also be intrigued by this new approach, which seems to have found a way to further increase productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  So… DevOps is dead?
&lt;/h2&gt;

&lt;p&gt;Does this mean that DevOps is over and we will find happiness in platform engineering? The answer is clear and unequivocal: “It depends…”. Just as DevOps is not dead and probably never will be, platform engineering brings not only great opportunities and a desirable developer experience but also the risk of distracting people and organizations from their actual goals. As we have already seen with Kubernetes, it is common practice to combat organizational or procedural problems, or inadequacies in code and infrastructure, with a new tool.&lt;/p&gt;

&lt;p&gt;Anyone who sees DevOps as a mere trend or tool has probably buried DevOps already. The majority have understood that DevOps means much more than automation, measuring DORA metrics, or lean thinking. The biggest gain to be had from DevOps is undoubtedly cultural change. Beyond improving communication within and between teams, this is above all about how errors are handled: they no longer need to be hidden but are seen as potential for improvement. If this culture is genuinely established and lived in companies, there will be no turning back, and DevOps will stay where it belongs: in people's heads, not in processes or buzzword presentations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furvhg0yg7vfb0uhlq0eq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furvhg0yg7vfb0uhlq0eq.jpg" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  So… Platform Engineering = The next big thing?
&lt;/h2&gt;

&lt;p&gt;The Cloud Native Computing Foundation is now home to some great projects on the subject of platform engineering, such as Backstage and Crossplane, which are very much driven by the current trend but naturally offer enormous potential for modern IT infrastructure and the developer experience. Here, too, as with DevOps, there are many opportunities to increase your own capabilities and benefit. And here, too, just as with DevOps, there are at least as many ways to make life harder for yourself and your teams than it was before.&lt;/p&gt;

&lt;p&gt;Before platform engineering is introduced, one question should be answered first: why? Are we again throwing a technology at a problem that we need to solve with culture or communication? Are we trying to jump on the next trend to reduce costs? Then perhaps platform engineering is not the solution. But do we want to offer a platform that our developers can use to reduce their overhead on new projects, for example? Do we want to provide automated test and development environments for this? Then platform engineering is probably part of the solution.&lt;/p&gt;

&lt;p&gt;Finally, we recommend the blog post by Giulio Roggero, CTO at Mia-Platform, on this topic. He is already proposing a new paradigm, which he has christened Platform Engineering ++.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9vxf0rsba6hyvxjwqmy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9vxf0rsba6hyvxjwqmy.jpg" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>development</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Navigating the XZ Utils Security Vulnerability: A Comprehensive Guide</title>
      <dc:creator>ATIXAG</dc:creator>
      <pubDate>Mon, 15 Apr 2024 12:08:33 +0000</pubDate>
      <link>https://forem.com/atixag/navigating-the-xz-utils-security-vulnerabilitya-comprehensive-guide-438f</link>
      <guid>https://forem.com/atixag/navigating-the-xz-utils-security-vulnerabilitya-comprehensive-guide-438f</guid>
      <description>&lt;p&gt;In the ever-evolving landscape of cybersecurity, the recent discovery of a critical vulnerability in XZ Utils, a widely used data compression software, underscores the need for vigilant security practices.&lt;/p&gt;

&lt;p&gt;Identified as CVE-2024-3094, this backdoor vulnerability, discovered on March 28, 2024, has sent ripples through the open-source community and beyond, affecting various Linux distributions and necessitating immediate action to safeguard systems against potential exploits​ (&lt;a href="http://techcommunity.microsoft.com/" rel="noopener noreferrer"&gt;Microsoft Community Hub&lt;/a&gt;) (&lt;a href="https://unit42.paloaltonetworks.com/threat-brief-xz-utils-cve-2024-3094/" rel="noopener noreferrer"&gt;Unit 42&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  orcharhino for Efficient Patch Management
&lt;/h2&gt;

&lt;p&gt;In this context, &lt;a href="https://orcharhino.com/en/" rel="noopener noreferrer"&gt;orcharhino&lt;/a&gt; emerges as a valuable tool for organizations seeking to navigate the challenges of handling security patches in an agile way. With its superior patching capabilities, orcharhino enables users to perform ad-hoc updates, ensuring that patches are installed promptly to mitigate the risk associated with this critical vulnerability. The agile response from the open-source community, in providing timely updates, further reinforces the resilience and collaborative spirit inherent within the ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding CVE-2024-3094
&lt;/h2&gt;

&lt;p&gt;CVE-2024-3094 is the result of a sophisticated software supply chain compromise affecting versions 5.6.0 and 5.6.1 of XZ Utils. Assigned a critical CVSS score of 10, it highlights the severity of the threat posed by this backdoor, which could allow an unauthorized remote attacker to compromise system integrity via connections to SSH ports.&lt;/p&gt;

&lt;h2&gt;
  
  
  Affected Systems and Mitigation Strategies
&lt;/h2&gt;

&lt;p&gt;The vulnerability affects several key Linux distributions, including Fedora Rawhide, Fedora 41, Debian’s testing, unstable and experimental distributions, openSUSE Tumbleweed, and Kali Linux, among others. Notably, Debian stable versions remain unaffected, as well as Red Hat Enterprise Linux, Oracle Unbreakable Linux, and SUSE Linux Enterprise Server, showcasing the nuanced impact across different environments.&lt;/p&gt;

&lt;p&gt;To mitigate the risks associated with CVE-2024-3094, the Cybersecurity and Infrastructure Security Agency (&lt;a href="https://www.cisa.gov/news-events/alerts/2024/03/29/reported-supply-chain-compromise-affecting-xz-utils-data-compression-library-cve-2024-3094" rel="noopener noreferrer"&gt;CISA&lt;/a&gt;) and distribution maintainers have urged users and developers to downgrade to a previous, uncompromised version of XZ Utils, specifically recommending version 5.4.6 where possible​. Additionally, various Linux distributions and package maintainers have promptly responded with updates and guidance to facilitate the secure remediation of affected systems​ (&lt;a href="https://www.rapid7.com/blog/post/2024/04/01/etr-backdoored-xz-utils-cve-2024-3094/" rel="noopener noreferrer"&gt;Rapid7&lt;/a&gt;)​.&lt;/p&gt;
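&lt;p&gt;As a quick first check, you can compare the locally installed XZ Utils version against the known-compromised releases. The following is a minimal shell sketch; &lt;code&gt;xz_affected&lt;/code&gt; is a hypothetical helper that only matches the two backdoored versions, so it is a starting point rather than a full audit:&lt;/p&gt;

```shell
# Hypothetical helper: classify an XZ Utils version string.
# Only 5.6.0 and 5.6.1 are known to carry the CVE-2024-3094 backdoor.
xz_affected() {
  case "$1" in
    5.6.0|5.6.1) echo "affected" ;;
    *) echo "not affected" ;;
  esac
}

# Check the locally installed xz; its version line looks like
# "xz (XZ Utils) 5.4.6", so the version is the last field.
installed=$(xz --version 2>/dev/null | head -n1 | awk '{print $NF}')
echo "installed xz: ${installed:-not found} ($(xz_affected "$installed"))"
```

&lt;p&gt;On a patched system this should report a version such as 5.4.6 as “not affected”; any hit on 5.6.0 or 5.6.1 calls for an immediate downgrade as recommended by CISA.&lt;/p&gt;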

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The CVE-2024-3094 vulnerability serves as a stark reminder of the importance of maintaining robust security measures and the value of tools like orcharhino in facilitating effective vulnerability management. By staying informed and proactive in applying necessary updates, organizations can safeguard their systems against potential exploits, reinforcing their security posture in the face of evolving cyber threats.&lt;/p&gt;

&lt;p&gt;In facing such vulnerabilities, the collective efforts of the cybersecurity community, alongside tools that enable swift response and remediation, play a crucial role in ensuring the digital safety of users and organizations alike.&lt;/p&gt;

</description>
      <category>react</category>
      <category>discuss</category>
      <category>security</category>
      <category>automation</category>
    </item>
    <item>
      <title>IAM made easy. What is your experience with Keycloak?</title>
      <dc:creator>ATIXAG</dc:creator>
      <pubDate>Tue, 27 Feb 2024 12:04:31 +0000</pubDate>
      <link>https://forem.com/atixag/iam-made-easy-what-is-your-experience-with-keycloak-43ii</link>
      <guid>https://forem.com/atixag/iam-made-easy-what-is-your-experience-with-keycloak-43ii</guid>
      <description>&lt;p&gt;IAM made easy. With KeyCloak, it is possible to secure applications and services with fine-grained authorizations. Whether it's a home lab, on-premise or a cloud infrastructure, KeyCloak simplifies user authentication and authorization for admins. Thanks to KeyCloak SSO, admins don't have to deal with user credentials or user storage.&lt;br&gt;
What is your experience with Keycloak?&lt;/p&gt;

</description>
      <category>programming</category>
      <category>cloud</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Event-Driven Ansible with a minimal example</title>
      <dc:creator>ATIXAG</dc:creator>
      <pubDate>Mon, 26 Feb 2024 07:01:44 +0000</pubDate>
      <link>https://forem.com/atixag/event-driven-ansible-2en0</link>
      <guid>https://forem.com/atixag/event-driven-ansible-2en0</guid>
      <description>&lt;p&gt;&lt;strong&gt;Event-Driven Ansible is here and it opens a whole new world of possibilities for working with Ansible. This article gives an introduction to it and shows a minimal example.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Event-Driven Ansible?
&lt;/h2&gt;

&lt;p&gt;Event-Driven Ansible is a new way of working with Ansible based on events: when a specific event occurs, a corresponding action is triggered. This allows for an immediate, automated response to issues or unexpected occurrences. It is currently available as a developer preview.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it Works
&lt;/h2&gt;

&lt;p&gt;The information about which events should be monitored and which actions should be taken is contained in a so-called “Ansible Rulebook”.&lt;/p&gt;

&lt;p&gt;This is a file in YAML format which should include the following fields.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sources&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is a list of sources from which events can be gathered. A few sources are already available. For example, the webhook source plugin provides a &lt;code&gt;webhook&lt;/code&gt; that can be triggered from any application, and the Kafka plugin allows receiving events via a &lt;code&gt;kafka&lt;/code&gt; topic. You can find the current list of supported sources &lt;a href="https://ansible.readthedocs.io/projects/rulebook/en/stable/actions.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rules&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Rules describe which actions should be taken in response to specific events. Some of the possible actions are &lt;code&gt;run_playbook&lt;/code&gt;, &lt;code&gt;run_job_template&lt;/code&gt;, and &lt;code&gt;run_workflow_template&lt;/code&gt;, among many more. You can find a complete list here.&lt;/p&gt;

&lt;p&gt;A rulebook is started with the &lt;code&gt;ansible-rulebook&lt;/code&gt; CLI tool, available through &lt;code&gt;pip&lt;/code&gt;. Alternatively, for customers of the Ansible Automation Platform, there is also the possibility of installing the EDA controller: a web UI for Event-Driven Ansible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;Let’s have a look at a minimal example which demonstrates how Event-Driven Ansible works. We imagine a situation in which we have a webserver running and want to monitor it. We can do that with the following rulebook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# check_url_rulebook.yml
---
- name: Check webserver
  hosts: all
  sources:
    - ansible.eda.url_check:
        urls:
          - https://&amp;lt;webserver_fqdn&amp;gt;
        delay: 10
  rules:
    - name: Restart Nginx
      condition: event.url_check.status == "down"
      action:
        run_playbook:
          name: atix.eda.restart_nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This rulebook uses the &lt;code&gt;url_check&lt;/code&gt; plugin to query the webpage at &lt;code&gt;https://&amp;lt;webserver_fqdn&amp;gt;&lt;/code&gt; every 10 seconds. There is only one rule: when the URL check returns a status of &lt;code&gt;down&lt;/code&gt;, an Ansible playbook is automatically started. In this case, the playbook is installed in a private collection, &lt;code&gt;atix.eda&lt;/code&gt;, and simply tries to restart the &lt;code&gt;nginx&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# restart_nginx.yml
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Restart Nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
      become: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can start monitoring the webserver with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-rulebook --rulebook check_url_rulebook.yml -i inventory.yml --verbose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we must also pass an inventory file, &lt;code&gt;inventory.yml&lt;/code&gt;, containing the hosts to be addressed by the &lt;code&gt;atix.eda.restart_nginx&lt;/code&gt; playbook. In this case, the inventory contains only one host: the webserver.&lt;/p&gt;
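&lt;p&gt;Such an inventory could look like this minimal sketch, where the FQDN placeholder must be replaced with the actual webserver host and &lt;code&gt;ansible_user&lt;/code&gt; is a hypothetical remote user with sudo rights:&lt;/p&gt;

```yaml
# inventory.yml: a single group containing only the webserver
all:
  hosts:
    &amp;lt;webserver_fqdn&amp;gt;:
      ansible_user: automation   # hypothetical user allowed to restart nginx
```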

&lt;p&gt;The above command runs in the foreground and listens for events.&lt;/p&gt;

&lt;p&gt;While it has not yet received an event, the output looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2023-11-29 13:53:07,183 - ansible_rulebook.rule_set_runner - INFO - Waiting for actions on events from Check url
2023-11-29 13:53:07 183 [drools-async-evaluator-thread] INFO org.drools.ansible.rulebook.integration.api.io.RuleExecutorChannel - Async channel connected
2023-11-29 13:53:07,184 - ansible_rulebook.rule_set_runner - INFO - Waiting for events, ruleset: Check url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if we stop the webserver by hand, we will see first that the event is registered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2023-11-29 13:54:37 407 [main] INFO org.drools.ansible.rulebook.integration.api.rulesengine.RegisterOnlyAgendaFilter - Activation of effective rule "Restart Nginx" with facts: {m={url_check={url=https://&amp;lt;webserver_fqdn&amp;gt;, status=down, error_msg=Cannot connect to host &amp;lt;webserver_fqdn&amp;gt; ssl:default [Connect call failed ('&amp;lt;webserver_ip&amp;gt;', 443)]}, meta={source={name=ansible.eda.url_check, type=ansible.eda.url_check}, received_at=2023-11-29T13:54:37.366718Z, uuid=709d45b8-803a-48e2-ad17-99d993c6e957}}}
2023-11-29 13:54:37,419 - ansible_rulebook.rule_generator - INFO - calling Restart Nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then that the corresponding playbook is started:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2023-11-29 13:54:37,423 - ansible_rulebook.action.run_playbook - INFO - ruleset: Check url, rule: Restart Nginx
2023-11-29 13:54:38,425 - ansible_rulebook.action.run_playbook - INFO - Calling Ansible runner

PLAY [all] *********************************************************************

TASK [Restart Nginx] ***********************************************************
changed: [&amp;lt;webserver_fqdn&amp;gt;]

PLAY RECAP *********************************************************************
&amp;lt;webserver_fqdn&amp;gt;         : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
2023-11-29 13:54:44,628 - ansible_rulebook.action.runner - INFO - Ansible runner Queue task cancelled

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Event-Driven Ansible has the potential to revolutionize how we deal with operational issues. The project is still in its early stages; let us hope that exciting new event sources are added in the coming months.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
