<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tarek N. Elsamni</title>
    <description>The latest articles on Forem by Tarek N. Elsamni (@tareksamni).</description>
    <link>https://forem.com/tareksamni</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F330635%2F59110227-f5ae-4068-9858-b6a89918b071.png</url>
      <title>Forem: Tarek N. Elsamni</title>
      <link>https://forem.com/tareksamni</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tareksamni"/>
    <language>en</language>
    <item>
      <title>Introducing 'terraform-state-split': A New Tool for Reorganizing Terraform States</title>
      <dc:creator>Tarek N. Elsamni</dc:creator>
      <pubDate>Thu, 06 Jul 2023 09:07:14 +0000</pubDate>
      <link>https://forem.com/tareksamni/introducing-terraform-state-split-a-new-tool-for-reorganizing-terraform-states-4om0</link>
      <guid>https://forem.com/tareksamni/introducing-terraform-state-split-a-new-tool-for-reorganizing-terraform-states-4om0</guid>
      <description>&lt;p&gt;As many Terraform users know, managing and organizing resources across different states can be a complex task. The conventional methods often require substantial manual effort and technical expertise. Today, I'm thrilled to share a tool that I've developed to simplify this process - the 'terraform-state-split' CLI.&lt;/p&gt;

&lt;p&gt;The 'terraform-state-split' CLI is an intuitive, interactive tool designed to facilitate the transition of Terraform resources between different states. This tool not only helps to reduce the size of your Terraform states, but it also enables you to divide your resources into multiple state files or reorganize your Terraform states as per your requirements.&lt;/p&gt;

&lt;h2&gt;A Real-life Success Story: From 1400 Resources to Manageable Chunks&lt;/h2&gt;

&lt;p&gt;The real power of 'terraform-state-split' is highlighted through its practical application. I had been grappling with a giant Terraform state that comprised over 1400 resources. Such a massive state was cumbersome to manage, created complexities during updates or modifications, and made even minor changes slow because every plan had to walk the entire state.&lt;/p&gt;

&lt;p&gt;With 'terraform-state-split', I was able to break down this monolithic Terraform state into multiple smaller, more manageable states. This didn't just improve the organization of the resources, but it also made modifications and updates considerably more straightforward and less error-prone. The tool has truly transformed the way I work with Terraform.&lt;/p&gt;

&lt;h2&gt;DEMO&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://asciinema.org/a/qqF2E5Uz2ybwzhJdMpuufzblu?ref=shebanglabs.io"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zAgi8oZq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://asciinema.org/a/qqF2E5Uz2ybwzhJdMpuufzblu.svg" alt="Foo" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Here’s How You Can Get Started&lt;/h2&gt;

&lt;p&gt;Ready to give 'terraform-state-split' a try? Here's how you can get it installed:&lt;/p&gt;

&lt;p&gt;First, tap into the shebang-labs tap via Homebrew with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;brew tap shebang-labs/tap&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, install the 'terraform-state-split' CLI:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;brew install terraform-state-split&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And that's it! You are all set to start leveraging the power of 'terraform-state-split' CLI to better manage and reorganize your Terraform states.&lt;/p&gt;

&lt;p&gt;The source code for this CLI is publicly accessible.&lt;/p&gt;

&lt;p&gt;You can find it at: &lt;a href="https://github.com/shebang-labs/terraform-state-split"&gt;https://github.com/shebang-labs/terraform-state-split&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I encourage you to explore it, play around with it, and provide your valuable feedback.&lt;/p&gt;

&lt;p&gt;In conclusion, 'terraform-state-split' is a tool developed out of necessity, and it embodies the essence of "making things easier." So, whether you're looking to reduce the size of your Terraform states, divide them into multiple state files, or simply reorganize them, give 'terraform-state-split' a try and see the difference for yourself.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>iac</category>
      <category>cloudnative</category>
      <category>automation</category>
    </item>
    <item>
      <title>Horizontal scaling WebSockets on Kubernetes and Node.js</title>
      <dc:creator>Tarek N. Elsamni</dc:creator>
      <pubDate>Tue, 09 Feb 2021 00:04:09 +0000</pubDate>
      <link>https://forem.com/tareksamni/horizontal-scaling-websockets-on-kubernetes-and-node-js-1121</link>
      <guid>https://forem.com/tareksamni/horizontal-scaling-websockets-on-kubernetes-and-node-js-1121</guid>
      <description>&lt;p&gt;The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with &lt;a href="https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="noopener noreferrer"&gt;custom metrics&lt;/a&gt; support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.&lt;/p&gt;

&lt;h2&gt;How does the Horizontal Pod Autoscaler work?&lt;/h2&gt;

&lt;p&gt;The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by the user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2F4fe1ef7265a93f5f564bd3fbb0269ebd10b73b4e%2F1775d%2Fimages%2Fdocs%2Fhorizontal-pod-autoscaler.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2F4fe1ef7265a93f5f564bd3fbb0269ebd10b73b4e%2F1775d%2Fimages%2Fdocs%2Fhorizontal-pod-autoscaler.svg" alt="Horizontal Pod Autoscaler diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To learn more about how Kubernetes HPA works you can read &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noopener noreferrer"&gt;this detailed article from the official kubernetes.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The most common HPA configurations are based on the CPU/Memory utilisation metrics provided by &lt;a href="https://github.com/kubernetes-sigs/metrics-server" rel="noopener noreferrer"&gt;metrics-server&lt;/a&gt;. In this article I'll give an example of scaling a Kubernetes deployment up/down based on application-specific custom metrics. The application will be a &lt;a href="https://nodejs.org/en/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt; (&lt;a href="https://expressjs.com/" rel="noopener noreferrer"&gt;Express&lt;/a&gt;) server with &lt;a href="https://github.com/websockets/ws" rel="noopener noreferrer"&gt;WebSockets&lt;/a&gt; support, and the goal will be to scale the deployment up/down based on the number of connected clients (the connection count).&lt;/p&gt;

&lt;p&gt;To achieve this goal, this post will focus on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Creating a demo app with WebSocket support.&lt;/li&gt;
&lt;li&gt; Integrating &lt;a href="https://www.npmjs.com/package/prometheus-client" rel="noopener noreferrer"&gt;prometheus-client&lt;/a&gt; to expose WebSocket stats as a prometheus metric.&lt;/li&gt;
&lt;li&gt; Configuring &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; to harvest the exposed metrics.&lt;/li&gt;
&lt;li&gt; Setting up &lt;a href="https://github.com/kubernetes-sigs/prometheus-adapter" rel="noopener noreferrer"&gt;prometheus-adapter&lt;/a&gt; to convert the prometheus metric to an HPA-compliant metric.&lt;/li&gt;
&lt;li&gt; Configuring HPA to utilise and consume the compliant metric.&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;Creating a demo app with WebSocket support&lt;/h3&gt;

&lt;p&gt;The following code will create a demo Express app and integrate WebSocket on &lt;code&gt;/ws/&lt;/code&gt; path.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fwebsocket.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fwebsocket.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/shebang-labs/websocket-prometheus-hpa-example/blob/main/app.js" rel="noopener noreferrer"&gt;https://github.com/shebang-labs/websocket-prometheus-hpa-example/blob/main/app.js&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Integrating &lt;a href="https://www.npmjs.com/package/prometheus-client" rel="noopener noreferrer"&gt;prometheus-client&lt;/a&gt; to expose WebSocket stats as a prometheus metric&lt;/h3&gt;

&lt;p&gt;The following code will integrate a prometheus client and expose a prometheus standards-compliant &lt;code&gt;websockets_connections_total&lt;/code&gt; metric on port 9095. The next step is to guide prometheus to start harvesting and collecting this metric and persisting the stats over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fprometheus-client.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fprometheus-client.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/shebang-labs/websocket-prometheus-hpa-example/blob/main/app.js" rel="noopener noreferrer"&gt;https://github.com/shebang-labs/websocket-prometheus-hpa-example/blob/main/app.js&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Configuring &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; to harvest the exposed metrics&lt;/h3&gt;

&lt;p&gt;In this stage, I will use &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; to deploy prometheus on the kubernetes cluster. First, we need to add the helm repo for prometheus using this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then we can install prometheus with a persistent volume to store and persist the metrics data over time with the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade --install prometheus prometheus-community/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2",server.persistentVolume.storageClass="gp2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;At this point we should have the prometheus components up and running on the kubernetes cluster in the &lt;code&gt;prometheus&lt;/code&gt; namespace, as shown in the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fprometheus-namespace-kubernetes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fprometheus-namespace-kubernetes.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus Namespace (Kubernetes)&lt;/p&gt;

&lt;p&gt;To guide prometheus to start scraping/collecting the application exposed metric &lt;code&gt;websockets_connections_total&lt;/code&gt; over time, we need to annotate the pod which runs the Express app with the following annotations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prometheus.io/scrape: 'true'
prometheus.io/port: '9095'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So the application deployment would look something like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fdeployment.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fdeployment.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/shebang-labs/websocket-prometheus-hpa-example/blob/main/deployment.yaml" rel="noopener noreferrer"&gt;https://github.com/shebang-labs/websocket-prometheus-hpa-example/blob/main/deployment.yaml&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Setting up &lt;a href="https://github.com/kubernetes-sigs/prometheus-adapter" rel="noopener noreferrer"&gt;prometheus-adapter&lt;/a&gt; to convert the prometheus metric to an HPA-compliant metric&lt;/h3&gt;

&lt;p&gt;At this stage, Prometheus is scraping the metrics every second from port 9095 on all pods in this deployment. To verify this, you can port-forward the prometheus server to localhost and access its query/dashboard UI using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;which will make the dashboard accessible on &lt;code&gt;localhost:9090&lt;/code&gt;. Then you can search for &lt;code&gt;websockets_connections_total&lt;/code&gt; to see the scraped metrics over time as shown here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fprometheus-metric.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fprometheus-metric.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example the query returned 2 graphs, as there are 2 pods in this deployment generating different &lt;code&gt;websockets_connections_total&lt;/code&gt; values. One of the pods has 1-2 websocket connections over time and the other has 0 connections.&lt;/p&gt;

&lt;p&gt;In the next step we will start using averages (the sum of the connection counts reported by the different pods, divided by the pod count) to decide how to scale up and down. But first we need to transform this Prometheus metric into an HPA-compliant metric. We can achieve this using &lt;code&gt;prometheus-adapter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can install &lt;code&gt;prometheus-adapter&lt;/code&gt; as a helm chart. You need to point the adapter at the prometheus instance to query the data from, and you also need to tell the adapter how to query, transform and format the metrics.&lt;/p&gt;

&lt;p&gt;This can be done using the following custom helm configurations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prometheus:
  url: http://prometheus-server.prometheus.svc
  port: 80

rules:
  custom:
    - seriesQuery: '{__name__=~"^myapp_websockets_connections_total$"}'
      resources:
        overrides:
          kubernetes_namespace:
            resource: namespace
          kubernetes_pod_name:
            resource: pod
      name:
        matches: "^(.*)_total"
        as: "${1}_avg"
      metricsQuery: (avg(&amp;lt;&amp;lt;.Series&amp;gt;&amp;gt;{&amp;lt;&amp;lt;.LabelMatchers&amp;gt;&amp;gt;}) by (&amp;lt;&amp;lt;.GroupBy&amp;gt;&amp;gt;))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;prometheus-adapter-values.yaml&lt;/p&gt;

&lt;p&gt;Now, you can use this file to install a custom &lt;code&gt;prometheus-adapter&lt;/code&gt; as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade --install prometheus-adapter prometheus-community/prometheus-adapter --values=./prometheus-adapter-values.yaml --namespace prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify that the adapter worked as expected, you should be able to query the HPA custom metrics using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# I'm using jq for better formatting. You can omit it if needed.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/myapp-namespace/pods/*/myapp_websockets_connections_avg" | jq .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should show a result like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fmyapp-custom-metrics.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fmyapp-custom-metrics.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Configuring HPA to utilise and consume the compliant metric&lt;/h3&gt;

&lt;p&gt;Using the following HPA definition we can control how the deployment scales up and down based on the average websockets connections per pod:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fhpa-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.shebanglabs.io%2Fcontent%2Fimages%2F2021%2F02%2Fhpa-1.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/shebang-labs/websocket-prometheus-hpa-example/blob/main/hpa.yaml" rel="noopener noreferrer"&gt;https://github.com/shebang-labs/websocket-prometheus-hpa-example/blob/main/hpa.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, I've configured the min replicas to be &lt;code&gt;2&lt;/code&gt; and the max to be &lt;code&gt;10&lt;/code&gt;. Kubernetes will then use the &lt;code&gt;myapp_websockets_connections_avg&lt;/code&gt; value over time to align with the target of &lt;code&gt;5 connections per pod&lt;/code&gt;, scaling the deployment up and down dynamically to match this target 🎉🎉&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>node</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>Rails Antivirus validator as a service on K8s</title>
      <dc:creator>Tarek N. Elsamni</dc:creator>
      <pubDate>Wed, 05 Feb 2020 10:44:06 +0000</pubDate>
      <link>https://forem.com/tareksamni/rails-antivirus-validator-as-a-service-on-k8s-1fg9</link>
      <guid>https://forem.com/tareksamni/rails-antivirus-validator-as-a-service-on-k8s-1fg9</guid>
      <description>&lt;p&gt;Originally posted on &lt;a href="https://www.shebanglabs.io/rails-antivirus-clamby-clamav/"&gt;https://www.shebanglabs.io/rails-antivirus-clamby-clamav/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Carrierwave, ClamAV and Clamby&lt;/h1&gt;

&lt;p&gt;If you are building a web application, you will likely want to enable file uploading, an important feature in modern-day applications. &lt;a href="https://github.com/carrierwaveuploader/carrierwave"&gt;Carrierwave&lt;/a&gt; is a well-known ruby gem that works with Rack-based web applications such as Ruby on Rails, providing file uploading out of the box along with a long list of other features in this area.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you have a file upload on your web application and you do not scan the files for viruses then you not only compromise your software, but also the users of the application and their files.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To avoid such scenarios we often whitelist allowed &lt;a href="https://github.com/carrierwaveuploader/carrierwave/blob/master/lib/carrierwave/uploader/extension_whitelist.rb"&gt;file extensions&lt;/a&gt; and &lt;a href="https://github.com/carrierwaveuploader/carrierwave/blob/master/lib/carrierwave/uploader/content_type_whitelist.rb"&gt;content types&lt;/a&gt;. This approach may not be enough if you decide to allow executable uploads, or if an attacker uploads a malicious image or any other file with an allowed extension or content type.&lt;/p&gt;

&lt;p&gt;In this tutorial, I will show you how to utilize Rails &lt;code&gt;ActiveModel::Validator&lt;/code&gt; class to build a modular validator to scan each file upload in real-time using &lt;a href="https://www.clamav.net/"&gt;ClamAV&lt;/a&gt; and &lt;a href="https://github.com/kobaltz/clamby"&gt;Clamby&lt;/a&gt; gem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ClamAV® is an open source antivirus engine for detecting trojans, viruses, malware &amp;amp; other malicious threats.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Clamby gem depends on the &lt;code&gt;clamscan&lt;/code&gt; binary being installed already. If you installed &lt;code&gt;clamscan&lt;/code&gt; and tried to run Clamby, you will notice that each scan takes a few seconds (around 10, depending on available computing resources). This is because every scan starts a new &lt;code&gt;clamscan&lt;/code&gt; process, which takes time to load the antivirus database, check virus signatures and run other boot routines before finally starting the actual scan.&lt;/p&gt;

&lt;p&gt;To overcome this issue, the Clamby creator highly recommends setting the &lt;code&gt;daemonize&lt;/code&gt; option to &lt;code&gt;true&lt;/code&gt;. This allows &lt;code&gt;clamscan&lt;/code&gt; to remain in &lt;strong&gt;memory&lt;/strong&gt; so it does not have to reload for each virus scan, saving several seconds per request.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The bad news is that a single ClamAV process consumes an average of 600-800MB of memory.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Every running Rails server/pod would consume this much memory for nothing but preloading the virus database, just to deliver real-time antivirus scans!&lt;/p&gt;

&lt;p&gt;Fortunately, ClamAV has a TCP/IP socket-based interface, which means we can run a single shared process and access it remotely over TCP/IP sockets, or better yet, run a cluster of distributed processes and load-balance the virus scans across them. This sounds like a good plan 👌.&lt;/p&gt;

&lt;h1&gt;Assumptions And Prerequisites&lt;/h1&gt;

&lt;p&gt;The following part of this post will show you how to deploy ClamAV as a service on K8s, access it from other pods (Rails) over a TCP/IP socket, and configure Rails to utilize this service in a modular and &lt;a href="https://deviq.com/don-t-repeat-yourself/"&gt;DRY&lt;/a&gt; implementation.&lt;/p&gt;

&lt;p&gt;This post makes the following assumptions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have basic knowledge of &lt;a href="https://docs.bitnami.com/containers/how-to/deploy-custom-nodejs-app-bitnami-containers/#step-3-build-the-docker-image"&gt;how to build Docker images&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You have a &lt;a href="https://www.docker.com/get-docker"&gt;Docker environment&lt;/a&gt; running.&lt;/li&gt;
&lt;li&gt;You have a &lt;a href="https://docs.bitnami.com/kubernetes/get-started-kubernetes#option-1-create-a-cluster-using-minikube"&gt;Kubernetes cluster&lt;/a&gt; running.&lt;/li&gt;
&lt;li&gt;Your &lt;a href="https://www.shebanglabs.io/ruby-on-rails-on-kubernetes/"&gt;Ruby on Rails application is containerized and running on Kubernetes&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You have the &lt;a href="https://docs.bitnami.com/kubernetes/get-started-kubernetes#step-3-install-kubectl-command-line"&gt;kubectl command line (kubectl CLI)&lt;/a&gt; installed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Step 1: Deploy ClamAV as a service on Kubernetes&lt;/h1&gt;

&lt;p&gt;To deploy ClamAV on Kubernetes, you need to configure a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/"&gt;kubernetes deployment&lt;/a&gt; and make it accessible through a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/"&gt;kubernetes service&lt;/a&gt;. The service will expose the deployment under an FQDN that load-balances the traffic to the deployment replicas without any unfamiliar service discovery mechanisms (which makes the antivirus horizontally scalable).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The kubernetes deployment will look like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# k8s/clamav-deployment.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clamav&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;minReadySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clamav&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clamav&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/ukhomeofficedigital/clamav:v1.7.1&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3310&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
          &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/readyness.sh&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The exposing service will look like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# k8s/clamav-service.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;antivirus-svc&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clamav&lt;/span&gt;
  &lt;span class="na"&gt;clusterIP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zombie-port&lt;/span&gt; &lt;span class="c1"&gt;# Actually, we do not use this port but it is still needed to allow the service to receive TCP traffic.&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1234&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1234&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you can create the deployment and its exposing service using kubectl as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; k8s/clamav-deployment.yaml &lt;span class="nt"&gt;-f&lt;/span&gt; k8s/clamav-service.yaml
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; shared get svc
NAME             TYPE        CLUSTER-IP  EXTERNAL-IP  PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
antivirus-svc    ClusterIP   None        &amp;lt;none&amp;gt;       1234/TCP  20s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;Step 2: Configure Clamby to use ClamAV service&lt;/h1&gt;

&lt;p&gt;As shown in the previous step, ClamAV is now up and running as a Kubernetes deployment with one replica (you could add more replicas to scale it horizontally), listening on port 3310 over TCP. The Kubernetes service also makes sure that traffic to &lt;code&gt;antivirus-svc.shared.svc.cluster.local&lt;/code&gt; is load balanced across the replicas automagically.&lt;/p&gt;

&lt;p&gt;To configure the Clamby Ruby gem to connect to the ClamAV daemon at &lt;code&gt;antivirus-svc.shared.svc.cluster.local&lt;/code&gt; on port &lt;code&gt;3310&lt;/code&gt; over &lt;code&gt;TCP&lt;/code&gt; sockets, we need the following Rails initializer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/initializers/clamby.rb&lt;/span&gt;

&lt;span class="n"&gt;clamby_configs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="ss"&gt;daemonize: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;clamby_configs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:config_file&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'/etc/clamav/clamd.conf'&lt;/span&gt;

&lt;span class="no"&gt;Clamby&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;clamby_configs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This initializer instructs the Clamby gem to use a ClamAV config file located at &lt;code&gt;/etc/clamav/clamd.conf&lt;/code&gt;. This file does not exist yet; we will create it as part of building the RoR Docker image used to run the application.&lt;/p&gt;

&lt;p&gt;So, your RoR Dockerfile should look something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM bitnami/rails:latest

&lt;span class="c"&gt;# Install OS dependencies&lt;/span&gt;
&lt;span class="c"&gt;# COPY Gemfile $APP_PATH/Gemfile&lt;/span&gt;
&lt;span class="c"&gt;# COPY Gemfile.lock $APP_PATH/Gemfile.lock&lt;/span&gt;

&lt;span class="c"&gt;# Install bundler&lt;/span&gt;
&lt;span class="c"&gt;# bundle install&lt;/span&gt;

&lt;span class="c"&gt;# COPY . $APP_PATH&lt;/span&gt;

&lt;span class="c"&gt;# Precompile assets&lt;/span&gt;

RUN &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"TCPSocket 3310"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/clamav/clamd.conf
RUN &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"TCPAddr antivirus-svc.shared.svc.cluster.local"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/clamav/clamd.conf

&lt;span class="c"&gt;# Entrypoint and CMD&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
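&lt;p&gt;For reference, the two &lt;code&gt;RUN&lt;/code&gt; lines above (note the &lt;code&gt;&amp;gt;&lt;/code&gt; redirection that creates the file, then &lt;code&gt;&amp;gt;&amp;gt;&lt;/code&gt; that appends to it) leave a minimal &lt;code&gt;/etc/clamav/clamd.conf&lt;/code&gt; in the image containing exactly:&lt;/p&gt;

```
TCPSocket 3310
TCPAddr antivirus-svc.shared.svc.cluster.local
```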



&lt;p&gt;Now, if you run &lt;code&gt;rails c&lt;/code&gt; in a container that runs on the Kubernetes cluster and uses this Dockerfile image, you should be able to run the following command to do ClamAV scans via the remote service over TCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# rails c&lt;/span&gt;
&lt;span class="no"&gt;Loading&lt;/span&gt; &lt;span class="n"&gt;development&lt;/span&gt; &lt;span class="n"&gt;environment&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Rails&lt;/span&gt; &lt;span class="mf"&gt;5.2&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="n"&gt;pry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;Clamby&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;virus?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'SOME_LOCAL_FILE_PATH'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="no"&gt;ClamAV&lt;/span&gt; &lt;span class="mf"&gt;0.101&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;25431&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="no"&gt;Fri&lt;/span&gt; &lt;span class="no"&gt;Apr&lt;/span&gt; &lt;span class="mi"&gt;26&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;33&lt;/span&gt; &lt;span class="mi"&gt;2019&lt;/span&gt;
&lt;span class="sr"&gt;/app/&lt;/span&gt;&lt;span class="no"&gt;SOME_LOCAL_FILE_PATH&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;OK&lt;/span&gt;
&lt;span class="kp"&gt;false&lt;/span&gt; &lt;span class="c1"&gt;# no virus 🎉&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 3: An ActiveModel validator to utilize Clamby
&lt;/h1&gt;

&lt;p&gt;After getting all of the infrastructure in place for running ClamAV as a remote service over TCP and configuring the RoR app to connect to it, it is time to write a modular, DRY and reusable ActiveModel validator that can scan every file the user uploads in real time.&lt;/p&gt;

&lt;p&gt;An ActiveModel validator could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app/validators/antivirus_validator.rb&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AntivirusValidator&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveModel&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Validator&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;path&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exist?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="no"&gt;Clamby&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;virus?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:attribute_name&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_sym&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;I18n&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'infected_file'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="kp"&gt;private&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;public_send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:attribute_name&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_sym&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
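&lt;p&gt;To make the option plumbing in the validator easier to follow, here is a minimal, framework-free sketch of the same idea. &lt;code&gt;FakeClamby&lt;/code&gt;, &lt;code&gt;AntivirusCheck&lt;/code&gt; and the &lt;code&gt;OpenStruct&lt;/code&gt; records are hypothetical stand-ins; the real validator inherits from &lt;code&gt;ActiveModel::Validator&lt;/code&gt;, also checks &lt;code&gt;File.exist?&lt;/code&gt;, and calls &lt;code&gt;Clamby.virus?&lt;/code&gt; instead:&lt;/p&gt;

```ruby
# Minimal, framework-free sketch of the validator's plumbing.
require 'ostruct'

class FakeClamby
  # Stand-in scanner: pretend any path containing 'eicar' is infected.
  def self.virus?(path)
    path.include?('eicar')
  end
end

class AntivirusCheck
  def initialize(options)
    @options = options
  end

  def validate(record)
    uploaded = file(record)
    return unless uploaded && uploaded.path && FakeClamby.virus?(uploaded.path)

    record.errors << "#{@options[:attribute_name]} is infected"
  end

  private

  # Same trick as the real validator: look the attribute up by name.
  def file(record)
    record.public_send(@options[:attribute_name].to_sym)
  end
end

record = OpenStruct.new(image: OpenStruct.new(path: '/tmp/eicar.txt'), errors: [])
AntivirusCheck.new(attribute_name: 'image').validate(record)
puts record.errors.inspect  # => ["image is infected"]
```

The &lt;code&gt;attribute_name&lt;/code&gt; option plus &lt;code&gt;public_send&lt;/code&gt; is what makes the validator reusable across models with differently named uploaders.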



&lt;p&gt;Then you can plug the validator into any ActiveRecord model with a single line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app/models/some_model.rb&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SomeModel&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveRecord&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Base&lt;/span&gt;
  &lt;span class="n"&gt;mount_uploader&lt;/span&gt; &lt;span class="ss"&gt;:image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;PictureUploader&lt;/span&gt;
  &lt;span class="n"&gt;validates_with&lt;/span&gt; &lt;span class="no"&gt;AntivirusValidator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;attribute_name: &lt;/span&gt;&lt;span class="s1"&gt;'image'&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whenever you need to scan a file uploaded through a mounted uploader in an ActiveModel object, all you need to do is add the following validation to the model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;validates_with&lt;/span&gt; &lt;span class="no"&gt;AntivirusValidator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;attribute_name: &lt;/span&gt;&lt;span class="s1"&gt;'image'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because the ClamAV process is preloaded and already running on the remote deployment, and because the deployment runs on the same Kubernetes cluster, all traffic stays local. A file scan takes &lt;strong&gt;~20ms&lt;/strong&gt; for small files (&amp;lt; &lt;strong&gt;1MB&lt;/strong&gt;) and a little more for bigger files. Do not hesitate to scan every single file uploaded by your end users: the process is not expensive, and everything is now in place to run scans with one extra line of code.&lt;/p&gt;
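&lt;p&gt;If you want to verify that latency in your own cluster, a quick sketch with Ruby's &lt;code&gt;Benchmark&lt;/code&gt; module; the lambda below is a stand-in that sleeps ~20ms, and inside the cluster you would time &lt;code&gt;Clamby.virus?(path)&lt;/code&gt; instead:&lt;/p&gt;

```ruby
require 'benchmark'

# Stand-in for a ~20ms ClamAV scan; swap in a real Clamby.virus? call
# when running inside the cluster.
scan = -> { sleep 0.02 }

elapsed = Benchmark.realtime { scan.call }
puts format('scan took %.0f ms', elapsed * 1000)
```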

&lt;p&gt;Happy virus 🦠 scanning 👋&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ruby</category>
      <category>rails</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
