<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Asanka Boteju</title>
    <description>The latest articles on Forem by Asanka Boteju (@asankab).</description>
    <link>https://forem.com/asankab</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1039605%2Fc339fb45-5736-4581-8ffa-f82a29954f2b.jpeg</url>
      <title>Forem: Asanka Boteju</title>
      <link>https://forem.com/asankab</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/asankab"/>
    <language>en</language>
    <item>
      <title>Cluster Security Standards Enforcement Via Kyverno (Policy as Code)</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Tue, 29 Apr 2025 18:14:18 +0000</pubDate>
      <link>https://forem.com/asankab/cluster-security-standards-enforcement-via-kyverno-policy-as-code-4n50</link>
      <guid>https://forem.com/asankab/cluster-security-standards-enforcement-via-kyverno-policy-as-code-4n50</guid>
      <description>&lt;p&gt;&lt;strong&gt;Kyverno, an open-source Kubernetes policy engine that lets you write policies as simple YAML manifests.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kyverno has become increasingly important in today’s cloud-native world due to the growing adoption of Kubernetes and the increasing demand for security, compliance, and automation in cluster management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Kyverno:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security &amp;amp; Policy Enforcement&lt;/strong&gt;&lt;br&gt;
As more and more organizations adopt Kubernetes, managing multi-tenant clusters securely becomes critical. Kyverno helps you by:&lt;br&gt;
-- Enforcing Pod Security Standards&lt;br&gt;
-- Ensuring network policies are always defined&lt;br&gt;
-- Preventing usage of deprecated APIs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Governance &amp;amp; Compliance&lt;/strong&gt;&lt;br&gt;
Regulatory requirements such as GDPR and HIPAA demand consistent policy enforcement. Kyverno helps you:&lt;br&gt;
-- Automate auditing of non-compliant resources&lt;br&gt;
-- Ensure labels, annotations, or resource limits are always set&lt;br&gt;
-- Implement multi-cluster governance&lt;br&gt;
-- Policy as Code, Kubernetes-Native&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes-Native YAML Policies&lt;/strong&gt;&lt;br&gt;
Unlike OPA/Gatekeeper, which uses a separate language (Rego), Kyverno uses Kubernetes-native YAML for policies.&lt;br&gt;
-- Easier for K8s users to adopt&lt;br&gt;
-- Policies look like other Kubernetes resources&lt;br&gt;
-- Great fit for GitOps workflows such as ArgoCD and Flux&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mutation &amp;amp; Generation Capabilities&lt;/strong&gt;&lt;br&gt;
Kyverno can mutate and generate resources dynamically&lt;br&gt;
-- Auto-inject sidecars/configurations&lt;br&gt;
-- Generate default network policies/configmaps&lt;br&gt;
-- Patch fields in newly created resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validation at Admission Time&lt;/strong&gt; Kyverno policies work with the Kubernetes Admission Controller to prevent invalid/non-compliant configurations before they go live.&lt;br&gt;
-- Helps shift security and compliance left&lt;br&gt;
-- Reduces production incidents due to misconfigurations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-cloud, Multi-cluster Support&lt;/strong&gt; With teams running hybrid environments across AWS, Azure, GCP, and on-prem, Kyverno ensures policy consistency across clusters.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Some of the use cases for Kyverno include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Block the creation of privileged pods (a common security best practice)&lt;/li&gt;
&lt;li&gt;Enforce resource requests/limits&lt;/li&gt;
&lt;li&gt;Label enforcement for workloads&lt;/li&gt;
&lt;/ul&gt;
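
&lt;p&gt;To give a flavor of policy-as-code, a minimal sketch of a policy for the first case, blocking privileged pods, could look like this (the policy name and message text are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Enforce
  rules:
    - name: deny-privileged-containers
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;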

&lt;p&gt;&lt;strong&gt;Time for some hands-on!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's see it in action with a simple demo to grasp the power of Kyverno.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Kyverno:&lt;/strong&gt; run the command below in your terminal to install Kyverno.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/config/release/install.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Follow the instructions below to enforce resource requests and limits; this ensures that every container in a pod has CPU and memory requests and limits set.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;require-resources.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resources
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-resources
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "CPU and memory requests/limits must be set for all containers."
        foreach:
          - list: "request.object.spec.containers"
            pattern:
              resources:
                requests:
                  memory: "?*"
                  cpu: "?*"
                limits:
                  memory: "?*"
                  cpu: "?*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the command below to apply the “require-resources” policy to your Kubernetes cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f require-resources.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the code block below in your terminal to create a pod and see the policy enforcement in action.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-resources
spec:
  containers:
    - name: nginx
      image: nginx
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6invk5d20ucdlcr05w9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6invk5d20ucdlcr05w9e.png" alt="Image description" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see an error message as shown in the above screen capture, which details the reason for the error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Okay, now let's fix it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Add the resource requests and limits and try creating the resource again. For that, run the following in your terminal.&lt;/p&gt;
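
&lt;p&gt;A manifest equivalent to the one in the screenshot might look like this (the resource values are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-resources
spec:
  containers:
    - name: nginx
      image: nginx
      resources:
        requests:
          memory: "64Mi"
          cpu: "100m"
        limits:
          memory: "128Mi"
          cpu: "250m"
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;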

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F733qedb762x143ak7szc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F733qedb762x143ak7szc.png" alt="Image description" width="677" height="603"&gt;&lt;/a&gt;&lt;br&gt;
You should see the resource-created message, since the pod you created now complies with the policy requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important: validationFailureAction&lt;/strong&gt;&lt;br&gt;
The &lt;code&gt;validationFailureAction&lt;/code&gt; field in Kyverno policies determines how the policy behaves when a validation rule fails:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enforce&lt;/strong&gt;: The policy will block the resource from being created or updated if it does not comply with the policy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit&lt;/strong&gt;: The policy will allow the resource to be created or updated but will log a warning or violation in the policy report.&lt;/p&gt;
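
&lt;p&gt;For example, switching the demo policy to Audit is a one-line change; non-compliant pods are then admitted, but the violations appear in Kyverno's policy reports (assuming the default reporting setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# in require-resources.yaml
spec:
  validationFailureAction: Audit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f require-resources.yaml&lt;/code&gt;&lt;br&gt;
&lt;code&gt;kubectl get policyreport -A&lt;/code&gt;&lt;/p&gt;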

&lt;p&gt;This is just one of Kyverno's many policy-enforcement capabilities; you can explore the others in a similar fashion.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Hope the information is useful. Thank you for your time&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kyverno</category>
    </item>
    <item>
      <title>An Intro To Generative AI Applications with Amazon Bedrock</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Wed, 22 Jan 2025 16:45:23 +0000</pubDate>
      <link>https://forem.com/asankab/an-intro-to-generative-ai-applications-with-amazon-bedrock-41el</link>
      <guid>https://forem.com/asankab/an-intro-to-generative-ai-applications-with-amazon-bedrock-41el</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22bhlvtyv7qq7t0z5ao5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22bhlvtyv7qq7t0z5ao5.png" alt="Image description" width="757" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock is a managed service by AWS that helps you build and scale GenAI applications. Bedrock lets you access foundation models (FMs) provided by AWS as well as by third-party vendors via API calls. Some of the major providers in this ever-growing list include &lt;strong&gt;Cohere&lt;/strong&gt;, &lt;strong&gt;Stability AI&lt;/strong&gt;, &lt;strong&gt;Meta&lt;/strong&gt;, &lt;strong&gt;AI21 Labs&lt;/strong&gt;, and &lt;strong&gt;Anthropic&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok73a3tw2qrdwwynin4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok73a3tw2qrdwwynin4x.png" alt="Image description" width="203" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxktayme0tkn0aj22q95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxktayme0tkn0aj22q95.png" alt="Image description" width="197" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1k5ky4ns31iexxzx66z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1k5ky4ns31iexxzx66z.png" alt="Image description" width="233" height="71"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FMs serve as the baseline starting point for a model: they can be used to interpret and understand language, hold conversational exchanges, and generate images from your prompts.&lt;/p&gt;

&lt;p&gt;Different FMs have different specializations and are able to produce a range of outputs based on prompts with high levels of accuracy. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Stable Diffusion&lt;/strong&gt; model by Stability AI is good for image generation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GPT-4&lt;/strong&gt; is used by ChatGPT for natural language.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can adjust the inference parameters, weights, and other parameters specific to the FM and perform model evaluations to further refine the model to match your use cases and organizational needs.&lt;/p&gt;

&lt;p&gt;To start working with the FMs, log in to the AWS Console, navigate to the Amazon Bedrock page, and select the model access page from the bottom left. Then click on the specific models button and select the FMs you would like to use in your API or application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd67p2tgs7mbnz7mebkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd67p2tgs7mbnz7mebkf.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Upon submitting the request, the selected models from the previous step will be available for you to use.&lt;/p&gt;
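
&lt;p&gt;Once access is granted, you can list the models available to you and invoke one directly from the AWS CLI; here is a rough sketch (the region, model ID, and request body are illustrative, and each model family expects its own body schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws bedrock list-foundation-models --region us-east-1

aws bedrock-runtime invoke-model \
  --region us-east-1 \
  --model-id amazon.titan-text-express-v1 \
  --body '{"inputText": "Summarize Amazon Bedrock in one sentence."}' \
  --cli-binary-format raw-in-base64-out \
  output.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;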

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr7lng50mta47utk39uw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr7lng50mta47utk39uw.png" alt="Image description" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
The model access you set on this page is region-specific.&lt;/p&gt;

&lt;p&gt;You are now ready to leverage Amazon Bedrock Foundation Models. Start building and scripting your applications to seamlessly interact with Amazon Bedrock and unleash its full potential!&lt;/p&gt;

&lt;p&gt;Good Luck!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>genai</category>
      <category>bedrock</category>
    </item>
    <item>
      <title>Amazon Managed Service for Apache Flink</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Sun, 12 Jan 2025 06:58:58 +0000</pubDate>
      <link>https://forem.com/asankab/amazon-managed-service-for-apache-flink-31p3</link>
      <guid>https://forem.com/asankab/amazon-managed-service-for-apache-flink-31p3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Amazon Managed Service for Apache Flink&lt;/strong&gt; is a framework for processing data streams for usecases such as responsive analytics, ETL, and continued metric generation. &lt;/p&gt;

&lt;p&gt;Amazon Managed Service for Apache Flink supports languages such as Java, Scala, and SQL, and opens up access to many other AWS service destinations such as S3, DynamoDB, Aurora, SNS, SQS, Redshift, CloudWatch, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pcnl8eocwgul7wy32qz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pcnl8eocwgul7wy32qz.png" alt="Image description" width="742" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS provides provisioned compute resources, parallel processing, automatic scaling, checkpoints, and snapshot application backups for you.&lt;/p&gt;

&lt;p&gt;You can also combine AWS Lambda with Amazon Managed Service for Apache Flink to enable more robust use cases that need data aggregation, conversion to different formats, enrichment, and encryption. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>streaming</category>
    </item>
    <item>
      <title>Amazon Kinesis for Near Realtime Streaming</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Sun, 12 Jan 2025 05:41:52 +0000</pubDate>
      <link>https://forem.com/asankab/amazon-kinesis-for-near-realtime-streaming-9jo</link>
      <guid>https://forem.com/asankab/amazon-kinesis-for-near-realtime-streaming-9jo</guid>
      <description>&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Streams&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Collect and store streaming data such as clickstream data, IoT, metrics, and logs in real time with Amazon Kinesis Data Streams.&lt;/p&gt;

&lt;p&gt;✅&lt;strong&gt;AWS fully manages&lt;/strong&gt; the infrastructure required to run Kinesis.&lt;br&gt;
✅&lt;strong&gt;Pay-as-you-go&lt;/strong&gt; pricing model where you are billed based on your usage.&lt;br&gt;
✅&lt;strong&gt;Write your own producer and consumer&lt;/strong&gt; code using the KPL and KCL libraries.&lt;br&gt;
✅Shards to &lt;strong&gt;improve throughput&lt;/strong&gt;.&lt;br&gt;
✅Optionally, enable output data format conversion to &lt;strong&gt;Parquet or ORC&lt;/strong&gt; with the support of Glue catalogs.&lt;br&gt;
✅Support for &lt;strong&gt;dynamic partitioning&lt;/strong&gt;.&lt;br&gt;
✅Supports &lt;strong&gt;custom transformations&lt;/strong&gt; with the use of Lambda functions.&lt;/p&gt;
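
&lt;p&gt;As a quick illustration, writing a single record to a stream from the AWS CLI might look like this (the stream name and payload are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kinesis put-record \
  --stream-name my-clickstream \
  --partition-key user-42 \
  --cli-binary-format raw-in-base64-out \
  --data '{"event": "click", "page": "/home"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The partition key determines which shard a record lands on, which is how shards spread load to improve throughput.&lt;/p&gt;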




&lt;p&gt;&lt;strong&gt;Streaming&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ovya2r0h6vuchxlup4e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ovya2r0h6vuchxlup4e.png" alt="Image description" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Destination&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1y1kz78xuxtv6h16swkk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1y1kz78xuxtv6h16swkk.png" alt="Image description" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8cus60ryct1p4w0snlj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8cus60ryct1p4w0snlj.png" alt="Image description" width="800" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kinesis</category>
      <category>streaming</category>
      <category>aws</category>
    </item>
    <item>
      <title>Massively Scalable Processing &amp; Massively Parallel Processing</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Sat, 11 Jan 2025 02:27:44 +0000</pubDate>
      <link>https://forem.com/asankab/massively-scalable-processing-massively-parallel-processing-5h5h</link>
      <guid>https://forem.com/asankab/massively-scalable-processing-massively-parallel-processing-5h5h</guid>
      <description>&lt;p&gt;&lt;strong&gt;Massively Scalable Processing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Processing systems designed to efficiently process large volumes of data in a distributed, massively scalable manner are known as &lt;strong&gt;massively scalable processing (MSP)&lt;/strong&gt;. Cloud-native solutions and distributed computing frameworks such as Hadoop and Spark are examples of such systems.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Features of MSP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Horizontal scalability&lt;/strong&gt; Increasing the number of nodes (machines) to spread processing and storage over several systems is known as horizontal scalability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parallelism&lt;/strong&gt; Dividing work into manageable portions that are handled concurrently by several nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fault tolerance&lt;/strong&gt; Systems can gracefully bounce back from node outages or hardware malfunctions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; Distributed data storage allows for scalability of data access by distributing data among several nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic Resource Allocation&lt;/strong&gt; Allocating resources automatically in response to demand and load is known as dynamic resource allocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;br&gt;
Making use of scalable processing frameworks for big data analytics, real-time data processing, and ETL pipelines.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Massively Parallel Processing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Systems that perform large-scale processing in parallel across many processors are known as massively parallel processing (MPP).&lt;br&gt;
This approach is widely used in big data and analytics to handle massive datasets.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Features of MPP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parallelism&lt;/strong&gt; Several processors work on various aspects of a task at the same time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data partitioning&lt;/strong&gt; Data partitioning is the process of dividing data into portions that are dispersed among nodes and handled separately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared-Nothing Architecture&lt;/strong&gt;&lt;br&gt;
Every node has its own independent storage, memory, and CPU. There is therefore no resource contention, which improves scalability and fault tolerance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Query parallelism&lt;/strong&gt; SQL queries are divided and run concurrently on several nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Locality&lt;/strong&gt; To reduce data travel, computations are carried out on the nodes where the data is stored.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt;&lt;br&gt;
MPP architectures are used by database systems like Teradata, Snowflake, and Amazon Redshift to parallelize and spread queries across several nodes, allowing for quick query execution on large datasets.&lt;/p&gt;
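
&lt;p&gt;As a concrete illustration of data partitioning in an MPP database, Amazon Redshift lets you pick a distribution key so that rows are spread across nodes by that column (the table and column names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE sales (
  sale_id     BIGINT,
  customer_id BIGINT,
  amount      DECIMAL(10,2)
)
DISTSTYLE KEY
DISTKEY (customer_id);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Joins and aggregations on customer_id can then run mostly node-locally, which is the data-locality property described above.&lt;/p&gt;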

</description>
      <category>data</category>
      <category>analytics</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>Types of Data Analytics by its Application and Distinct Purposes</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Sun, 15 Dec 2024 16:36:10 +0000</pubDate>
      <link>https://forem.com/asankab/types-of-data-analytics-by-its-application-and-distinct-purposes-4nmf</link>
      <guid>https://forem.com/asankab/types-of-data-analytics-by-its-application-and-distinct-purposes-4nmf</guid>
      <description>&lt;p&gt;&lt;strong&gt;Data analytics can be categorized into four main types based on its applications and distinct purposes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Descriptive Analytics&lt;/strong&gt;&lt;br&gt;
The goal of descriptive analytics is to comprehend &lt;strong&gt;historical occurrences and turn data into insightful summaries&lt;/strong&gt;. A retail business might, for instance, examine sales data to find that overall sales increased by 40% during the most recent quarter. Dashboards and reports are frequently created for this purpose using tools like Tableau or Power BI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Diagnostic Analytics&lt;/strong&gt;&lt;br&gt;
By finding patterns and connections in data, diagnostic analytics &lt;strong&gt;seeks to determine the causes of particular outcomes&lt;/strong&gt;. For instance, a bank may discover a correlation between increased unemployment in a region and an increase in loan defaults, in which the borrower fails to repay a loan according to the agreed terms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Predictive Analytics&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Predictive analytics forecasts future patterns by utilizing statistical models and past data&lt;/strong&gt;. E-commerce companies, for instance, can use machine learning algorithms like regression or classification to forecast the purchasing habits of their customers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Prescriptive Analytics&lt;/strong&gt;&lt;br&gt;
Prescriptive analytics goes one step further by &lt;strong&gt;integrating data with models and rules to recommend actions that achieve desired outcomes&lt;/strong&gt;. For instance, a ride-sharing app may use demand projections and traffic patterns to suggest the best prices during peak hours.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>data</category>
    </item>
    <item>
      <title>Native Policy Enforcement Engine in Kubernetes</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Fri, 06 Dec 2024 18:14:33 +0000</pubDate>
      <link>https://forem.com/asankab/native-policy-enforcement-engine-in-kubernetes-327g</link>
      <guid>https://forem.com/asankab/native-policy-enforcement-engine-in-kubernetes-327g</guid>
      <description>&lt;p&gt;This is about a policy engine that is native to Kubernetes and is used to develop, modify, and validate configurations for Kubernetes resources. Because policies are defined in YAML, this offers a declarative method of enforcing regulations without requiring developers to write code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Types of Policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Validation Policies: Verify that resources adhere to certain specifications (such as necessary annotations or labels).&lt;/p&gt;

&lt;p&gt;Mutation Policies: Automatically change resources at runtime or during admission (e.g., inject default values, labels, or annotations).&lt;/p&gt;

&lt;p&gt;Generation Policies: Create or synchronize resources (e.g., make sure a ConfigMap is always present).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Scope of Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cluster Policy: applies to every namespace in the cluster.&lt;/p&gt;

&lt;p&gt;Policy: Only applicable to one namespace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Pattern Matching&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rules enable configurable requirements for matching Kubernetes resource fields using wildcard patterns ("*" or "?") and JSONPath expressions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Validation Failure Action&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit:&lt;/strong&gt; Records policy violations but does not block resource creation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enforce:&lt;/strong&gt; Blocks resource creation when the policy is violated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Management of Policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Supports conditional logic, such as match and exclude rules.&lt;/p&gt;

&lt;p&gt;Policies can target specific namespaces, kinds, or labels.&lt;/p&gt;

&lt;p&gt;Permits combining several rules into a single policy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Dynamic Context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Adds other data sources (such as API calls or Kubernetes ConfigMaps) to make policies more context-aware and dynamic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Usability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Policies, like Kubernetes manifests, are written in the well-known YAML format.&lt;/p&gt;

&lt;p&gt;No need to learn a complex DSL or a new programming language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Policy Reports:&lt;/strong&gt; produces reports for implemented policies that display the state of compliance, audit findings, and infractions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Integration of Webhooks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Functions as a real-time resource request interceptor for the Kubernetes admission controller.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Isolation of Namespaces&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In multi-tenant clusters, policies can be scoped to namespaces to isolate tenants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. The CLI Tool&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Policies can be tested locally before being applied to a cluster using the CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. Custom Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Defines policies using Kubernetes CRDs (ClusterPolicy and Policy).&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Use Case Examples:&lt;/strong&gt;&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensuring that resources have the necessary labels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Adding default values to container limits and resource requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensuring security measures, such as limiting privileged containers or host networking.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ConfigMap synchronization between namespaces.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
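
&lt;p&gt;A minimal sketch of the first example, a validation policy that requires a label on Pods, might look like this (the policy and label names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Audit
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The label 'team' is required."
        pattern:
          metadata:
            labels:
              team: "?*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;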

&lt;p&gt;This tool makes Kubernetes policy management easier for developers and operators by utilizing YAML and well-known Kubernetes concepts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hlir99339il22t6xf10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hlir99339il22t6xf10.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzve6wg6e667uk00bmi0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzve6wg6e667uk00bmi0u.png" alt="Image description" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wcu7em1cna88ixps0vn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wcu7em1cna88ixps0vn.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I hope this article was useful. Thank you!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
    <item>
      <title>Amazon S3 Tables: A Game Changer in Analytics and Data Lake Space</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Fri, 06 Dec 2024 17:59:12 +0000</pubDate>
      <link>https://forem.com/asankab/amazon-s3-tables-a-game-changer-in-analytics-and-data-lake-space-2mjo</link>
      <guid>https://forem.com/asankab/amazon-s3-tables-a-game-changer-in-analytics-and-data-lake-space-2mjo</guid>
      <description>&lt;p&gt;&lt;strong&gt;Simplified data management and optimized query performance for workloads at any scale and as and when data grows.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it offers:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Scalability.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. Enhanced Performance.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;3. Fully managed service.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;4. Seamless Integration.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;5. Simplified Security.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Historically, businesses have used Amazon S3 as a data lake to store enormous volumes of data, fueling analytics, machine learning workloads, and innovation across industries. However, securing and managing that data efficiently as it scaled was challenging and required building complicated systems, which became costly to operate.&lt;/p&gt;

&lt;p&gt;Amazon S3 Tables, a new type of S3 bucket purpose-built to store tabular data and lower costs for data at scale, makes it easy to create and secure managed tables in just a few simple steps. You can define and manage access control policies and enforce security directly at the storage level.&lt;/p&gt;

&lt;p&gt;Say goodbye to the maintenance complexity, governance overhead, and degraded query performance you previously had to deal with manually. The new S3 Tables feature provides native support for Apache Iceberg and minimizes the operational complexities associated with it.&lt;/p&gt;

&lt;p&gt;With S3 Tables, you get up to 3x faster query performance compared to querying tabular data stored in general purpose S3 buckets. To reduce your operational burden, AWS manages data compaction, snapshot management, and partitioning for you. S3 Tables also integrates seamlessly with Amazon Athena, AWS Glue, and Amazon EMR.&lt;/p&gt;
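&lt;p&gt;&lt;em&gt;As a rough sketch of those "few simple steps" (the bucket, namespace, and table names, account ID, and region below are placeholders; check the current AWS CLI reference for exact options), creating a table bucket and an Iceberg table from the CLI looks roughly like this:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative commands; all names, the account ID, and the region are placeholders.
aws s3tables create-table-bucket \
    --name analytics-table-bucket \
    --region us-east-1

aws s3tables create-namespace \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/analytics-table-bucket \
    --namespace sales

aws s3tables create-table \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/analytics-table-bucket \
    --namespace sales \
    --name daily_orders \
    --format ICEBERG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;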

</description>
      <category>aws</category>
      <category>s3</category>
      <category>machinelearning</category>
      <category>data</category>
    </item>
    <item>
      <title>Docker Images for Go (Golang) Small, Faster Docker Images and Security</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Sat, 03 Aug 2024 08:23:13 +0000</pubDate>
      <link>https://forem.com/asankab/docker-images-for-go-golang-small-faster-docker-images-and-security-4j9i</link>
      <guid>https://forem.com/asankab/docker-images-for-go-golang-small-faster-docker-images-and-security-4j9i</guid>
      <description>&lt;p&gt;During the weekend I was doing some research on docker images specifically to be used with Go (Golang) applications. hence, thought of sharing the interesting findings as that might be useful to someone who's exploring the same for their tech project works.&lt;/p&gt;

&lt;p&gt;Here, I will walk through a few different ways we can build an image for a Go application and highlight some of the security considerations we need to take into account when picking one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For this exercise I used a simple REST API developed in Go (Golang) with the Gin framework.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Gin framework&lt;/strong&gt; is a popular web framework for Go (Golang) that is designed to be fast, easy to use, and highly efficient.&lt;br&gt;
 &lt;br&gt;
&lt;em&gt;&lt;strong&gt;Here's a brief summary of its key features and characteristics;&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; Gin is known for its high performance. It is one of the fastest Go web frameworks, providing a minimal overhead compared to other frameworks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fast HTTP Router:&lt;/strong&gt; Gin uses a fast HTTP router and supports routing with methods like GET, POST, PUT, DELETE, etc. It also supports middleware and route grouping.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Middleware Support:&lt;/strong&gt; Gin provides a way to use middleware to handle tasks such as logging, authentication, and other pre- or post-processing of requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;JSON Validation:&lt;/strong&gt; The framework offers built-in support for JSON validation and binding request data to Go structs, making it easier to work with JSON payloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Handling:&lt;/strong&gt; Gin has a structured way of handling errors and provides a central error management system, allowing you to handle errors gracefully.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Template Rendering:&lt;/strong&gt; While Gin is primarily designed for API development, it supports HTML template rendering if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request Handling:&lt;/strong&gt; Supports different methods for request handling including form data, JSON payloads, and URL parameters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Built-in Debugging:&lt;/strong&gt; Gin provides detailed error messages and debugging information that can be useful during development.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Key Components
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Router:&lt;/strong&gt; The core component that handles routing of HTTP requests to the appropriate handlers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt; A structure that carries the request and response data, and &lt;br&gt;
provides methods to handle them. It's used extensively within handlers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Engine:&lt;/strong&gt; The primary instance of the Gin application, which is configured with routes, middleware, and other settings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Middleware:&lt;/strong&gt; Functions that execute during the request lifecycle, allowing you to perform tasks like logging, authentication, and more&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
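&lt;p&gt;&lt;em&gt;To make the components above concrete, here is a minimal Gin handler (an illustrative sketch, not the exact API used in the tests below; the port matches the one the Dockerfiles expose):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// main.go - minimal Gin REST API (illustrative)
package main

import "github.com/gin-gonic/gin"

func main() {
    // Engine: gin.Default() wires in the logging and recovery middleware
    r := gin.Default()

    // Router + Context: map GET /ping to a handler receiving a *gin.Context
    r.GET("/ping", func(c *gin.Context) {
        c.JSON(200, gin.H{"message": "pong"})
    })

    r.Run(":8081") // listen and serve on port 8081
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;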

&lt;p&gt;&lt;strong&gt;Enough about the Gin framework :) Now let's move on to the main topic and talk about the tests carried out.&lt;/strong&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Test 1 (Regular Docker Build)
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;In this test we will be using the official standard Go base image.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Official Go Base Image
FROM golang:1.21.0

# Create The Application Working Directory
WORKDIR /app

# Copy and Download Dependencies
COPY go.mod go.sum ./
RUN go mod download

# Copy Source and Build The Application
COPY . .
RUN go build -o main .

# Expose The Port
EXPOSE 8081
CMD ["./main"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Image Size&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23l6i1xw05uizkurmpa9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23l6i1xw05uizkurmpa9.JPG" alt="Image description" width="710" height="53"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Test 2 (Multi-Stage Docker Build with Alpine Image)
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;In this test we will use the Alpine variant of the Go base image, which is considerably more lightweight, together with a multi-stage build.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Official Go Alpine Base Image
FROM golang:1.21.0-alpine as builder

# Create The Application Directory
WORKDIR /app

# Copy and Download Dependencies
COPY go.mod go.sum ./
RUN go mod download

# Copy The Application Source &amp;amp; Build
COPY . .
RUN go build -o main .

# Final Image Creation Stage
FROM alpine:3.19

WORKDIR /root/

# Copy The Built Binary
COPY --from=builder /app/main .

# Expose the port
EXPOSE 8081
CMD ["./main"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Image Size&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvxbr8po5lhv0zpionvi.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvxbr8po5lhv0zpionvi.JPG" alt="Image description" width="722" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here you can see that with a multi-stage build the image size is significantly reduced.&lt;/strong&gt;&lt;/p&gt;


&lt;h4&gt;
  
  
  Test 3 (Distroless Build)
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;In this test we will be using Google's Distroless base image. Distroless images are known for being lightweight and secure, containing only the minimal files required; debug shells and unnecessary packages are removed. Therefore, you sacrifice the flexibility of having a package manager and a shell.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build Stage
FROM golang:1.21.0 as builder

# Set The Application Directory
WORKDIR /app

# Copy and Download Dependencies
COPY go.mod go.sum ./
RUN go mod download

# Copy The Application Source and Build the application
COPY . .
RUN CGO_ENABLED=0 go build -o main .

# Final Image Creation Stage
FROM gcr.io/distroless/static-debian12

# Copy the built binary
COPY --from=builder /app/main /
CMD ["/main"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Image Size&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut1zixlfli3rxldd8u0u.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut1zixlfli3rxldd8u0u.JPG" alt="Image description" width="776" height="60"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;CGO_ENABLED=0:&lt;/strong&gt; This environment variable setting disables cgo, the feature of Go that allows Go packages to call C code. Setting CGO_ENABLED=0 ensures that the build does not depend on any C libraries, producing a fully static binary. This is useful for creating lightweight, portable Go binaries that can run on any system without requiring additional dependencies.&lt;/p&gt;

&lt;p&gt;Putting it all together, &lt;strong&gt;RUN CGO_ENABLED=0 go build -o main .&lt;/strong&gt; means that Docker will execute a command to build a Go application in the current directory, producing a static binary named main that does not depend on any C libraries.&lt;/p&gt;
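&lt;p&gt;&lt;em&gt;A quick way to confirm the result (assuming a Linux build environment) is to inspect the binary after building:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build the static binary, then inspect it
CGO_ENABLED=0 go build -o main .
file main   # should report an ELF binary that is "statically linked"
ldd main    # should report "not a dynamic executable"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;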

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Image Size&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut1zixlfli3rxldd8u0u.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut1zixlfli3rxldd8u0u.JPG" alt="Image description" width="776" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Even though the Distroless image is slightly bigger than the Alpine image, the security/vulnerability considerations might compel you to choose Distroless. Hence, weigh all these factors and pick the option that matches your requirements.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alpine with Multi-Stage Builds&lt;/strong&gt; is a good choice if you need more control over the build process and if compatibility issues with &lt;strong&gt;musl&lt;/strong&gt; &lt;strong&gt;libc&lt;/strong&gt; are not a concern. It provides flexibility and is smaller than many other base images, but it still includes more components than Distroless images, which can lead to potential security vulnerabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Distroless Images&lt;/strong&gt; are ideal for maximizing security and minimizing the attack surface. They provide a very minimal runtime environment, which can be beneficial for production systems where security is a priority. However, you sacrifice the flexibility of having a package manager and a shell.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommendation:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Alpine&lt;/strong&gt; if you need flexibility and control over the build environment, and if compatibility issues with &lt;strong&gt;musl&lt;/strong&gt; &lt;strong&gt;libc&lt;/strong&gt; are not a problem. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Google Distroless&lt;/strong&gt; if security and minimizing the attack surface are your top priorities, and you can ensure that your application has all its dependencies bundled properly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Hope this was interesting and thank you for your time!&lt;/em&gt;&lt;/strong&gt; &lt;/p&gt;

</description>
      <category>docker</category>
      <category>go</category>
      <category>security</category>
      <category>imagesize</category>
    </item>
    <item>
      <title>AWS multi-region Serverless application variant.</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Thu, 06 Jun 2024 01:04:41 +0000</pubDate>
      <link>https://forem.com/asankab/aws-multi-region-serverless-application-variant-2348</link>
      <guid>https://forem.com/asankab/aws-multi-region-serverless-application-variant-2348</guid>
      <description>&lt;p&gt;Multi-Region applications comes in very handy when you want to deal with users from different geographical locations, eliminating latency issue depending on distance from the place where your users accesses your application and also helps in maintaining High-Availability and DR (Disaster Recovery) situations without disrupting your users in case of a regions downtime from your cloud service provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F400ogq902m39wmn4r8om.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F400ogq902m39wmn4r8om.png" alt="Image description" width="800" height="638"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Below is a brief summary of the services used in this variant.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route 53&lt;/strong&gt;:&lt;br&gt;
Amazon Route 53 is a scalable and highly available domain name system (DNS) web service designed to route end-user requests to internet applications by translating domain names into IP addresses. It also provides domain registration, DNS health checks, and integrates seamlessly with other AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Gateway&lt;/strong&gt;: AWS API Gateway is a fully managed service that enables developers to create, publish, secure, and monitor RESTful and WebSockets APIs at scale. It seamlessly integrates with AWS services like Lambda, provides robust security features, and scales automatically to handle varying traffic loads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda&lt;/strong&gt;: AWS Lambda is a Serverless compute service that allows you to run code without provisioning or managing servers. You can execute code in response to events such as changes in data, shifts in system state, or user actions, and it automatically manages the compute resources required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DynamoDB&lt;/strong&gt;: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It is designed to handle large amounts of structured data and enables developers to offload the administrative burdens of operating and scaling distributed databases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Macie&lt;/strong&gt;: Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover, monitor, and protect sensitive data stored in Amazon S3. It helps you identify and safeguard your personally identifiable information (PII) and intellectual property, providing visibility into how this data is accessed and moved across your organization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secrets Manager&lt;/strong&gt;: AWS Secrets Manager is a service that helps you protect access to your applications, services, and IT resources. It allows you to securely store, manage, and retrieve credentials, API keys, and other secrets through a centralized and secure service, providing fine-grained access control, automatic rotation, and auditing capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudWatch&lt;/strong&gt;: Amazon CloudWatch is a monitoring and management service designed for developers, system operators, site reliability engineers (SREs), and IT managers. It provides data and actionable insights to monitor applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
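&lt;p&gt;&lt;em&gt;For the multi-region routing piece, a latency-based Route 53 alias record for one region can be sketched as follows (the domain, hosted zone ID, and API ID are placeholders; you would create one such record per region):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Comment": "Latency-based record for the us-east-1 API (illustrative)",
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "api.example.com",
      "Type": "A",
      "SetIdentifier": "us-east-1",
      "Region": "us-east-1",
      "AliasTarget": {
        "HostedZoneId": "&lt;regional-api-gateway-zone-id&gt;",
        "DNSName": "&lt;api-id&gt;.execute-api.us-east-1.amazonaws.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Such a change batch would be applied with &lt;code&gt;aws route53 change-resource-record-sets --hosted-zone-id &amp;lt;zone-id&amp;gt; --change-batch file://record.json&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;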

&lt;p&gt;&lt;em&gt;Thank you for your time.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>multiregion</category>
      <category>apigateway</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Flow of Setting up Lambda function invocation via API Gateway to perform a DynamoDB read/write operation</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Sun, 26 May 2024 14:48:50 +0000</pubDate>
      <link>https://forem.com/asankab/aws-lambda-faas-5bec</link>
      <guid>https://forem.com/asankab/aws-lambda-faas-5bec</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1hvhwnxwpn3h4nvt1ax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1hvhwnxwpn3h4nvt1ax.png" alt="Image description" width="756" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Typical Authenticated Lambda function invocation via API Gateway which will perform a DynamoDB read/write operation&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Acquire a token by passing user credentials (e.g., to Amazon Cognito).&lt;/li&gt;
&lt;li&gt;Invoke the API Gateway endpoint, passing the JWT token.&lt;/li&gt;
&lt;li&gt;API Gateway validates the token.&lt;/li&gt;
&lt;li&gt;Upon successful token validation, API Gateway routes the request to the destination Lambda function.&lt;/li&gt;
&lt;li&gt;The function retrieves any stored credentials it needs.&lt;/li&gt;
&lt;li&gt;The function reads from, or saves the incoming payload to, the database.&lt;/li&gt;
&lt;/ol&gt;
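&lt;p&gt;&lt;em&gt;Steps 1 and 2 can be sketched from the command line (the client ID, credentials, and endpoint are placeholders; this assumes the Cognito app client allows the USER_PASSWORD_AUTH flow):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Acquire a JWT by passing credentials to Amazon Cognito
aws cognito-idp initiate-auth \
    --auth-flow USER_PASSWORD_AUTH \
    --client-id &lt;app-client-id&gt; \
    --auth-parameters USERNAME=&lt;user&gt;,PASSWORD=&lt;password&gt;

# 2. Invoke the API Gateway endpoint, passing the returned IdToken
curl -H "Authorization: Bearer &lt;IdToken&gt;" \
    https://&lt;api-id&gt;.execute-api.&lt;region&gt;.amazonaws.com/prod/items
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;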

</description>
      <category>lambda</category>
      <category>serverless</category>
      <category>apigateway</category>
      <category>cognito</category>
    </item>
    <item>
      <title>Amazon Macie to detect sensitive data from your S3 Buckets</title>
      <dc:creator>Asanka Boteju</dc:creator>
      <pubDate>Sun, 26 May 2024 07:58:55 +0000</pubDate>
      <link>https://forem.com/asankab/amazon-macie-to-detect-sensitive-data-from-s3-buckets-1eol</link>
      <guid>https://forem.com/asankab/amazon-macie-to-detect-sensitive-data-from-s3-buckets-1eol</guid>
      <description>&lt;p&gt;Leaking data or sensitive information exposure can lead to many insecurities to your organization including loss of business reputation and trust as well as long-term financially losses. Therefore, security is something we should seriously look at including applying security prevention, detection guardrails, monitoring, remediation and governance to stay on top of security of your businesses and its applications. To manage these sort of issues AWS provides a variety of security services that can be applied at different levels to safe-guard you and your customers business data while uplifting your businesses security posture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm84f9rvov58rpns9cie.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm84f9rvov58rpns9cie.PNG" alt="Image description" width="646" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Macie is a fully managed service that uses machine learning and pattern matching to address data security and data privacy concerns. Macie provides a detailed list of the sensitive information it finds in your S3 buckets so you can review it and take action. Actions can be taken manually or automated based on events using services like Lambda and Step Functions; automation avoids delays and human error, allowing you to act instantly to remediate or alert on threats. Your content is scanned based on pre-defined AWS managed rules as well as any custom rules you define. Macie also integrates natively with AWS Organizations, allowing you to centrally govern and perform operations at scale across your organization.&lt;/p&gt;

&lt;p&gt;Macie can find PII (Personally Identifiable Information) such as names, addresses, and contact details; national identification information such as passport numbers, identity cards, driver's licenses, and social security numbers; medical information such as health records and pharmacy data; and even credentials and keys such as AWS secret keys and private keys.&lt;/p&gt;

&lt;p&gt;That's not all: Macie can also scan for and detect threats related to PFI (Personal Financial Information) such as credit card numbers and bank account details. Detected threats are presented as findings via several AWS services, including the Macie console, the Macie APIs, Amazon EventBridge, and AWS Security Hub.&lt;/p&gt;

&lt;p&gt;To scan and detect threats in the data stored in your S3 buckets, Macie uses a service-linked role to acquire the permissions necessary to create an inventory of all your S3 buckets, monitor them, collect statistics, analyze objects, and detect sensitive information. Macie also creates metadata about all your S3 buckets. This metadata is usually refreshed every 24 hours as part of Macie's refresh cycle, and you can also trigger a refresh manually from the Macie console as often as every 5 minutes. The metadata captured (see below) is used for ongoing and future threat detection operations.&lt;/p&gt;

&lt;p&gt;Macie creates a finding for each threat it detects from the moment you enable it. For example, if someone disables the default encryption on a bucket, Macie creates a finding for you to review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some of the captured metadata includes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name&lt;/li&gt;
&lt;li&gt;ARN&lt;/li&gt;
&lt;li&gt;Creation Date&lt;/li&gt;
&lt;li&gt;Account-Level Access/Permissions&lt;/li&gt;
&lt;li&gt;Shared/Cross-Account Access and Replication Settings&lt;/li&gt;
&lt;li&gt;Object Counts etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;During scanning and threat detection Macie looks for &lt;strong&gt;unencrypted buckets, publicly accessible buckets, and buckets shared with other accounts without an explicit allow defined&lt;/strong&gt;, and then analyzes and collects findings for the categories listed below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- S3BucketPublicAccessDisabled&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;- S3BucketEncryptionDisabled&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;- S3BucketPublic&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;- S3BucketReplicatedExternally&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;- S3BucketSharedExternally&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each finding has a defined severity and general information about the threat, including the bucket name and when and how Macie detected it. Findings are available for 90 days from the date the scan collected the information and can be viewed and explored from the &lt;strong&gt;Macie console, the Macie APIs, EventBridge, and AWS Security Hub&lt;/strong&gt; so you can take the necessary precautions to mitigate the detected issues. You can also suppress findings if you are sure they are acceptable under the compliance policies and regulations you have in place.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Important to Note:&lt;/strong&gt;&lt;/u&gt;&lt;br&gt;
Your S3 buckets may have different server-side or client-side encryption configurations, and depending on the method configured for each bucket, there are some implications that may prevent Amazon Macie from analyzing your buckets and detecting threats.&lt;/p&gt;

&lt;p&gt;For instance, if your buckets are configured with SSE-S3 or SSE-KMS server-side encryption, Macie can scan, detect, and report threats without issue.&lt;/p&gt;

&lt;p&gt;However, if you use a CMK (&lt;em&gt;Customer Managed Key&lt;/em&gt;) to encrypt your S3 data, you must explicitly allow Macie to use that key during the execution of the sensitive data discovery job (which can be configured to run one time or on a daily, weekly, or monthly basis); otherwise Macie cannot decrypt the data and therefore cannot analyze it or detect threats.&lt;/p&gt;
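&lt;p&gt;&lt;em&gt;As an illustration (the account ID, bucket, and job name are placeholders; verify the options against the current AWS CLI reference), a daily sensitive data discovery job can be created from the CLI roughly like this:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative: creates a daily scheduled classification job for one bucket
aws macie2 create-classification-job \
    --job-type SCHEDULED \
    --name daily-sensitive-data-scan \
    --schedule-frequency '{"dailySchedule": {}}' \
    --s3-job-definition '{"bucketDefinitions":
        [{"accountId": "111122223333", "buckets": ["my-data-bucket"]}]}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;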

&lt;p&gt;Similarly, for SSE-C (&lt;em&gt;server-side encryption with customer-provided keys&lt;/em&gt;), Macie is unable to decrypt the data to analyze it and detect threats, so it will only report metadata about those buckets. The same goes for any S3 buckets configured to use client-side encryption.&lt;/p&gt;

&lt;p&gt;Also note that Macie cannot analyze and detect threats in audio, video, or image files; for those you may need a different AWS service, such as Amazon Rekognition.&lt;/p&gt;

&lt;p&gt;Further, it is important to keep in mind that an organization can have only a single Macie administrator account at a given time, and an account cannot be both a Macie administrator and a member account.&lt;/p&gt;

&lt;p&gt;If you ever change the Macie administrator account, note that all member accounts will be removed; however, Macie will not be disabled in those member accounts.&lt;/p&gt;

&lt;p&gt;A member account can be associated with only one administrator at a given time, and once associated it cannot disassociate itself from that administrator account.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thank you for your time...&lt;/em&gt;&lt;/p&gt;

</description>
      <category>s3</category>
      <category>security</category>
      <category>macie</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
