<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Gianluigi Mucciolo</title>
    <description>The latest articles on Forem by Gianluigi Mucciolo (@ggiallo28).</description>
    <link>https://forem.com/ggiallo28</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F516268%2Fb8add4e1-d8bf-4a1f-b988-ccfcad1f668c.png</url>
      <title>Forem: Gianluigi Mucciolo</title>
      <link>https://forem.com/ggiallo28</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ggiallo28"/>
    <language>en</language>
    <item>
      <title>Sleigh-Ride Cloud Chronicles: A 25-Day Journey Through Cloud and Magic</title>
      <dc:creator>Gianluigi Mucciolo</dc:creator>
      <pubDate>Sun, 30 Nov 2025 17:17:57 +0000</pubDate>
      <link>https://forem.com/aws-builders/sleigh-ride-cloud-chronicles-a-25-day-journey-through-cloud-and-magic-443e</link>
      <guid>https://forem.com/aws-builders/sleigh-ride-cloud-chronicles-a-25-day-journey-through-cloud-and-magic-443e</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"The reindeer are in Ibiza. The elves are on strike. Christmas hangs in the balance. And you, you have 25 days to help Santa save it."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Introduction: The Night the North Pole Went Dark&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxawr09g45jtv1y7it5fz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxawr09g45jtv1y7it5fz.jpeg" alt="Abandoned North Pole workshop with piles of letters and a single lamp glowing in the darkness" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is December 1st, and something has gone terribly wrong at the North Pole.&lt;/p&gt;

&lt;p&gt;The elves have gone on strike - something about dental coverage for candy-cane-related injuries. The reindeer? Last spotted in Ibiza, sipping piña coladas and posting photos that Santa probably shouldn't see. The toy workshop stands frozen and silent. And in Santa's office, letters from children around the world are piling up, unopened, unanswered, turning from hope into heartbreak with each passing hour.&lt;/p&gt;

&lt;p&gt;For the first time in centuries, Santa is completely alone.&lt;/p&gt;

&lt;p&gt;He stares at the infrastructure he has relied on for so long, now collapsed overnight. Desperate, he calls the only organization that might possibly help: Amazon. But they don't send replacement elves or magical reindeer. They send him infrastructure. Cloud services. APIs. Foundation models.&lt;/p&gt;

&lt;p&gt;Santa looks at this pile of raw technical materials and realizes: if he is going to save Christmas, he needs to become the world's first Generative AI Cloud Architect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gw3oazkkoxgrqth7933.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gw3oazkkoxgrqth7933.jpeg" alt="Santa examining floating AWS cloud service holograms with a determined expression" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The Journey Begins&lt;/h2&gt;

&lt;p&gt;Santa takes a deep breath and opens the first letter. It's from a tech-savvy kid who uploaded it to an S3 bucket - full of emojis, memes, and AI-generated nonsense. Santa has no idea what half of it means.&lt;/p&gt;

&lt;p&gt;But he has 25 days. And he has Amazon Bedrock.&lt;/p&gt;

&lt;p&gt;Day by day, challenge by challenge, Santa learns. He starts with the basics - teaching a foundation model to extract meaning from chaotic letters. Then he builds memory systems so he doesn't forget why Timmy painted the cat blue. He creates AI agents to help him: Rudy (anxious, brilliant, hyper-organized) and Elfie (enthusiastic, literal-minded, connected to the toy catalog). Together, they must learn to collaborate without causing disasters.&lt;/p&gt;

&lt;p&gt;Like the incident with the uranium-contaminated chemistry kit. We don't talk about that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fus8r93fr2zyerpf16swl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fus8r93fr2zyerpf16swl.jpeg" alt="Festive Advent calendar displaying 25 cloud and AI concepts as illustrated icons" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The story isn't just decoration - it's the framework that makes complex AI concepts stick. When you're helping Santa search through ten years of "Naughty and Nice" records, you're learning RAG (Retrieval-Augmented Generation). When you're teaching his agents to work together, you're learning multi-agent orchestration. The narrative gives you something to anchor to.&lt;/p&gt;

&lt;h2&gt;The Technical Journey&lt;/h2&gt;

&lt;p&gt;The project is structured as 25 progressive challenges. You follow Santa as he rebuilds Christmas using generative AI. Underneath the narrative wrapping is a carefully architected progression from basic prompt engineering to multi-agent orchestration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvk5smvjjauh7ceo8swnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvk5smvjjauh7ceo8swnc.png" alt="Snowy winding roadmap illustrating a 25-day cloud and AI learning journey." width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Phase 1 (Days 1-6): From Letters to Data&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Narrative:&lt;/strong&gt; Santa learns to read his first digital letter - a tech-savvy kid uploaded it to an S3 bucket, full of emojis and AI-generated nonsense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Tech:&lt;/strong&gt; Fundamentals of prompt engineering, entity extraction, text-to-image generation, validation logic, and chunking strategies. You teach Santa how to use a foundation model to make sense of the chaos.&lt;/p&gt;
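
&lt;p&gt;As a taste of what this looks like in practice, here is a minimal sketch of an extraction call using boto3 and the Bedrock Converse API - the model ID and prompt wording are illustrative assumptions, not the repository's actual code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal sketch of an extraction call via the Bedrock Converse API.
# Model ID and prompt are illustrative assumptions, not the repo's code.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def extract_entities(letter_text: str) -&gt; dict:
    prompt = (
        "Extract the child's name, age, and requested gifts from this "
        "letter. Reply with JSON only.\n\n" + letter_text
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    # Assumes the model complied and returned bare JSON.
    return json.loads(response["output"]["message"]["content"][0]["text"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;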

&lt;h3&gt;Phase 2 (Days 7-12): Knowledge &amp;amp; Memory&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Narrative:&lt;/strong&gt; The past matters. Santa needs to track conversations without forgetting who "Timmy" is or why he painted the cat blue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Tech:&lt;/strong&gt; Vector stores, RAG patterns, multi-hop retrieval, session management, chain-of-thought reasoning, and qualitative ranking.&lt;/p&gt;
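
&lt;p&gt;To give a feel for the retrieval side, here is a toy sketch that embeds text with a Titan embedding model and ranks documents by cosine similarity; the model ID and the plain Python list standing in for a vector store are assumptions for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Toy sketch of the retrieval side of RAG: embed text with a Titan
# embedding model and rank documents by cosine similarity. A plain
# Python list stands in for a real vector store.
import json
import math
import boto3

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -&gt; list:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # assumed model
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

def cosine(a: list, b: list) -&gt; float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query: str, documents: list, k: int = 3) -&gt; list:
    q = embed(query)
    scored = sorted(((cosine(q, embed(d)), d) for d in documents), reverse=True)
    return [doc for _, doc in scored[:k]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;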

&lt;h3&gt;Phase 3 (Days 13-18): Agents &amp;amp; Tools&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Narrative:&lt;/strong&gt; Santa realizes he can't do this alone. Enter Rudy (his first AI agent: anxious, brilliant, hyper-organized) and Elfie (enthusiastic, literal-minded, connected to the toy catalog).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Tech:&lt;/strong&gt; Defining AI agents with personas, function calling, multi-agent collaboration, API integration, guardrails, and error handling. You must teach them to collaborate without causing disasters (like the incident with the uranium-contaminated chemistry kit).&lt;/p&gt;
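
&lt;p&gt;The function-calling piece might look roughly like the sketch below, which uses the Converse API's tool support; the toy-catalog tool name and schema are hypothetical stand-ins, not the project's actual definitions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of function calling with the Converse API's tool support.
# The toy-catalog tool is a hypothetical stand-in.
import boto3

bedrock = boto3.client("bedrock-runtime")

tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "lookup_toy",  # hypothetical tool
            "description": "Look up a toy in the workshop catalog.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"toy_name": {"type": "string"}},
                "required": ["toy_name"],
            }},
        }
    }]
}

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model
    messages=[{"role": "user", "content": [{"text": "Can we build a red bike?"}]}],
    toolConfig=tool_config,
)

# When the model decides to call the tool, the stop reason is
# "tool_use" and the request appears in the content blocks.
if response["stopReason"] == "tool_use":
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            print(block["toolUse"]["name"], block["toolUse"]["input"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;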

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3iosj8b1bxbzjzxodxt.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3iosj8b1bxbzjzxodxt.jpeg" alt="Reindeer agent and elf agent collaborating at a tech workstation with holographic data" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Phase 4 (Days 19-23): The Autonomous Workflow&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Narrative:&lt;/strong&gt; Everything connects. The system must run on its own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Tech:&lt;/strong&gt; Pipeline orchestration, context caching, multimodal consistency, human-in-the-loop patterns for high-stakes decisions, and automated reporting.&lt;/p&gt;
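
&lt;p&gt;A human-in-the-loop gate can be as simple as routing low-confidence or high-stakes decisions to a review queue. This is a generic sketch of the pattern, not the repository's implementation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Generic human-in-the-loop gate: auto-approve only confident,
# low-stakes decisions; everything else waits for a human reviewer.
REVIEW_QUEUE = []

def route_decision(decision: dict, confidence: float, high_stakes: bool) -&gt; str:
    if high_stakes or confidence &lt; 0.9:   # threshold is illustrative
        REVIEW_QUEUE.append(decision)     # a human signs off later
        return "pending_review"
    return "auto_approved"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;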

&lt;h3&gt;Phase 5 (Days 24-25): Production Readiness&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Narrative:&lt;/strong&gt; A parent asks why their child got coal. You explain the reasoning. Then comes the final test: 5,000 letters, all at once.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Tech:&lt;/strong&gt; Explainability, observability, and a final comprehensive test that synthesizes everything.&lt;/p&gt;
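
&lt;p&gt;Explainability here largely means keeping an auditable trail. A minimal version is a structured log record per decision; the field names below are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal decision audit record: enough to answer "why did this
# child get coal?" after the fact. Field names are illustrative.
import json
import time

def log_decision(child_id: str, verdict: str, evidence: list, model_id: str) -&gt; None:
    record = {
        "timestamp": time.time(),
        "child_id": child_id,
        "verdict": verdict,        # e.g. "nice" or "naughty"
        "evidence": evidence,      # retrieved passages the model cited
        "model_id": model_id,
    }
    print(json.dumps(record))      # ship to CloudWatch or S3 in practice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;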

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0apw7utjm4bkids8jnw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0apw7utjm4bkids8jnw.jpeg" alt="Futuristic dashboard showing Santa monitoring an automated AI pipeline" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;p&gt;The repository is organized to mirror how you would actually work through this. You don't need to read ahead or understand the whole system before starting. You just need to show up for Day 1.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sleigh-ride-cloud-chronicles/
├── README.md
├── requirements.txt
├── datasets/          # Shared resources
├── resources/         # Constraints, personas, specs
├── day01/ through day25/
│   ├── task.md        # The narrative chapter &amp;amp; challenge
│   ├── input/         # Sample letters/data
│   └── output/        # Specification for success
└── utils/             # Your workspace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Time:&lt;/strong&gt; One concept per day. 1-2 hours max.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pace:&lt;/strong&gt; Work through it at whatever pace your life permits. The repository is static and self-contained.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt; You need an AWS account with Bedrock access, Python 3.9+, and a willingness to learn through story. No prior AI/ML experience is required.&lt;/p&gt;

&lt;h2&gt;What You'll Build&lt;/h2&gt;

&lt;p&gt;By the end of these 25 days, you will have built a complete pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Letter ingestion and entity extraction&lt;/li&gt;
&lt;li&gt;Semantic search and behavior analysis&lt;/li&gt;
&lt;li&gt;Multi-agent orchestration where AI personas collaborate&lt;/li&gt;
&lt;li&gt;Validation layers that catch unsafe outputs&lt;/li&gt;
&lt;li&gt;Explainability logging so every decision can be audited&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1h9ewov4pcr3ih5k0iw5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1h9ewov4pcr3ih5k0iw5.jpeg" alt="Diagram showing letters flowing through an AI pipeline from extraction to delivery" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More importantly, you'll understand how these pieces fit together. How embeddings connect to retrieval. How retrieval connects to generation. How generation connects to validation.&lt;/p&gt;

&lt;h2&gt;Your Mission&lt;/h2&gt;

&lt;p&gt;Santa's workshop is silent. The letters are piling up. And somewhere in the cloud, there is infrastructure waiting to be discovered.&lt;/p&gt;

&lt;p&gt;Christmas hangs in the balance. And you have 25 days to help Santa save it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎁 &lt;strong&gt;Begin Your 25-Day Cloud + AI Journey&lt;/strong&gt;&lt;br&gt;
Start working through the challenges at your own pace in the &lt;a href="https://github.com/ggiallo28/sleigh-ride-cloud-chronicles/tree/main" rel="noopener noreferrer"&gt;Sleigh-Ride Cloud Chronicles GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vjod9pgqjixm8mjxjar.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vjod9pgqjixm8mjxjar.jpeg" alt="Santa's workshop lighting up with a laptop showing 'Day 1' as Christmas begins" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>challenge</category>
      <category>cloud</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>PartyRock: Unleashing Creativity with SCAMPER</title>
      <dc:creator>Gianluigi Mucciolo</dc:creator>
      <pubDate>Tue, 12 Mar 2024 18:05:22 +0000</pubDate>
      <link>https://forem.com/aws-builders/partyrock-unleashing-creativity-with-scamper-1c09</link>
      <guid>https://forem.com/aws-builders/partyrock-unleashing-creativity-with-scamper-1c09</guid>
      <description>&lt;p&gt;I recently shared my fascination with PartyRock at an AWS User Group Rome meeting, discussing how I've ventured from RPG simulators to innovative chat applications. Highlighting the potential of generative AI with AWS, I showcased a new PartyRock app, emphasizing prompt engineering for interactive conversations and creative outputs. Catch the session on &lt;a href="https://www.youtube.com/watch?v=3B4WmoCYWss"&gt;YouTube&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;This journey inspired my latest project for the PartyRock Hackathon, utilizing the SCAMPER method to spur creativity and innovation among users, challenging conventional thinking.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Innovation from the Ground Up&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;My adventure began with a concept deeply rooted in the SCAMPER technique. This tool was imagined as a digital mentor, guiding users through the seven essential steps of creative problem-solving, pushing them to view challenges from perspectives they hadn't considered before.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Crafting with Cutting-Edge Technology&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Amazon PartyRock became the foundation of my creation, bringing my ideas to life. Through prompt engineering and the use of the Claude model, I integrated the SCAMPER methodology into the core of my application. This combination was a deliberate choice, with the goal of positioning the tool at the intersection of technology and creativity.&lt;/p&gt;
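
&lt;p&gt;To give a sense of what this prompt engineering looks like inside a widget, here is a hypothetical reconstruction of a SCAMPER-style prompt - the wording is mine for illustration, not the app's actual prompt:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a creative-thinking mentor applying the SCAMPER "Substitute" step.

Challenge: [the challenge from the input widget]
Language: [the selected language]

List 3-5 elements of the challenge that could be substituted, propose a
replacement for each, and end with a short summary of the most promising
substitution.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;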

&lt;h2&gt;&lt;strong&gt;A Step-by-Step Guide to Unleashing Creativity&lt;/strong&gt;&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Define the challenge&lt;/strong&gt;: Start with clarity, introducing a section for users to articulate their challenge, laying the groundwork for targeted solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Language Selection&lt;/strong&gt;: Focusing on inclusivity, the tool includes a language selector, allowing users to explore solutions in the language they are most comfortable with. &lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uxwr5bk41n9hfpfnkjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uxwr5bk41n9hfpfnkjh.png" alt="Define the problem or challenge clearly" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kickstart with Substitution&lt;/strong&gt;: By activating the "Substitute Enabler," users can begin the substitution phase, examining alternatives and replacements for elements of their challenge, thereby broadening their horizons.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Explore Substitution in Depth&lt;/strong&gt;: The tool includes a widget to help identify substitutable elements, offering guidance and summarizing proposed changes for a comprehensive exploration of potential solutions. &lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdaj24dy531dbvsn8z71r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdaj24dy531dbvsn8z71r.png" alt="Explore Substitution" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Navigate through the SCAMPER strategies&lt;/strong&gt;: The tool includes a widget for each SCAMPER strategy - Substitute, Combine, Adapt, Modify, Put to Another Use, Eliminate, Reverse - to encourage a thorough investigation of all the possibilities for innovation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reveal the Final Solution&lt;/strong&gt;: The "SCAMPER Result" widget presents an innovative solution or improvement at the end of the journey, demonstrating the effectiveness of structured creativity. &lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpv2vir13ee6pa3h5ohf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpv2vir13ee6pa3h5ohf.png" alt="Reveal the Final" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Address Contradictions&lt;/strong&gt;: The "SCAMPER Contradictions" widget ensures the solution's coherence by addressing any contradictions that might emerge during the brainstorming process. &lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2teexd2buw4xoku7xr5n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2teexd2buw4xoku7xr5n.png" alt="Address Contradictions" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Expand with AI Insights&lt;/strong&gt;: The "Critical AI SCAMPER" widget pushes the boundaries further, offering AI-driven suggestions to refine and enhance the idea.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Navigating Challenges and Learning Along the Way&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A significant challenge was ensuring smooth performance with multiple widgets running at the same time. Introducing an "ON/OFF" feature dramatically improved the user experience. This journey wasn't just about creation; it deepened my understanding of prompt engineering and highlighted the importance of performance optimization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Road Ahead&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Looking forward, I'm focused on enhancing the tool's performance and functionality. The community's engagement excites me, and I'm keen to integrate their feedback into the continuous development of our tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://partyrock.aws/u/birillo/92GK1x8sK/SCAMPER%3A-A-Powerful-Tool-for-Creative-Problem-Solving"&gt;SCAMPER: A Powerful Tool for Creative Problem-Solving&lt;/a&gt;&lt;/p&gt;

</description>
      <category>partyrock</category>
      <category>beginners</category>
      <category>ai</category>
      <category>aws</category>
    </item>
    <item>
      <title>Building Elastic and Fully Managed Cloud-Native VectorDB Milvus Infrastructure on AWS</title>
      <dc:creator>Gianluigi Mucciolo</dc:creator>
      <pubDate>Tue, 05 Mar 2024 15:51:57 +0000</pubDate>
      <link>https://forem.com/aws-builders/building-elastic-and-fully-managed-cloud-native-milvus-infrastructure-on-aws-4ng7</link>
      <guid>https://forem.com/aws-builders/building-elastic-and-fully-managed-cloud-native-milvus-infrastructure-on-aws-4ng7</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Milvus is an advanced open-source vector database designed to revolutionize AI and analytics applications. With its ability to handle high-performance similarity search on massive-scale vector datasets, Milvus has become an essential tool for businesses in today's data-driven world. From recommendation systems to image recognition and natural language processing, Milvus enables organizations to unlock the full potential of generative AI algorithms and extract valuable insights from complex data.&lt;/p&gt;

&lt;p&gt;To fully leverage the power of Milvus, deploying it within a robust, scalable, and managed infrastructure is crucial. In this blog post, I will explore how you can build an elastic and fully managed cloud-native Milvus infrastructure on AWS, taking advantage of its scalability, reliability, and ease of management. By harnessing the capabilities of Milvus in combination with AWS services, businesses can supercharge their generative AI initiatives and achieve remarkable results in fields such as content generation, recommendation engines, and personalized user experiences.&lt;/p&gt;

&lt;h2&gt;Overview of Milvus Architecture&lt;/h2&gt;

&lt;p&gt;At the core of Milvus lies its shared-storage architecture, consisting of four essential layers: the access layer, coordinator service, worker node, and storage. This architectural design allows for scalability, as well as the disaggregation of storage and computing resources, resulting in a cost-efficient and flexible infrastructure. The independent scalability of each layer further enhances the system's agility and resilience, ensuring seamless disaster recovery.&lt;/p&gt;

&lt;p&gt;Milvus's architecture distinctively separates compute resources from storage, incorporating dedicated storage nodes. A significant advantage of this design is the ability to scale the compute cluster down to zero while keeping the data nodes active. This setup ensures that data remains accessible in Amazon S3, providing a flexible and efficient way to manage resources without compromising data availability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxo6wwgteagdaby37z63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxo6wwgteagdaby37z63.png" alt="Milvus architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://milvus.io/docs/architecture_overview.md" rel="noopener noreferrer"&gt;Milvus architecture&lt;/a&gt; diagram above comes from the official documentation.&lt;/p&gt;

&lt;h2&gt;Existing Infrastructure on Kubernetes&lt;/h2&gt;

&lt;p&gt;The current deployment of Milvus operates on Kubernetes, utilizing components such as etcd for distributed key-value storage, MinIO for object storage, and Pulsar for distributed messaging. While this setup is functional, Milvus's architecture is designed to be portable, allowing it to run on various infrastructures. To gain the benefits of an AWS-native solution and further enhance the deployment, you can introduce updates that leverage the serverless and fully managed solutions available on AWS.&lt;/p&gt;

&lt;p&gt;Serverless on AWS provides technologies for running code, managing data, and integrating applications without the need to manage servers. It offers automatic scaling, built-in high availability, and a pay-for-use billing model, increasing agility and optimizing costs. By leveraging serverless technologies, you can enhance the scalability and efficiency of the Milvus deployment on AWS.&lt;/p&gt;

&lt;p&gt;AWS fully-managed services, on the other hand, are Amazon's cloud computing provisions where AWS handles the entire infrastructure and manages the required resources to deliver reliable services. This includes managing servers, storage, operating systems, databases, and other critical resources fundamental to the service infrastructure. By utilizing fully-managed services, you can ensure a robust and reliable Milvus deployment on AWS, reducing operational overhead and increasing the focus on utilizing Milvus's capabilities effectively.&lt;/p&gt;

&lt;p&gt;By transitioning the existing Kubernetes deployment of Milvus to leverage serverless and fully-managed solutions on AWS, you can unlock the full potential of Milvus in terms of scalability, reliability, and ease of management. In the next sections, I will explore the proposed infrastructure using AWS services and its benefits in building an elastic and fully managed cloud-native Milvus infrastructure.&lt;/p&gt;

&lt;h2&gt;Proposed Infrastructure with AWS Services&lt;/h2&gt;

&lt;p&gt;To enhance the Milvus deployment on AWS, I propose replacing certain components with AWS services that offer scalability, reliability, and ease of management. These replacements, illustrated with a configuration sketch after the list, include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon MSK (Managed Streaming for Apache Kafka)&lt;/strong&gt;: MSK replaces Pulsar for messaging and log management. It provides a fully managed Kafka service that ensures robust messaging and log processing, allowing for seamless integration into your Milvus deployment. For future exploration, it is worthwhile to consider Amazon Kinesis, a fully managed streaming service that offers seamless integration with the AWS ecosystem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aurora Serverless&lt;/strong&gt;: Aurora Serverless replaces etcd as the metadata storage and coordination system. It offers a serverless database service that automatically scales to match workload demands. With Aurora Serverless, you can ensure efficient and scalable management of metadata in your Milvus infrastructure. Currently Milvus only supports MySQL, but as an alternative metastore it is also worth exploring Amazon DynamoDB, a highly scalable NoSQL database optimized for key-value workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Load Balancer (ALB)&lt;/strong&gt;: ALB handles load balancing and routing of Milvus requests, ensuring high availability and efficient distribution of traffic to the various components. ALB's dynamic routing capabilities enable seamless traffic management within the Milvus infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon S3&lt;/strong&gt;: Amazon S3 replaces MinIO for data persistence. It offers highly scalable, reliable, and cost-effective object storage. By leveraging Amazon S3, you can achieve seamless data persistence for your Milvus deployment, while benefiting from the scalability and durability of AWS's object storage service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon ECS&lt;/strong&gt;: Milvus containers can be effortlessly deployed on AWS Fargate, a serverless compute engine specifically designed for containers. By utilizing ECS Fargate, you liberate yourself from the complexities of managing underlying infrastructure, enabling you to devote your attention to fine-tuning resource utilization and elevating the performance of your Milvus deployment. For future explorations, you can draw inspiration from the design considerations of Aurora Serverless for high throughput cloud-native vector databases. This involves separating storage and computation, ensuring that you only pay for computational power when it is actually needed, resulting in optimized cost efficiency and enhanced scalability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Cloud Map&lt;/strong&gt;: Milvus distributed infrastructure requires effective service discovery mechanisms to enable efficient management and scaling of applications. With AWS Cloud Map, you can easily locate and communicate with the services you need, without the hassle of managing your own service registry.&lt;/li&gt;
&lt;/ul&gt;
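
&lt;p&gt;As a concrete illustration of these replacements, a milvus.yaml fragment pointing Milvus at AWS-managed backends might look roughly like this; the key names approximate the configuration schema and all endpoints are placeholders, so consult the Milvus configuration reference before use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Illustrative milvus.yaml fragment. Key names approximate the real
# configuration schema; endpoints are placeholders.
minio:                        # S3-compatible object storage settings
  address: s3.amazonaws.com
  useSSL: true
  bucketName: my-milvus-bucket                         # placeholder
kafka:                        # Amazon MSK brokers instead of Pulsar
  brokerList: "b-1.mymsk.example.amazonaws.com:9092"   # placeholder
metastore:                    # MySQL-compatible Aurora as metastore
  type: mysql
mysql:
  username: milvus
  address: my-aurora.cluster-example.rds.amazonaws.com # placeholder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;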

&lt;p&gt;By incorporating these AWS services and considering future possibilities, you can build an elastic and fully managed cloud-native Milvus infrastructure that maximizes scalability, reliability, and operational efficiency. In the next sections, I will delve into the architecture of this new infrastructure and explore its benefits in detail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrr1i7cljpjzyav3jl08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrr1i7cljpjzyav3jl08.png" alt="Architecture of the New Infrastructure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Architecture of the New Infrastructure&lt;/h2&gt;

&lt;p&gt;In the proposed infrastructure, AWS services seamlessly integrate into the Milvus deployment, enhancing scalability, manageability, and overall performance. MSK, Aurora Serverless, ALB, Amazon S3, and ECS Fargate play pivotal roles in ensuring a robust and elastic infrastructure.&lt;/p&gt;

&lt;h2&gt;Benefits of AWS Services for Milvus&lt;/h2&gt;

&lt;p&gt;The adoption of AWS services brings several key advantages to Milvus deployments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: AWS services such as MSK, Aurora Serverless, and ECS Fargate enable effortless scaling of resources based on workload demands. This ensures efficient management of high-volume data, allowing your Milvus deployment to handle growing datasets with ease.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managed Services&lt;/strong&gt;: By leveraging managed services, you can significantly reduce operational overhead. AWS takes care of the underlying infrastructure, ensuring high availability and durability. This allows you to focus on leveraging Milvus's capabilities without the burden of managing the infrastructure yourself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reliability&lt;/strong&gt;: AWS services provide a robust and reliable infrastructure, offering stability and performance for your Milvus deployment. With built-in redundancy and fault-tolerant designs, you can trust that your Milvus infrastructure will operate smoothly and reliably.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Efficiency&lt;/strong&gt;: AWS services offer cost-effective solutions for Milvus deployments. Services like Aurora Serverless and ECS Fargate enable you to pay only for the computational resources you actually use, optimizing cost efficiency. Additionally, Amazon S3 provides highly scalable and cost-effective object storage, eliminating the need for managing and provisioning your own storage infrastructure. By leveraging AWS services, you can achieve significant cost savings while maintaining the scalability and reliability required for your Milvus deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By incorporating these benefits into your Milvus deployment, you can unleash the full potential of Milvus for high-performance similarity search on massive-scale vector datasets, while ensuring scalability, reliability, and cost efficiency.&lt;/p&gt;

&lt;h2&gt;Deployment Process on AWS and Challenges&lt;/h2&gt;

&lt;p&gt;I began by following the official instructions currently available in the documentation, which present two options for deploying Milvus: using Terraform and Ansible, or employing Docker Compose (not recommended for production environments). Initially, I opted for Docker Compose and attempted to deploy it on Amazon ECS using the ecs-cli. However, I encountered several incompatibilities and, after many hours of effort, decided to abandon Docker Compose. Despite this setback, the experience proved to be invaluable, as it greatly enhanced my understanding of both ecs-cli and Milvus's internal architecture.&lt;/p&gt;

&lt;p&gt;Consequently, I decided to build the entire infrastructure from scratch. Given my previous experience, this approach seemed far simpler to manage. I began by deploying the Virtual Private Cloud (VPC), the ECS cluster, and then proceeded to install each of the Milvus components individually. During this process, Milvus introduced support for multiple coordinators in both active and standby modes, further complicating deployment, but in a more exciting way.&lt;/p&gt;

&lt;p&gt;One of the most significant challenges I faced - and continue to face - relates to etcd. As you may know, etcd uses the Raft protocol, enabling a cluster of nodes to maintain a replicated state machine. I managed to deploy a single etcd node in ECS, but to get Raft working, I had to implement several workarounds, such as assigning task names using tags. While not ideal, it was the only viable solution, particularly since ECS does not yet support StatefulSets.&lt;/p&gt;

&lt;p&gt;Currently, I have a functioning cluster, but its etcd lacks high availability. If you have any suggestions on how to enhance the architecture, or if you're interested in collaborating on this project, your participation would be greatly appreciated.&lt;/p&gt;

&lt;p&gt;Additionally, if you're willing, please consider helping to make the StatefulSets feature available on ECS by supporting this request: &lt;a href="https://github.com/aws/containers-roadmap/issues/127" rel="noopener noreferrer"&gt;https://github.com/aws/containers-roadmap/issues/127&lt;/a&gt; . 🙏&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Deploying Milvus on AWS using managed services like MSK, Aurora Serverless, ALB, Amazon S3, and ECS Fargate offers significant benefits in terms of scalability, reliability, and ease of management. By adopting this infrastructure, businesses can unlock the full potential of Milvus for high-performance similarity search on massive-scale vector datasets. With AWS services, you can build an elastic and fully managed cloud-native Milvus infrastructure that can handle the most demanding AI and analytics workloads.&lt;/p&gt;

</description>
      <category>database</category>
      <category>architecture</category>
      <category>networking</category>
      <category>aws</category>
    </item>
    <item>
      <title>Serverless Apache Zeppelin on AWS</title>
      <dc:creator>Gianluigi Mucciolo</dc:creator>
      <pubDate>Sun, 04 Feb 2024 12:20:04 +0000</pubDate>
      <link>https://forem.com/aws-builders/serverless-apache-zeppelin-on-aws-4m52</link>
      <guid>https://forem.com/aws-builders/serverless-apache-zeppelin-on-aws-4m52</guid>
      <description>&lt;h2&gt;
  
  
  What is Apache Zeppelin?
&lt;/h2&gt;

&lt;p&gt;First of all, it is worth asking: what is a notebook interface? A notebook is an interface for interactively running code; it lets you explore and visualize data. You can mix narrative, rich media, and data in a unique space.&lt;/p&gt;

&lt;p&gt;Now we can proceed with the definition of &lt;a href="https://zeppelin.apache.org/" rel="noopener noreferrer"&gt;Apache Zeppelin&lt;/a&gt;. It is a web-based notebook that enables data-driven, interactive data analytics and collaborative documents with Python, Scala, SQL, Spark, and more. You can execute code and even schedule a job (via cron) to run at regular intervals.&lt;/p&gt;

&lt;p&gt;It's easy to mix languages in the same notebook: you can write some code and then use Markdown to document it all together. You can also easily convert your notebook into a presentation style - perhaps for presenting to management or for use as a dashboard.&lt;/p&gt;

&lt;h2&gt;What does Serverless Mean?&lt;/h2&gt;

&lt;p&gt;The idea behind serverless is that you as a developer shouldn't need to care about the server infrastructure. You pay to run the code without concerns about what type of physical infrastructure is running below.&lt;/p&gt;

&lt;p&gt;There are quite a few advantages to serverless. Scalability essentially comes for free. Because you're just paying to run logic, the cloud provider can easily dedicate more hardware to run your code. Also, you pay by code execution rather than having a fixed rate. Even more, the cloud provider manages the server software and hardware. You shouldn't need to worry about that. Finally, serverless frees up developers to focus on what they're good at - coding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfdvarxe10n5fwrpfj1s.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfdvarxe10n5fwrpfj1s.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Solution Requirements&lt;/h2&gt;

&lt;p&gt;Build a serverless infrastructure to run Apache Zeppelin and persist notebook files. The solution must be publicly available and provide login and logout capability. Also, the compute platform must automatically shut down after 30 minutes of inactivity.&lt;/p&gt;

&lt;h2&gt;High-level Architecture&lt;/h2&gt;

&lt;p&gt;The diagram below shows the high-level architecture. As you can see, it is a serverless infrastructure: you operate Apache Zeppelin through a public endpoint while Elastic File System stores the notebook files. An Amazon CloudWatch custom metric counts the log lines, and an alarm shuts down the AWS Fargate container after 30 minutes of inactivity.&lt;/p&gt;

&lt;p&gt;The only missing feature in this architecture is the login and logout capability. In this case, Apache Zeppelin provides Shiro for notebook authentication. &lt;a href="https://shiro.apache.org/" rel="noopener noreferrer"&gt;Apache Shiro&lt;/a&gt; is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. &lt;a href="https://zeppelin.apache.org/docs/0.8.0/setup/security/shiro_authentication.html#4-login" rel="noopener noreferrer"&gt;Here&lt;/a&gt;, you can find a step-by-step guide about how Shiro works. This example uses the default configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lg35thqtlwl6d215d0w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lg35thqtlwl6d215d0w.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Infrastructure as Code Description&lt;/h2&gt;

&lt;p&gt;The solution uses &lt;a href="https://aws.amazon.com/serverless/sam/" rel="noopener noreferrer"&gt;AWS SAM&lt;/a&gt; with the global configuration for Lambda functions and the public API you can use to access Apache Zeppelin. The stack deployment provides the URL as an output value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon API Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon API Gateway is used as the front door to interact with the application; it exposes the URL the user can use to trigger operations and use Serverless Apache Zeppelin.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;AWSTemplateFormatVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2010-09-09'&lt;/span&gt;
&lt;span class="na"&gt;Globals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Function&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
    &lt;span class="na"&gt;MemorySize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;128&lt;/span&gt;
    &lt;span class="na"&gt;Architectures&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
&lt;span class="na"&gt;Parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ZeppelinApi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Serverless::Api&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;StageName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ServiceName&lt;/span&gt;

&lt;span class="na"&gt;Outputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;ZeppelinApi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;API&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Gateway&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;endpoint&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;URL&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Prod&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;stage&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hello&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;World&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;function"&lt;/span&gt;
     &lt;span class="na"&gt;Value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://${ZeppelinApi}.execute-api.${AWS::Region}.amazonaws.com/${ServiceName}/"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Elastic File System&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When provisioned, each Amazon ECS task hosted on AWS Fargate receives ephemeral storage for bind mounts; everything on the disk is lost after container termination. To persist notebook files, the solution uses Amazon Elastic File System; all notebooks on EFS are preserved after container termination. The Access Point configuration allows Apache Zeppelin to have write permissions on Amazon Elastic File System.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;


&lt;span class="na"&gt;AWSTemplateFormatVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2010-09-09'&lt;/span&gt;
&lt;span class="na"&gt;Globals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Function&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
    &lt;span class="na"&gt;MemorySize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;128&lt;/span&gt;
    &lt;span class="na"&gt;Architectures&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
&lt;span class="na"&gt;Parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ZeppelinApi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="na"&gt;AccessPoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;AWS::EFS::AccessPoint'&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;FileSystemId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;FileSystem&lt;/span&gt;
      &lt;span class="na"&gt;PosixUser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;Uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500"&lt;/span&gt;
        &lt;span class="na"&gt;Gid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500"&lt;/span&gt;
        &lt;span class="na"&gt;SecondaryGids&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2000"&lt;/span&gt;
      &lt;span class="na"&gt;RootDirectory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;CreationInfo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;OwnerGid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500"&lt;/span&gt;
          &lt;span class="na"&gt;OwnerUid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500"&lt;/span&gt;
          &lt;span class="na"&gt;Permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0777"&lt;/span&gt;
        &lt;span class="na"&gt;Path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/${ServiceName}"&lt;/span&gt;
  &lt;span class="na"&gt;FileSystem&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::EFS::FileSystem&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;PerformanceMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;generalPurpose&lt;/span&gt;
      &lt;span class="na"&gt;FileSystemTags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceName&lt;/span&gt;
        &lt;span class="na"&gt;Value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ServiceName&lt;/span&gt;
  &lt;span class="na"&gt;MountTarget1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Availability Zone A Configuration&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;MountTarget2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Availability Zone B Configuration&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;MountTarget3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Availability Zone C Configuration&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudWatch Custom Metric&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To provide an auto-shutdown feature, the serverless Apache Zeppelin solution uses a custom metric. AWS Fargate saves logs into an Amazon CloudWatch Log Group, and an Amazon CloudWatch Metric Filter counts the log lines. If the custom metric is zero for about 30 minutes, the alarm publishes a message to Amazon Simple Notification Service to terminate the task.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;AWSTemplateFormatVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2010-09-09'&lt;/span&gt;
&lt;span class="na"&gt;Globals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Function&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
    &lt;span class="na"&gt;MemorySize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;128&lt;/span&gt;
    &lt;span class="na"&gt;Architectures&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
&lt;span class="na"&gt;Parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ZeppelinApi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;AccessPoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;FileSystem&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;ShutdownSnsTopic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;description later in this post&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="na"&gt;ZeppelinLogGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Logs::LogGroup&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;LogGroupName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/ecs/fargate-${ServiceName}"&lt;/span&gt;
      &lt;span class="na"&gt;RetentionInDays&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;ActivityMetricFilter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Logs::MetricFilter&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;LogGroupName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ZeppelinLogGroup&lt;/span&gt;
      &lt;span class="na"&gt;FilterPattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO"&lt;/span&gt;
      &lt;span class="na"&gt;MetricTransformations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
        &lt;span class="pi"&gt;-&lt;/span&gt; 
          &lt;span class="na"&gt;MetricValue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
          &lt;span class="na"&gt;MetricNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${ServiceName}/Actions"&lt;/span&gt;
          &lt;span class="na"&gt;MetricName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ActionsCount"&lt;/span&gt;
  &lt;span class="na"&gt;ZeppelinActionsCountAlarm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::CloudWatch::Alarm&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;AlarmName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ZeppelinActionsCountAlarm&lt;/span&gt;
      &lt;span class="na"&gt;MetricName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ActionsCount&lt;/span&gt;
      &lt;span class="na"&gt;Namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${ServiceName}/Actions"&lt;/span&gt;
      &lt;span class="na"&gt;Statistic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SampleCount&lt;/span&gt;
      &lt;span class="na"&gt;Period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;300'&lt;/span&gt;
      &lt;span class="na"&gt;EvaluationPeriods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;6'&lt;/span&gt;
      &lt;span class="na"&gt;TreatMissingData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;breaching&lt;/span&gt;
      &lt;span class="na"&gt;Threshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;
      &lt;span class="na"&gt;ComparisonOperator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LessThanOrEqualToThreshold&lt;/span&gt;
      &lt;span class="na"&gt;AlarmActions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ShutdownSnsTopic&lt;/span&gt; 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;AWS Fargate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here are the AWS Fargate cluster and task definition. The solution uses Apache Shiro to enable login and logout. As the Apache Zeppelin documentation suggests, the shiro.ini file is created by copying the bundled template with a cp command; you can see this in the EntryPoint property of the container definition below.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="na"&gt;AWSTemplateFormatVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2010-09-09'&lt;/span&gt;&lt;br&gt;
&lt;span class="na"&gt;Globals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
  &lt;span class="na"&gt;Function&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
    &lt;span class="na"&gt;Timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;&lt;br&gt;
    &lt;span class="na"&gt;MemorySize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;128&lt;/span&gt;&lt;br&gt;
    &lt;span class="na"&gt;Architectures&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;br&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;&lt;br&gt;
&lt;span class="na"&gt;Parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
  &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
  &lt;span class="na"&gt;ZeppelinApi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;&lt;br&gt;
  &lt;span class="na"&gt;AccessPoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;&lt;br&gt;
  &lt;span class="na"&gt;FileSystem&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
    &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;&lt;br&gt;
  &lt;span class="na"&gt;ZeppelinLogGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class="na"&gt;Cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ECS::Cluster&lt;/span&gt;&lt;br&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;ClusterName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Join&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="nv"&gt;ServiceName&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;Cluster&lt;/span&gt;&lt;span class="pi"&gt;]]&lt;/span&gt;&lt;br&gt;
  &lt;span class="na"&gt;ZeppelinTaskDefinition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ECS::TaskDefinition&lt;/span&gt;&lt;br&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;RequiresCompatibilities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FARGATE"&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;Cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ContainerCPU&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;Memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;MemoryHardLimit&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;NetworkMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;awsvpc"&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;TaskRoleArn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;ZeppelinTaskRole.Arn&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;ExecutionRoleArn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;ZeppelinTaskRole.Arn&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;ContainerDefinitions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ServiceName&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;Image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;apache/zeppelin:0.10.0"&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;EntryPoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;br&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;&lt;br&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;&lt;br&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;&lt;br&gt;
              &lt;span class="s"&gt;cp conf/shiro.ini.template conf/shiro.ini &lt;/span&gt;&lt;br&gt;
              &lt;span class="s"&gt;/usr/bin/tini -- bin/zeppelin.sh&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;Command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;done!"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;MemoryReservation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;MemorySoftLimit&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;Memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;MemoryHardLimit&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;PortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ContainerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ContainerPort&lt;/span&gt;&lt;br&gt;
              &lt;span class="na"&gt;Protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;&lt;br&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ContainerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4040&lt;/span&gt;&lt;br&gt;
              &lt;span class="na"&gt;Protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;LogConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
            &lt;span class="na"&gt;LogDriver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;awslogs&lt;/span&gt;&lt;br&gt;
            &lt;span class="na"&gt;Options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
              &lt;span class="na"&gt;awslogs-group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ZeppelinLogGroup&lt;/span&gt;&lt;br&gt;
              &lt;span class="na"&gt;awslogs-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;AWS::Region&lt;/span&gt;&lt;br&gt;
              &lt;span class="na"&gt;awslogs-stream-prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ecs-${ServiceName}-awsvpc'&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;MountPoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ContainerPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ZeppelinPersistNotebookPath&lt;/span&gt;&lt;br&gt;
              &lt;span class="na"&gt;SourceVolume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${ServiceName}"&lt;/span&gt;&lt;br&gt;
              &lt;span class="na"&gt;ReadOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;br&gt;
      &lt;span class="na"&gt;Volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${ServiceName}"&lt;/span&gt;&lt;br&gt;
          &lt;span class="na"&gt;EFSVolumeConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
            &lt;span class="na"&gt;AuthorizationConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;br&gt;
              &lt;span class="na"&gt;IAM&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENABLED&lt;/span&gt;&lt;br&gt;
              &lt;span class="na"&gt;AccessPointId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;AccessPoint&lt;/span&gt;&lt;br&gt;
            &lt;span class="na"&gt;FilesystemId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;FileSystem&lt;/span&gt;&lt;br&gt;
            &lt;span class="na"&gt;TransitEncryption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENABLED&lt;/span&gt;   &lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  AWS Lambda | Workflow
&lt;/h2&gt;

&lt;p&gt;Below is the high-level workflow showing how the implementation works: how the task is created and how it is shut down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fughbdd54pxlcm2bxv0sd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fughbdd54pxlcm2bxv0sd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start Serverless Apache Zeppelin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, the function checks whether the Apache Zeppelin container is running.&lt;/p&gt;

&lt;p&gt;If it is, AWS Lambda returns a 302 redirect to the Apache Zeppelin public IP. If not, AWS Lambda moves to the next step and checks whether the Apache Zeppelin container exists.&lt;/p&gt;

&lt;p&gt;If it does, AWS Lambda returns static web content: a loading page that auto-refreshes every 20 seconds. If it does not, AWS Lambda starts a new Apache Zeppelin container and then returns the loading page. Every 20 seconds the client checks the provisioning status and gets the notebook interface once the container is running; otherwise, it gets the loading page again. Once the notebook interface is up, you must provide your user credentials to use Apache Zeppelin.&lt;/p&gt;
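&lt;p&gt;To make that three-way check concrete, here is a minimal Python sketch of such a handler. It is not the actual implementation: the cluster name, task definition name, subnet ID, and port 8080 (Zeppelin's default) are illustrative assumptions.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

ecs = boto3.client("ecs")
ec2 = boto3.client("ec2")

CLUSTER = "ZeppelinCluster"              # hypothetical name
TASK_DEFINITION = "ZeppelinTask"         # hypothetical name
SUBNETS = ["subnet-0123456789abcdef0"]   # placeholder subnet ID
LOADING_PAGE = (
    "&lt;html&gt;&lt;head&gt;&lt;meta http-equiv='refresh' content='20'&gt;&lt;/head&gt;"
    "&lt;body&gt;Starting Apache Zeppelin...&lt;/body&gt;&lt;/html&gt;"
)

def public_ip(task):
    """Resolve the public IP of an awsvpc task through its ENI."""
    details = task["attachments"][0]["details"]
    eni_id = next(d["value"] for d in details if d["name"] == "networkInterfaceId")
    eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])
    return eni["NetworkInterfaces"][0]["Association"]["PublicIp"]

def handler(event, context):
    arns = ecs.list_tasks(cluster=CLUSTER, desiredStatus="RUNNING")["taskArns"]
    if arns:
        tasks = ecs.describe_tasks(cluster=CLUSTER, tasks=arns)["tasks"]
        running = [t for t in tasks if t["lastStatus"] == "RUNNING"]
        if running:
            # Container is running: redirect to the notebook interface.
            return {"statusCode": 302,
                    "headers": {"Location": f"http://{public_ip(running[0])}:8080"}}
        # Container exists but is still provisioning: fall through to the loading page.
    else:
        # No container yet: start one on Fargate, then serve the loading page.
        ecs.run_task(
            cluster=CLUSTER, taskDefinition=TASK_DEFINITION, launchType="FARGATE",
            networkConfiguration={"awsvpcConfiguration": {
                "subnets": SUBNETS, "assignPublicIp": "ENABLED"}})
    return {"statusCode": 200,
            "headers": {"Content-Type": "text/html"},
            "body": LOADING_PAGE}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;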

&lt;p&gt;&lt;strong&gt;Shutdown Serverless Apache Zeppelin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the custom metric stays at or below the threshold for 30 minutes (six 5-minute evaluation periods), the alarm publishes a message to Amazon Simple Notification Service, and an AWS Lambda function stops the Apache Zeppelin task. The Amazon SNS topic is the trigger for that function.&lt;/p&gt;
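&lt;p&gt;A minimal sketch of the shutdown function might look like this, assuming the same hypothetical cluster name as in the start sketch above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

ecs = boto3.client("ecs")
CLUSTER = "ZeppelinCluster"  # hypothetical name, as above

def handler(event, context):
    # Invoked through the SNS topic targeted by the CloudWatch alarm,
    # which fires only after 30 minutes (six 5-minute periods) without
    # activity in the Zeppelin logs.
    for arn in ecs.list_tasks(cluster=CLUSTER)["taskArns"]:
        ecs.stop_task(cluster=CLUSTER, task=arn, reason="Idle for 30 minutes")
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;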

&lt;h2&gt;
  
  
  Usage Suggestions &amp;amp; Improvements
&lt;/h2&gt;

&lt;p&gt;Apache Zeppelin supports Amazon S3 for persisting notebook files. As stated &lt;a href="https://www.imperva.com/blog/install-apache-zeppelin-and-connect-it-to-aws-athena-for-data-exploration-visualization-and-collaboration/" rel="noopener noreferrer"&gt;here&lt;/a&gt;, you can use ZEPPELIN_NOTEBOOK_STORAGE, ZEPPELIN_NOTEBOOK_S3_BUCKET, and ZEPPELIN_NOTEBOOK_S3_USER as environment variables.&lt;/p&gt;

&lt;p&gt;On the other hand, Amazon Elastic File System offers a very generic solution that can serve many purposes; the only limit is your imagination. Since Amazon EFS is a file system, you don't have to adapt your application to Amazon S3 object storage. You can simply package your application in a Docker container and run it on AWS Fargate in place of Apache Zeppelin.&lt;/p&gt;

&lt;p&gt;For example, you can run Serverless Visual Studio Code; check the container &lt;a href="https://blog.ruanbekker.com/blog/2019/09/14/running-vs-code-in-your-browser-with-docker/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another improvement related to Serverless Apache Zeppelin on AWS is configuring Amazon DynamoDB as an external database for Shiro users.&lt;/p&gt;

&lt;p&gt;What will be your next application to deploy as Serverless?&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>tutorial</category>
      <category>aws</category>
      <category>bigdata</category>
    </item>
    <item>
      <title>Innovative announcements from Amazon Bedrock at re:Invent: A turning point in the AI industry</title>
      <dc:creator>Gianluigi Mucciolo</dc:creator>
      <pubDate>Sat, 02 Dec 2023 00:00:48 +0000</pubDate>
      <link>https://forem.com/aws-builders/innovative-announcements-from-amazon-bedrock-at-reinvent-a-turning-point-in-the-ai-industry-leo</link>
      <guid>https://forem.com/aws-builders/innovative-announcements-from-amazon-bedrock-at-reinvent-a-turning-point-in-the-ai-industry-leo</guid>
      <description>&lt;p&gt;In 2023, there have been numerous developments in the field of artificial intelligence, characterized by the introduction of new libraries, tools, models and benchmark architectures. Among these, the recent announcements by Amazon re:Invent represent a significant turning point, bringing clarity and structure to a previously chaotic field. This development makes it possible to focus on the "important things": It's not just about knowing the tools and models, but also about creating a solid data foundation for further developments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y6ms1gt2qajsftc1v0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y6ms1gt2qajsftc1v0d.png" alt="A modern, high-tech office space, featuring a large central holographic display with an interconnected network of nodes and data streams, symbolizing a well-structured and efficient AI system. The office includes futuristic interfaces for interaction, spacious workstations, and is adorned with green plants, creating an environment that's both technologically advanced and welcoming." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Bedrock and its development:
&lt;/h2&gt;

&lt;p&gt;In the landscape of artificial intelligence, the recent developments of &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; represent a qualitative leap that brings clarity and organization to a field that was previously considered disorganized. At the heart of this innovation is the fully managed &lt;strong&gt;Retrieval Augmented Generation (RAG) solution with knowledge bases&lt;/strong&gt;, which emphasizes the importance of data over algorithms and shifts the focus from simply knowing about tools and models to building robust data foundations. This approach puts an end to tinkering in the field of AI and allows a focus on more important aspects: &lt;strong&gt;Data!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The latest additions to Amazon Bedrock, such as personalization, agents, and guardrails, extend this focus even further. &lt;strong&gt;Personalization&lt;/strong&gt; allows AI models to be tailored to a company's unique style, and model evaluation helps you choose the best model for your needs, while &lt;strong&gt;Agents&lt;/strong&gt; simplify business tasks such as order management or customer support. &lt;strong&gt;Guardrails&lt;/strong&gt;, on the other hand, standardize security controls and ensure a consistent and secure user experience. These features integrate and extend RAG, contributing to a robust and versatile AI ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Q and integration with VSCode:
&lt;/h2&gt;

&lt;p&gt;The launch of &lt;strong&gt;Amazon Q&lt;/strong&gt; represents a significant advancement in the field of artificial intelligence, particularly in the context of the workplace. This generative AI assistant can be customized to specific business needs and, thanks to its ability to connect to external knowledge bases, can be effectively integrated into existing information systems and, in the future, into any AWS service. In line with industry trends, where companies like Microsoft have already integrated similar tools into their operating systems and applications, Amazon Q represents another step towards effective synergy between data and humans.&lt;/p&gt;

&lt;p&gt;Moreover, its integration with &lt;strong&gt;VSCode&lt;/strong&gt; opens up new perspectives, especially for managing AWS-related tasks directly from the IDE. One notable difference is the prompt length limit: 1,000 characters in the console versus 4,000 characters in the IDE, which allows for greater flexibility. Currently, Amazon Q cannot access the text you are working on; I hope this limitation will be overcome in the future to achieve greater synergy with CodeWhisperer. A dialog with Amazon Q to refine the code generated by CodeWhisperer could significantly improve the results, facilitating problem solving and code optimization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb3qigz01ujmbsdpxq27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb3qigz01ujmbsdpxq27.png" alt="A sleek, innovative command center equipped with advanced AI systems. The room features multiple high-resolution screens displaying real-time data analytics and AI models through elegant and clear visualizations. The environment is futuristic, well-organized, and showcases cutting-edge technology and efficiency, with an ambiance that’s both professional and forward-thinking." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts:
&lt;/h2&gt;

&lt;p&gt;Recent announcements in the field of artificial intelligence show that the use of these technologies is maturing. Although some expectations are still pending, such as the introduction of a vector database in Redshift, I remain optimistic about future innovations. A vector database in Redshift would be particularly useful for projects such as &lt;a href="https://memgpt.ai/"&gt;https://memgpt.ai/&lt;/a&gt;, which proposes managing virtual contexts, taking inspiration from the hierarchical storage systems of traditional operating systems. This approach creates the illusion of large memory resources by moving data back and forth between fast and slow memory; combined with Claude 2.1's 200k context window, it could be a big leap forward. I had hoped that OpenSearch, which is currently getting a lot of attention, combined with Amazon S3 (UltraWarm storage) would fill the gap left by the lack of a vector database in Redshift, but given the current limitations in approximate k-NN functions, this is not possible. I hope these technological gaps will be closed in the future to further improve the capabilities and efficiency of AI-based systems.&lt;/p&gt;

&lt;p&gt;Images by Titan Image Generator G1.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>awsdatalake</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
