<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Cyril Bandolo</title>
    <description>The latest articles on Forem by Cyril Bandolo (@bandolocyril).</description>
    <link>https://forem.com/bandolocyril</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F827898%2F2a959c66-f610-481b-8666-33c8bb3df778.jpg</url>
      <title>Forem: Cyril Bandolo</title>
      <link>https://forem.com/bandolocyril</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bandolocyril"/>
    <language>en</language>
    <item>
      <title>Building AI-Powered Threat Detection with AWS Bedrock and Pinecone</title>
      <dc:creator>Cyril Bandolo</dc:creator>
      <pubDate>Mon, 15 Sep 2025 13:07:30 +0000</pubDate>
      <link>https://forem.com/bandolocyril/building-ai-powered-threat-detection-with-aws-bedrock-and-pinecone-3egh</link>
      <guid>https://forem.com/bandolocyril/building-ai-powered-threat-detection-with-aws-bedrock-and-pinecone-3egh</guid>
      <description>&lt;p&gt;&lt;strong&gt;How I built a production-ready threat detection system that analyzes honeypot attacks in real-time with 90/100 accuracy scores.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Demo:&lt;/strong&gt; &lt;a href="https://www.youtube.com/watch?v=ZTbJbibylAc" rel="noopener noreferrer"&gt;Watch the 5-Minute Video&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/Bandolo/threat-detect" rel="noopener noreferrer"&gt;View the GitHub Repository&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;Security teams still rely heavily on manual threat analysis, which doesn't scale, and traditional rule-based systems often miss sophisticated attacks.&lt;/p&gt;

&lt;p&gt;For these reasons, I wanted to create a system that could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Analyze threats in real-time (&amp;lt; 3 seconds)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn from historical attack patterns using AI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scale automatically with a serverless architecture&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deliver actionable intelligence, not just static alerts &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Watch the 5-minute demo&lt;/strong&gt;: &lt;a href="https://youtu.be/ZTbJbibylAc" rel="noopener noreferrer"&gt;https://youtu.be/ZTbJbibylAc&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;Here is the high-level pipeline:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Honeypot → S3 → Lambda → Bedrock (Claude) → Pinecone + DynamoDB + SNS&lt;/code&gt;&lt;/p&gt;
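To make the event-driven step concrete, here is a minimal, self-contained sketch of how the Lambda entry point might unpack the S3 trigger before handing the log body to the analysis steps; the bucket and key names are hypothetical, and the boto3 fetch plus downstream calls are only indicated in comments:

```python
import urllib.parse

def extract_s3_objects(event):
    """Unpack an S3 ObjectCreated event into (bucket, key) pairs.

    In the live pipeline the handler would then fetch each object with
    boto3 (s3.get_object) and feed the body to the parsing and analysis
    steps described below; those AWS calls are omitted here.
    """
    objects = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys arrive URL-encoded
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        objects.append((bucket, key))
    return objects

# Hypothetical event fragment of the shape S3 sends to Lambda
sample_event = {"Records": [{"s3": {
    "bucket": {"name": "honeypot-logs"},
    "object": {"key": "cowrie/2025-09-15%3A12.jsonl"}}}]}
print(extract_s3_objects(sample_event))  # [('honeypot-logs', 'cowrie/2025-09-15:12.jsonl')]
```

Because the function is pure, it can be unit-tested without any AWS credentials, which keeps the event-handling logic cheap to verify.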
&lt;h3&gt;
  
  
  Why this Design?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event-Driven Processing&lt;/strong&gt;: Lambda triggers only when new logs arrive in S3, leading to zero idle costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AI-First Analysis&lt;/strong&gt;: AWS Bedrock with Claude provides sophisticated threat analysis that understands context, not just patterns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vector Search&lt;/strong&gt;: Pinecone enables similarity checks across past attacks to detect campaigns or repeat patterns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multi-Storage Strategy&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DynamoDB for structured queries&lt;/li&gt;
&lt;li&gt;Pinecone for semantic search&lt;/li&gt;
&lt;li&gt;S3 for raw log storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
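As a sketch of how the multi-storage writes might fit together, the helper below shapes one analyzed log into a Pinecone vector record and a DynamoDB item. The actual client calls are only indicated in comments, since the index and table setup here are assumptions rather than the project's exact code:

```python
import hashlib
import json

def to_threat_record(log, embedding, score, label):
    """Shape one analyzed log for the dual-store writes:
    a metadata-rich vector for Pinecone, a flat item for DynamoDB."""
    # Deterministic id so re-processing the same log overwrites, not duplicates
    record_id = hashlib.sha256(
        json.dumps(log, sort_keys=True).encode()).hexdigest()[:32]
    vector = {"id": record_id, "values": embedding,
              "metadata": {"threat_score": score, "threat_label": label,
                           "src_ip": log.get("src_ip", "unknown")}}
    item = {"id": record_id, "threat_score": score,
            "threat_label": label, "raw": json.dumps(log)}
    # In the pipeline these would be written with (client setup omitted):
    #   index.upsert(vectors=[vector])     # Pinecone
    #   table.put_item(Item=item)          # DynamoDB
    # and similar past attacks looked up with:
    #   index.query(vector=embedding, top_k=5, include_metadata=True)
    return vector, item

vector, item = to_threat_record(
    {"src_ip": "203.0.113.7", "eventid": "cowrie.login.failed"},
    [0.1] * 1536, 87, "SSH brute force")
print(vector["metadata"]["threat_label"], item["threat_score"])
```

Sharing one deterministic id between both stores is what lets a DynamoDB query and a Pinecone similarity hit be joined back to the same attack.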
&lt;h2&gt;
  
  
  Implementation Highlights
&lt;/h2&gt;
&lt;h3&gt;
  
  
  AWS Bedrock Integration
&lt;/h3&gt;

&lt;p&gt;A structured prompt ensures consistent, parseable AI outputs, as seen below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python
def invoke_bedrock(logs):
    prompt_text = f"""Analyze this security log and provide a threat assessment.

Log data: {json.dumps(log_data, indent=2)}

Provide your response in this format:
Threat Score: [0-100]
Threat Label: [threat type]
Explanation: [brief explanation]"""
    payload = {
        "prompt": f"\\n\\nHuman: {prompt_text}\\n\\nAssistant:",
        "max_tokens_to_sample": MAX_TOKENS,
        "temperature": 0.1,
        "top_p": 0.9
    } 
    resp = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps(payload).encode(),
        contentType="application/json"
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
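Since Claude returns the assessment as free text, the Lambda still has to parse it back into fields. A small parser for the response format defined in the prompt above (the function name and the fallback defaults are my own, not from the repo):

```python
import re

def parse_assessment(completion):
    """Extract the three fields requested by the prompt; fall back to
    conservative defaults if the model drifted from the format."""
    score = re.search(r"Threat Score:\s*(\d+)", completion)
    label = re.search(r"Threat Label:\s*(.+)", completion)
    explanation = re.search(r"Explanation:\s*(.+)", completion, re.DOTALL)
    return {
        "threat_score": min(int(score.group(1)), 100) if score else 0,
        "threat_label": label.group(1).strip() if label else "unknown",
        "explanation": explanation.group(1).strip() if explanation else "",
    }

sample = """Threat Score: 90
Threat Label: SSH brute force
Explanation: Repeated failed logins from a single IP."""
print(parse_assessment(sample))
```

Capping the score at 100 and defaulting to score 0 on a parse miss keeps a malformed completion from triggering a false alert downstream.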



&lt;h3&gt;
  
  
  Handling JSONL Format
&lt;/h3&gt;

&lt;p&gt;Cowrie honeypot logs arrive in JSONL format: one JSON object per line, though in practice a line can contain several concatenated objects. I wrote a parser that walks each line with a raw decoder to handle those edge cases cleanly.&lt;/p&gt;

&lt;p&gt;Below is part of the code, and you can see the &lt;a href="https://github.com/Bandolo/threat-detect" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; for the full implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python
def parse_jsonl(body):
    logs = []
    for line in body.strip().split('\\n'):
        if line.strip():
            remaining = line.strip()
            while remaining:
                try:
                    obj, idx = json.JSONDecoder().raw_decode(remaining)
                    logs.append(obj)
                    remaining = remaining[idx:].strip()
                except json.JSONDecodeError:
                    break
    return logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Vector Embeddings
&lt;/h3&gt;

&lt;p&gt;For scalable similarity queries, I generated deterministic, hash-based 1536-dimension embeddings and stored them in Pinecone for lightning-fast lookups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python
def invoke_embedding(log):
    text = json.dumps(log, separators=(',', ':'))
    hash_obj = hashlib.sha256(text.encode())
    hash_bytes = hash_obj.digest()
    embedding = []
    for i in range(0, len(hash_bytes), 2):
        if i+1 &amp;lt; len(hash_bytes):
            val = (hash_bytes[i] * 256 + hash_bytes[i+1]) / 65535.0
            embedding.append(val)

    # Pad to 1536 dimensions for Pinecone
    while len(embedding) &amp;lt; 1536:
        embedding.extend(embedding[:min(len(embedding), 1536-len(embedding))])
    return embedding[:1536]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Production Considerations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cost Optimization
&lt;/h3&gt;

&lt;p&gt;AWS Bedrock charges per token, so I implemented cost tracking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python
def estimate_cost(input_tokens, output_tokens):
    INPUT_RATE = 0.0008   # per 1K tokens
    OUTPUT_RATE = 0.0016  # per 1K tokens
    return (input_tokens / 1000)  INPUT_RATE + (output_tokens / 1000)  OUTPUT_RATE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Monitoring &amp;amp; Alerting
&lt;/h3&gt;

&lt;p&gt;A CloudWatch dashboard tracks the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Invocation times&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Estimated cost per analysis&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Error rates&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Error Handling
&lt;/h3&gt;

&lt;p&gt;Fallback logic ensures high availability: if Bedrock is unavailable, a lightweight rules-based classifier kicks in, preventing processing delays or cost spikes.&lt;/p&gt;
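The article doesn't show the fallback itself, but a minimal sketch of such a rules-based classifier could look like this; the Cowrie event names, scores, and labels below are illustrative assumptions, not the project's actual rules:

```python
# Hypothetical fallback rules; thresholds and labels are assumptions.
FALLBACK_RULES = [
    ("cowrie.command.input", 70, "command execution"),
    ("cowrie.session.file_download", 85, "malware download"),
    ("cowrie.login.failed", 40, "brute force attempt"),
]

def fallback_classify(logs):
    """Cheap rules-based scoring used only when Bedrock is unavailable:
    return the highest-scoring rule matched by any log entry."""
    best = {"threat_score": 10, "threat_label": "unclassified",
            "explanation": "no rule matched; Bedrock unavailable"}
    for log in logs:
        for eventid, score, label in FALLBACK_RULES:
            if log.get("eventid") == eventid and score > best["threat_score"]:
                best = {"threat_score": score, "threat_label": label,
                        "explanation": f"rule match on {eventid}"}
    return best

print(fallback_classify([{"eventid": "cowrie.login.failed"},
                         {"eventid": "cowrie.session.file_download"}]))
```

Because the fallback never calls an external service, it costs nothing to run and keeps the pipeline's latency bounded during a Bedrock outage.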

&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;p&gt;After 2 weeks of development and testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Detection Accuracy: 90/100 in malware execution scenarios&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Response Time: &amp;lt;3 seconds end-to-end processing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: Serverless architecture handles traffic spikes seamlessly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost Efficiency: ~$0.01 per processed threat&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prompt Engineering is Everything&lt;/strong&gt;: Well-structured prompts drive consistent, high-quality AI outputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Handling Saves Money&lt;/strong&gt;: Lambda timeouts and Bedrock failures can be expensive. Robust error handling with fallbacks keeps costs predictable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing Early and Often&lt;/strong&gt;: Comprehensive unit and integration tests caught subtle bugs before production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor from Day One&lt;/strong&gt;: CloudWatch metrics provided visibility that helped optimize performance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;Future enhancements I'm considering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Training a custom ML model with historical attack data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Building a real-time SOC dashboard&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrating additional log sources and formats&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automated response actions, such as dynamic IP blocking&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;The complete source code is available on GitHub: &lt;a href="https://github.com/Bandolo/threat-detect" rel="noopener noreferrer"&gt;https://github.com/Bandolo/threat-detect&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Watch the demo: &lt;a href="https://youtu.be/ZTbJbibylAc" rel="noopener noreferrer"&gt;https://youtu.be/ZTbJbibylAc&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect
&lt;/h2&gt;

&lt;p&gt;Interested in AI applications in cybersecurity?&lt;/p&gt;

&lt;p&gt;Connect with me on &lt;a href="https://www.linkedin.com/in/cyrilbandolo/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; or join the &lt;a href="https://www.meetup.com/aws-london-on-user-group/" rel="noopener noreferrer"&gt;AWS User Group London, Ontario&lt;/a&gt;, to talk about GenAI, serverless, and modern threat detection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built with AWS Bedrock, Pinecone, Lambda, and a passion for solving real security challenges.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>bedrock</category>
      <category>pinecone</category>
      <category>genai</category>
    </item>
    <item>
      <title>Year 2022: Stay Hungry, Stay Foolish</title>
      <dc:creator>Cyril Bandolo</dc:creator>
      <pubDate>Sat, 31 Dec 2022 19:04:18 +0000</pubDate>
      <link>https://forem.com/bandolocyril/year-2022-stay-hungry-stay-foolish-1a7</link>
      <guid>https://forem.com/bandolocyril/year-2022-stay-hungry-stay-foolish-1a7</guid>
      <description>&lt;p&gt;When I look back at the Year 2022, I am really short of words as it proves that we should never give up on our dreams. I also understood that it is so easy to underestimate what we can do in one year. Or even how your life could be could be completely transformed in just the last 02 quarters of a year, taking you to places and making you meet with people you never thought you would happen in a decade.&lt;/p&gt;

&lt;p&gt;Being nominated as the first ever &lt;strong&gt;AWS Machine Learning Hero&lt;/strong&gt; in &lt;strong&gt;Sub-Saharan Africa&lt;/strong&gt; in the third quarter was the biggest blessing I received this year.&lt;/p&gt;

&lt;p&gt;After joining the &lt;strong&gt;AWS Community Builders program&lt;/strong&gt; for just a couple of months, I had to leave since I became a Hero. Though it appears to be the biggest achievement, this is not all I am grateful for in 2022. Words can't describe the psychological and emotional growth I experienced in 2022, and life will probably not be the same in the coming years.&lt;/p&gt;

&lt;p&gt;Compared to other countries in Africa, Cameroon is near the bottom of the list in terms of technology, yet it has been able to produce two AWS Heroes in the last two years. This confirms that we all have so much potential to offer, no matter where we come from in this world.&lt;/p&gt;

&lt;p&gt;It was a great honor joining the AWS Community Builders program earlier this year, where I met like-minded people who taught me how to learn and how to share; I am forever grateful for the brief moment I spent with everyone in the program, and I understood why most Heroes come from it.&lt;/p&gt;

&lt;p&gt;As an AWS Hero, I had the opportunity to attend re:Invent in person for the first time this year, which was also a huge, life-changing experience for me. It crowned all the achievements of this year and is still having an impact on my life as I write this post. These are the kinds of experiences that can't fully be explained with words, especially as 'what happens in Las Vegas only ever happens there, or simply stays in Las Vegas'.&lt;/p&gt;

&lt;p&gt;I traveled for more than 17 hours all the way from Africa to attend this massive event, my first time visiting the legendary and extravagantly loud Las Vegas. Then, during the keynote in which Werner Vogels, Amazon's CTO, thanked Heroes for their accomplishments, I saw my profile projected on the big screen along with 7 other Heroes, which almost made me burst into tears. Again, what a psychological victory and a boost of confidence, seeing that the little I have been doing from my small corner is recognized on the big screen by the Boss. It felt like a salary raise, right?&lt;/p&gt;

&lt;p&gt;Contributing as one of the experts in the &lt;strong&gt;PeerTalk Expert&lt;/strong&gt; program, granting one-on-one interviews to amazing people, some of whom wanted to learn about machine learning or serverless computing, and some simply how to become an AWS Hero, made it feel really good to be a part of other people's lives.&lt;/p&gt;

&lt;p&gt;Hosting the first ever &lt;strong&gt;Hero-to-Hero interviews&lt;/strong&gt; along with other exceptionally smart hosts, a series created by the great &lt;strong&gt;Mark Pergola&lt;/strong&gt;, reminded me that we all have inner geniuses waiting to be activated, and that diversity and synergy are what will take us all to the next level.&lt;/p&gt;

&lt;p&gt;Meeting one of my mentors, &lt;strong&gt;Kesha Williams&lt;/strong&gt;, for the first time this year, along with so many amazing people in person, including the legendary &lt;strong&gt;Jeff Barr&lt;/strong&gt;, is not something you can easily do in one year. And of course, I could not miss visiting the spot where &lt;strong&gt;Tupac&lt;/strong&gt;, the rapper, was shot in Las Vegas (more on this in my 'how I rocked Las Vegas' post).&lt;/p&gt;

&lt;p&gt;Now, apart from these freshly burning memories from re:Invent 2022, I am also blessed to have landed a remote &lt;strong&gt;Cloud Engineering Consultant&lt;/strong&gt; role at &lt;strong&gt;Serverless Guru&lt;/strong&gt;, with amazing people and a culture I call 'feels just like home'. Through this role I also got to speak at South America's TDC 2022 conference on the topic of '&lt;strong&gt;Serverless Machine Learning&lt;/strong&gt;'.&lt;/p&gt;

&lt;p&gt;As for the more than 50 students I have trained through the bootcamp hosted by &lt;a href="//www.analyticscfd.com"&gt;www.analyticscfd.com&lt;/a&gt; on professional skills like data analytics, data science, and machine learning, I am grateful for the success stories I receive from them, and especially for what they taught me as we went through the different programs. I really wish to see these students continue to build big things in 2023 and keep impacting the lives of others. I learned a lot more while teaching than I did while preparing for certifications.&lt;/p&gt;

&lt;p&gt;This year sums up the &lt;strong&gt;Steve Jobs&lt;/strong&gt; quote that has been playing at the back of my mind: 'stay hungry, stay foolish'. We need to keep learning and keep pushing towards our dreams.&lt;/p&gt;

&lt;p&gt;This year has not been all roses, just as life is not a bed of roses, but I am very grateful for the blessings, the opportunities, and the friends I have made. I cannot list everyone who has helped and supported me throughout this year.&lt;/p&gt;

&lt;p&gt;Please accept this post as a way of showing gratitude to you all, as I wish you a very Happy New Year 2023. I pray we all accomplish even more together next year, and in whatever you do, please &lt;strong&gt;let the light shine&lt;/strong&gt; as you share with the rest of the world.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Wish you Good Data Luck and a Happy 2023!!!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>My top 10 re:invent 2022 sessions for Data Scientists and ML Developers</title>
      <dc:creator>Cyril Bandolo</dc:creator>
      <pubDate>Sun, 13 Nov 2022 11:33:38 +0000</pubDate>
      <link>https://forem.com/bandolocyril/my-top-10-reinvent-2022-sessions-for-data-scientists-and-ml-developers-11oh</link>
      <guid>https://forem.com/bandolocyril/my-top-10-reinvent-2022-sessions-for-data-scientists-and-ml-developers-11oh</guid>
      <description>&lt;p&gt;Once more reinvent is around the corner and we are excited to attend one of the biggest cloud events, where developers, customers, engineers, etc all come together to discuss the latest developments in the AWS cloud space, network and enrich their knowledge.&lt;/p&gt;

&lt;p&gt;The event is held in Las Vegas and runs from November 28th to December 2nd this year. It is a paid event if you attend in person, but you can also register and attend online for free. Check out the registration link &lt;a href="https://reinvent.awsevents.com/register/"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt; and choose whichever of the two options is convenient for you.&lt;/p&gt;

&lt;p&gt;The good news about re:Invent is that a lot is covered and there are many sessions you can attend. The bad news is that, with so many available, it is easy to get lost. That is the purpose of this guide.&lt;/p&gt;

&lt;p&gt;I have looked through the machine learning sessions for developers and data scientists who like using notebooks to solve their ML problems, and came up with the top 10 sessions I believe are the best to attend.&lt;/p&gt;

&lt;p&gt;It is true there are other sessions for those interested in low-code machine learning or in building end-to-end ML pipelines through MLOps. Of course I love using notebooks as well as building end-to-end pipelines, but let us leave the advanced MLOps pipelines aside for now and focus on sessions for data scientists and machine learning developers who want to leverage coding with notebooks.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what then are my Top 10 sessions for re:Invent 2022?
&lt;/h2&gt;

&lt;p&gt;Below are the sessions I would advise you to attend:&lt;/p&gt;

&lt;p&gt;1.) &lt;strong&gt;AIM208 : Idea to production on Amazon SageMaker, with Thomson Reuters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the first session you should attend. It ties everything together, from serverless infrastructure to the tools required and the high-level workflow. Whether you are a beginner, a business analyst, a developer, or a data scientist, you will walk through how to build, train, and deploy ML models on AWS, and see how SageMaker covers each step of the ML lifecycle.&lt;/p&gt;

&lt;p&gt;2.) &lt;strong&gt;AIM210 : Solve common business problems with AWS AI/ML services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here you will see how companies are using machine learning and artificial intelligence (AI) across different industries. It is a good way to get inspiration and get your creative juices flowing. Some of the use cases you will learn from include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How AI/ML can be used to boost customer experience and satisfaction&lt;/li&gt;
&lt;li&gt;How it can be used to speed up decision making in an organization&lt;/li&gt;
&lt;li&gt;How it helps in cost cutting&lt;/li&gt;
&lt;li&gt;How it is used in product development, to create new products.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So at the end of the session you should be able to sell what AI/ML can do for a company.&lt;/p&gt;

&lt;p&gt;3.) &lt;strong&gt;ANT301 : Democratizing your organization’s data analytics experience&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the progress made in machine learning models and frameworks, the limiting factor for most machine learning projects is now the data. Looking at data and analytics is more important now than ever before, and this session helps you do just that.&lt;/p&gt;

&lt;p&gt;You will learn how to leverage the available analytics services to gain better and faster insights from your data, how to democratize your data, and how to use the most optimized services for data preparation, reducing the data challenges machine learning models currently face.&lt;/p&gt;

&lt;p&gt;4.) &lt;strong&gt;BOA322 : Build and deploy a live, ML-powered music genre classifier&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The majority of machine learning problems are classification problems, so attending a session on classification is a good idea. Also, SageMaker Serverless Inference, launched last year, is a cost-effective way to deploy ML applications with highly volatile traffic. You will use live music to learn how to deploy a classification model with SageMaker Serverless Inference.&lt;/p&gt;

&lt;p&gt;5.) &lt;strong&gt;AIM302 : Deploy ML models for inference at high performance &amp;amp; low cost, feat. AT&amp;amp;T&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this session you will learn about the wide range of options SageMaker gives you to deploy your models, and how to choose the most optimal inference option.&lt;/p&gt;

&lt;p&gt;SageMaker's inference options depend on the nature of your workload: real-time, serverless, asynchronous, or batch inference.&lt;/p&gt;

&lt;p&gt;These can also be split into single-model, multi-model, and multi-container endpoints.&lt;/p&gt;

&lt;p&gt;You will also learn how AT&amp;amp;T used SageMaker to optimize model deployment at scale.&lt;/p&gt;

&lt;p&gt;6.) &lt;strong&gt;BOA304 : Building a product review classifier with transfer learning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deep learning is on the rise as data increases exponentially every day. Natural language processing (NLP) applications are now very common, and their popularity keeps increasing as people request solutions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text summarization&lt;/li&gt;
&lt;li&gt;Text classification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, in the NLP space, you may have noticed that Hugging Face is growing fast and delivering very accurate NLP models.&lt;/p&gt;

&lt;p&gt;So in this session you will see how to benefit from transfer learning by leveraging robust, performant Hugging Face transformer models to solve your specific text problems.&lt;/p&gt;

&lt;p&gt;7.) &lt;strong&gt;AIM343 : Minimizing the production impact of ML model updates with shadow testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most of the time, beginners think machine learning ends at deployment. In reality, deploying a model is only the halfway point: you still need to monitor and maintain it.&lt;/p&gt;

&lt;p&gt;There is usually a need to do A/B testing, release new versions of ML models, update serving containers, or modify the underlying infrastructure, and all of these can cause serious performance issues.&lt;/p&gt;

&lt;p&gt;This session will teach you how to use shadow testing to mitigate performance risks after your model has been deployed. You will see how HERE uses shadow mode to evaluate the performance of the models after deployment.&lt;/p&gt;

&lt;p&gt;8.) &lt;strong&gt;AIM320 : Boost ML development productivity with managed Jupyter notebooks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you love building models in Jupyter notebooks, Amazon offers two options for you. This session will teach you how to use the quick-start notebook templates already available in AWS.&lt;/p&gt;

&lt;p&gt;You will also learn how to launch standalone SageMaker notebook instances, which offer flexibility in how you use them for your workloads.&lt;/p&gt;

&lt;p&gt;9.) &lt;strong&gt;AIM322 : Accelerate data preparation with Amazon SageMaker Data Wrangler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SageMaker Data Wrangler helps with data preparation, focusing on normalizing data and performing feature engineering. These are the first stages of the machine learning lifecycle.&lt;/p&gt;

&lt;p&gt;They usually include data selection, data cleaning, exploratory data analysis, bias detection and visualization.&lt;/p&gt;

&lt;p&gt;You will learn how to slash the data preparation time from weeks to minutes with SageMaker Data Wrangler.&lt;/p&gt;

&lt;p&gt;10.) &lt;strong&gt;ARC313-R : Building modern data architectures on AWS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is good to stay up to date with the architectures recommended by AWS engineers, as these are usually tried and tested. We are moving from the data warehouse to more modern architectures, such as data lakes and optimized combinations of analytics services, which help surface deep, impactful insights as quickly as possible, since the value of data decreases over time.&lt;/p&gt;

&lt;p&gt;There are so many sessions, but these are the 10 I would choose for data scientists and machine learning developers who prefer using notebooks for their work.&lt;/p&gt;

&lt;p&gt;Hope it helps you get the best out of re:Invent 2022.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Wish you Good Data Luck!!!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>reinvent2022</category>
      <category>sagemaker</category>
    </item>
    <item>
      <title>Serverless career Vs Machine Learning</title>
      <dc:creator>Cyril Bandolo</dc:creator>
      <pubDate>Fri, 22 Jul 2022 10:57:00 +0000</pubDate>
      <link>https://forem.com/bandolocyril/serverless-career-vs-machine-learning-1340</link>
      <guid>https://forem.com/bandolocyril/serverless-career-vs-machine-learning-1340</guid>
      <description>&lt;p&gt;In my country, we now have 02 Technical AWS Heroes (Serverless and Machine Learning), and this is really good for the diversity ...But there is a small problem to fix.&lt;/p&gt;

&lt;p&gt;During some of our local AWS meetups, and even offline, members have been reaching out to me confused about which path to follow, especially when they hear the value propositions of both the Serverless and Machine Learning paths, which both look very attractive. They want to know which to choose.&lt;/p&gt;

&lt;p&gt;It is for this reason that I decided to write this article and discuss it openly, because I believe many people out there have these same doubts or would benefit from my opinion on these two paths.&lt;/p&gt;

&lt;p&gt;Also, I would invite other experts to share their opinion in the comments section.&lt;/p&gt;

&lt;p&gt;Now that we are all set, let us start our discussion.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;So what is the root of this confusion?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;When you look at the roots of Serverless and the movement behind it, you will see it is closely related to a &lt;strong&gt;Full Stack development career&lt;/strong&gt;. If you take the time to analyze most tutorials on Serverless, you will see languages like &lt;strong&gt;Nodejs, JavaScript, or TypeScript&lt;/strong&gt;, and the discussions mostly revolve around REST APIs, GraphQL, and Infrastructure as Code. This suggests the leaders of the movement are web developers, right?&lt;br&gt;
But is that truly what Serverless is? Do not worry; we will get to that later in this article.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Machine Learning, on the other hand, is closely related to the data careers, which is why you see most of its tutorials in &lt;strong&gt;Python&lt;/strong&gt;. The learning path here is mostly data science, which employs statistics, data analysis, and modeling.&lt;br&gt;
Is that truly what Machine Learning is? Most probably, even though it depends on the company.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, on the one hand, you need &lt;strong&gt;maths and statistics&lt;/strong&gt; and a passion for generating insights for business leaders (data science and machine learning), while on the other hand, you do not need much maths and statistics, but instead need to build highly responsive interactions in your application with the &lt;strong&gt;customer's UI/UX&lt;/strong&gt; in mind (full stack developer).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What then is my opinion about all this?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before e-commerce went mainstream and AI rose to prominence, you could completely separate these two professions of Full Stack (or web) development and Machine Learning (or data science). For those who have the luxury of working in a large company, the link between the two might even be irrelevant, because large companies require deep specialization in one sub-area of either path.&lt;/p&gt;

&lt;p&gt;But not all of us have the luxury of working in large companies; the majority of us find opportunities in smaller companies and startups. In addition, AI shows no signs of slowing down one bit, with or without e-commerce.&lt;/p&gt;

&lt;p&gt;So if we are all on the same page now, let me share my opinion, and again, other experts can always chip in with their comments.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Now the modern ecommerce example again&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let us imagine once more that you are trying to build an eCommerce website this year. If you leave it completely in the hands of core web developers, you would have a good, interactive, responsive application with great UI/UX for the customer, but you would miss out on harnessing the data customers generate every second, data that could spark disruptive ideas and move your entire business forward. A typical web developer is good with the customer experience, but leveraging data does not come easily to them. And since customers have been "spoiled" by Tech-giant offerings nowadays, their expectations from using smart apps like Amazon will not be met: no smart recommendation systems, no smart natural language processing to help with search, autocomplete, and so on. This is mainly because the system was likely never designed to accommodate these services in the first place, since it does not come naturally to core web developers.&lt;/p&gt;

&lt;p&gt;Also, we can all imagine what would happen if we left this project in the hands of core data scientists or machine learning engineers. It would be highly inefficient, right?&lt;/p&gt;

&lt;p&gt;The customer experience would hardly be satisfactory, because the data scientists would focus on generating insight for management. They normally do not have the instincts for taking care of APIs and endpoints or building interesting UI/UX; their first concern would be collecting data and doing some ETL to make it ready for modeling. We can all agree the business would not survive without pivoting to something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Wait!!! Was the issue about Full Stack vs Machine Learning?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;No. The original problem was about Serverless vs Machine Learning, not about Full Stack (web development) vs AI (or Machine Learning).&lt;/p&gt;

&lt;p&gt;That is actually the point. Even though the Serverless movement is championed by Full Stack and web developers who master Node.js, Serverless in itself is not a profession as such, but a way of thinking. It is about designing your infrastructure to be more efficient in terms of cost, time, and reliability.&lt;/p&gt;

&lt;p&gt;Serverless is where &lt;strong&gt;FaaS&lt;/strong&gt; (Function as a Service) offerings such as &lt;strong&gt;AWS Lambda&lt;/strong&gt; are at the core of the compute, in place of EC2 instances.&lt;/p&gt;

&lt;p&gt;It is where cloud-native AI services like Rekognition come first, rather than bringing your own dockerized model to ECR, and where SageMaker Serverless Inference is used to scale inference seamlessly, rather than manually configuring those endpoints.&lt;/p&gt;

&lt;p&gt;Based on the examples above, even though Serverless seems linked to full stack developers alone, we can comfortably talk of Serverless Machine Learning: doing machine learning, but with a twist of thinking Serverless in the design of the entire infrastructure. For example, first think about using cloud-native algorithms like XGBoost in SageMaker, or about integrating the APIs of AI services like Rekognition and Comprehend. Think of substituting EC2 instances with event-driven Lambda functions for model inference. Think of leveraging SageMaker Serverless Inference, with the overall objective of capturing the benefits of Serverless (such as a microservices design, reduced cost, faster time-to-market, and highly reliable systems), where developers focus more on their application and leave the dirty work to the cloud.&lt;/p&gt;
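
&lt;p&gt;As a small illustration of that twist, below is a minimal sketch of an event-driven inference function in Python. The endpoint name and payload shape here are hypothetical, and actually invoking the endpoint assumes AWS credentials and a deployed SageMaker Serverless Inference endpoint behind it.&lt;/p&gt;

```python
import json

# Hypothetical endpoint name: replace with your own
# SageMaker Serverless Inference endpoint.
ENDPOINT_NAME = "my-serverless-xgboost-endpoint"


def build_payload(features):
    """Serialize a feature vector as the CSV body XGBoost endpoints expect."""
    return ",".join(str(f) for f in features)


def lambda_handler(event, context):
    """Event-driven inference: this runs per request, instead of a
    long-running EC2 inference server."""
    import boto3  # imported lazily so the sketch loads without AWS set up

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="text/csv",
        Body=build_payload(event["features"]),
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```

&lt;p&gt;Wired to an API Gateway route or an S3 trigger, a function like this replaces an always-on inference server with compute that only runs, and only bills, per request.&lt;/p&gt;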

&lt;p&gt;It is true that Serverless is not a magic bullet for all Machine Learning workflows, but it is still good to think Serverless first, before deciding to look the other way.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;So what is the conclusion about the path to take?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Since we now know that Serverless is a way of thinking, whichever path you take could still lead you to Serverless.&lt;/p&gt;

&lt;p&gt;In the beginning, if you have to choose between the web development route and the machine learning route, there are many articles on this, but below are my high-level recommendations.&lt;/p&gt;

&lt;p&gt;If you love maths and statistics and are passionate about playing with data, start with the data science path. Otherwise, if you love building systems that interact with customers, with attractive UI/UX and responsive interactions, go for the Full Stack route. But never forget that even though Serverless lives best with Full Stack developers, it can also live comfortably with data scientists or machine learning engineers working in a cloud like AWS.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Wish you Good Data &amp;amp; Serverless Luck!!!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>machinelearning</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to become an AWS Hero</title>
      <dc:creator>Cyril Bandolo</dc:creator>
      <pubDate>Mon, 27 Jun 2022 11:50:43 +0000</pubDate>
      <link>https://forem.com/bandolocyril/how-to-become-an-aws-hero-do7</link>
      <guid>https://forem.com/bandolocyril/how-to-become-an-aws-hero-do7</guid>
      <description>&lt;p&gt;It was on a Friday evening when, all of a sudden, I started receiving many Twitter notifications on my phone within a very short interval. It was unusual to see so many people start following me within a couple of seconds or minutes. It was only when I checked that I saw the tweet from AWS announcing I had been nominated as a Machine Learning Hero, and then I saw myself on the list of new AWS Heroes for that batch: the first &lt;strong&gt;Machine Learning Hero&lt;/strong&gt; in Sub-Saharan Africa, and currently one of only about 34 in the world.&lt;/p&gt;

&lt;p&gt;What is common amongst Heroes of all categories is that everyone’s story is different, and no one can say exactly how they ended up becoming a Hero. But since many people have been asking me how to become an AWS Hero, I decided to write this article so that together we can look back at &lt;strong&gt;10 things I remember doing&lt;/strong&gt; last year, which could inspire you to find your own path to becoming a Hero, or to coach someone else to become one. Again, this article is no guarantee of landing a nomination as an AWS Hero.&lt;/p&gt;

&lt;p&gt;Nice!!! Now that we are clear on that, let us rewind to see how it all started.&lt;/p&gt;

&lt;h2&gt;
  
  
  So how did it all start a year ago?
&lt;/h2&gt;

&lt;p&gt;Before last year, I had hosted a few apps in the cloud, but never on the AWS Cloud. My first encounter with AWS was last year, during our first local AWS Meetup. For the first time I saw what benefits the AWS Cloud was bringing to the table, but what I never saw coming was that about a year later I would become a Hero. It seems the length of time really does not matter, right? Or maybe it does? But not in my case.&lt;/p&gt;

&lt;p&gt;A couple of months after that first meetup, my friend and co-organizer of the local &lt;strong&gt;AWS User Group meetup&lt;/strong&gt; was nominated as the first AWS Serverless Hero in Africa, based on his contributions to the Serverless space. It was upon this nomination that I got that extra motivation, and I started to believe that I too could do it. My passion for the field of analytics and machine learning had met a strong motivation to become a Hero in that space.&lt;/p&gt;

&lt;p&gt;If my nomination as the first-ever AWS Machine Learning Hero in Sub-Saharan Africa, from one of the most technologically nascent countries in the world, can motivate and inspire you, igniting that spark of passion that sets you on the route to becoming a Hero, then this article will have served its purpose, and mine, of inspiring you to be the best you can be, especially as I am just as ordinary as most of us. Assuming my pep talk was good, now let us see how you too can do it, based on some of the things I think I did right last year.&lt;/p&gt;

&lt;h2&gt;
  
  
  So how could you become an AWS Hero?
&lt;/h2&gt;

&lt;p&gt;Here are the 10 things I remember doing last year which might have helped me become a Hero. Again, no one knows for sure how it works, but these are the things I did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Connecting with other Heroes&lt;/strong&gt;&lt;br&gt;
Why would one want to become a Hero, and yet the first item on the list is about connecting with other Heroes? Aren’t we supposed to go straight to sharing our knowledge with the community, so we gain credit with AWS, especially as we know Heroes are not employees of AWS?&lt;/p&gt;

&lt;p&gt;It is good to connect with past Heroes on social media, if not to get some advice or mentorship, then for the mere fact that by observing them for a while you learn what they typically share and the kinds of activities they do, which most of the time are not far from what they did before becoming Heroes in the first place.&lt;/p&gt;

&lt;p&gt;The second reason for connecting with them is that they are still very influential and could submit your name to AWS if they are pleased with your work. You know what it means when a Hero says they believe you merit being the next Hero, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Commenting on and sharing other people’s work&lt;/strong&gt;&lt;br&gt;
Again, aren’t you supposed to be sharing your own work instead of other people’s? At this rate, when do you get to share your own work?&lt;/p&gt;

&lt;p&gt;I only have a single response to this question: “Karma”. What goes around comes around.&lt;/p&gt;

&lt;p&gt;If you see something good, why not share it? We are on social media, and that is what we do on social media. In doing so, you expand your network, build relationships, and gain more visibility and support the next time you share your own work. Easy, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Writing social media posts about my experience with AWS services&lt;/strong&gt;&lt;br&gt;
Surely this one is obvious, as it shows you have some knowledge of AWS and are willing to share it to help your community. How would AWS know what you know if you do not share?&lt;/p&gt;

&lt;p&gt;Personally, I think I was obsessed with this one, especially when I had set an objective to post every single day on LinkedIn, at least for a very long period. And these were not just posts of 500 words or less. There were periods, for a couple of weeks straight, when I would write posts of about 2,000 words every single day and share them on LinkedIn and even on Facebook. I constantly shared my experience and tips about data science and analytics, but nothing stopped me from sharing how the audience could stay motivated while acquiring more skills, or even how to set great goals for the New Year. I just made sure I stayed human and did not stray too far from my core topics. Overall, anything I thought my followers would love around those topics, whether related to AWS directly or indirectly, I went ahead and shared. This is important because we are not trying to spam people, but to genuinely share what we believe would benefit them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Writing end-to-end machine learning projects on my blog&lt;/strong&gt;&lt;br&gt;
I am sure you already know that there are two main segments of Heroes: Community Heroes and Technical Heroes (including Machine Learning, DevTools, Serverless, etc.). Community Heroes do not necessarily need to be very technical, but Technical Heroes do. And how better could you show that than by writing technical blog posts?&lt;/p&gt;

&lt;p&gt;I wrote end-to-end machine learning blog posts, starting from framing a common problem in my locality, then scraping real-life data about it from local sources, and going through the entire machine learning lifecycle right up to deployment on SageMaker. Lots of techniques are shared in these end-to-end projects, so the reader gets a more complete picture of what is needed to solve machine learning problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Advocating for the use of the new SageMaker Studio Lab&lt;/strong&gt;&lt;br&gt;
If you read articles about becoming an AWS Community Builder or Hero, you will see that promoting newly released AWS services earns you credit in the eyes of AWS. This is logical and understandable: every mother thinks her baby is the most beautiful. So if there is a new service, try it out, and if you love it, use it.&lt;/p&gt;

&lt;p&gt;AWS updates and releases new services at a very high rate throughout the year. Especially after re:Invent, you get a ton of new releases being announced. SageMaker is to Machine Learning what Lambda is to Serverless. So if you see something new about the core service in your domain of interest, why not test it and advocate for it to the rest of the world?&lt;/p&gt;

&lt;p&gt;I fell in love with the recently launched SageMaker Studio Lab and was even using it to train my students (which is one of the value propositions of that service in the first place). So if there is a newly launched service you love, or one you think can help you, use it and share it so others can benefit as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Participating in many AWS workshops&lt;/strong&gt;&lt;br&gt;
Why would we participate in workshops organized by AWS, instead of focusing only on organizing such workshops ourselves and always being the ones inviting others to come hear us speak?&lt;/p&gt;

&lt;p&gt;When you do, you not only get to connect with some of the organizers and speakers at these workshops and events, but you also get to see how good presentations are made, how such events are organized, and what AWS values in them.&lt;/p&gt;

&lt;p&gt;Last year I participated in many AWS workshops, including Alexa Skill training, asking relevant questions and sharing my ideas with the hosts and the audience. That helped me learn a lot, connect with the organizers, and stay top of mind with them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Speaking and sharing at the local AWS User Group meetups&lt;/strong&gt;&lt;br&gt;
AWS Meetups are usually a good avenue to gain visibility for your contributions in your locality. They are usually a place to show beginners on their cloud journey how to take the next step and advance their careers. So I gave presentations like “Introduction to SageMaker” at our local AWS User Group meetup. The good thing? Such contributions are usually highly valued by AWS, as they encourage growth within communities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Actively participating in and advocating for the Global Disaster Hackathon&lt;/strong&gt;&lt;br&gt;
If you see a machine learning competition being launched with prizes worth over $50,000, and you are working towards becoming a Machine Learning Hero, why would you not join and even encourage others to participate?&lt;/p&gt;

&lt;p&gt;Last year was interesting: after re:Invent 2021, there were many exciting announcements for machine learning. One of them was the Global Disaster Hackathon, where participants had to use machine learning to prevent or mitigate the risks and losses due to natural disasters. So I participated in this hackathon and actively advocated for others to join as well. I even created a special webinar on my YouTube channel to show my followers how to approach the hackathon challenge, along with some hacks to get a head start in the competition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Joining the Community Builders Program&lt;/strong&gt;&lt;br&gt;
Not all Heroes come from the Community Builders Program, but most of the time more than 50% do. So when you do the maths, can you see why you should apply to join the next batch of the AWS Community Builders Program?&lt;/p&gt;

&lt;p&gt;Sure, it is good. Besides possibly becoming a Hero, there is a lot you gain when you join this program. The biggest benefit is networking directly with like-minded builders. Also, since many Heroes were Community Builders first, some are still around, and you can get in touch with them very easily as a fellow builder. It is also interesting to be among the first to hear about some AWS services before they go public while in the Community Builders Program. Do not miss applying whenever the program is announced. But if you do not get in, all hope is not lost: there is still a proportion of Heroes who do not come from this program.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Launching the “Sagemaker Saturdays” weekly YouTube series&lt;/strong&gt;&lt;br&gt;
What about starting a YouTube channel to share your projects and teach some basic programming to beginners on their cloud journey?&lt;/p&gt;

&lt;p&gt;I know not everyone needs a YouTube channel to become a Hero, but have you ever thought about it or given it a shot?&lt;/p&gt;

&lt;p&gt;Though this is last on the list, it is one of the contributions I believe can have a huge influence on your eligibility. I was courageous enough to start a “Sagemaker Saturdays” live coding series, where we build end-to-end machine learning projects from scratch through to deployment on AWS SageMaker. Every weekend I would host this YouTube session for at least an hour, where together we would write the code from scratch, with me explaining why we were doing what we were doing as we built end-to-end machine learning projects. This was really heavy, and I believe it showed my dedication, because hosting a live coding event can be an uphill task, with many hours of work behind the scenes and a big risk of something going wrong midway. It is also very satisfying.&lt;/p&gt;

&lt;p&gt;Apart from the fact that I was teaching while learning and building, those are the 10 things I can remember actively doing last year. Did you notice anything underlying all of them?&lt;/p&gt;

&lt;p&gt;Let me help you out...&lt;strong&gt;It’s consistency&lt;/strong&gt;...&lt;/p&gt;

&lt;p&gt;That is why every Hero’s story appears different, but if you listen closely, you will discover that same drive and consistent effort to get knowledge across and help the community.&lt;br&gt;
So will it be easy, with no moments of discouragement?&lt;/p&gt;

&lt;p&gt;No. But what should keep you going, then?&lt;/p&gt;

&lt;p&gt;The principle of using your head first, and the heart will follow. Keep sacrificing, and when the messages of gratitude and recognition for your work come, your heart will be filled with joy and you will feel a big sense of accomplishment.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Conclusion…
&lt;/h2&gt;

&lt;p&gt;Remember, the AWS Heroes program recognizes not only your technical skills but also your passion and dedication to sharing with your community. Even though everyone’s story is different, the single most important glue in all of them is “Consistency”.&lt;/p&gt;

&lt;p&gt;So relax!!! Keep doing what you do and as you move on, do not forget this fact...&lt;/p&gt;

&lt;p&gt;You do not find the AWS Heroes program... The AWS Heroes Program will find you...&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Wish you Good Data Luck!!!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>hero</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How I passed the AWS Machine Learning Specialty Exam</title>
      <dc:creator>Cyril Bandolo</dc:creator>
      <pubDate>Tue, 07 Jun 2022 10:34:17 +0000</pubDate>
      <link>https://forem.com/bandolocyril/how-i-passed-the-aws-machine-learning-specialty-exam-4fgj</link>
      <guid>https://forem.com/bandolocyril/how-i-passed-the-aws-machine-learning-specialty-exam-4fgj</guid>
      <description>&lt;p&gt;I wrote the AWS Machine Learning Specialty Exam last week and passed. I am just one year into the AWS Cloud, and this was my first-ever AWS exam. So if you are new to AWS and considering this certification, this article will give you some tips on how to prepare for the exam.&lt;/p&gt;

&lt;p&gt;Remember, as a Specialty exam, it is not an easy exam to pass, as it tests both your machine learning and AWS knowledge. It is for this same reason (this blend of skills) that I was attracted to the certification, and why I could not wait to go straight for it without following the normally recommended exam path of first sitting the Cloud Practitioner exam, then the Solutions Architect exam, and so on, before the Specialty exams. Taking this exam, offered by AWS, is one of the best ways to show your mastery of machine learning while positioning yourself in the modern era of cloud technologies.&lt;/p&gt;

&lt;p&gt;So I am going to share my experience: how I prepared for the exam, while trying to generalize it to anyone not coming from the same context as me. We all have different contexts, so what works for me may not work for you 100%. But getting that “Passed” message after three hours of examination is what unites us all.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what was my specific context before the Exam?
&lt;/h2&gt;

&lt;p&gt;Here are four things about me that define the context in which I was preparing for the exam:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I had little knowledge about the AWS cloud. So I knew I had to spend more energy learning AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I had never sat for any AWS exam before this one. So I knew it would be an uphill battle, since everyone around me was against me going straight to the Specialty exam without first passing the Cloud Practitioner and Solutions Architect exams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I was already comfortable with machine learning. This was my main source of confidence.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And finally, what started as just preparing to crack an exam turned into falling in love with the process and with SageMaker, especially since I was sharing the knowledge with my subscribers on social media, on &lt;a href="https://www.youtube.com/watch?v=p24Oyj827Kk&amp;amp;list=PLlBN6LjOCNSiqBR7pOl62cnPNT9pLh8MR"&gt;my YouTube channel&lt;/a&gt;, and at meetups.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Alright… enough about my context. At the end of the day, we all want that “Passed” message after three hours (…while some want a score of 990/1000…). So let us look at what the exam tests and what you should expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the pre-requisites for the Exam?
&lt;/h2&gt;

&lt;p&gt;First of all, let us discuss the prerequisites; it is best to quote what AWS says about them:&lt;/p&gt;

&lt;p&gt;...AWS Certified Machine Learning – Specialty is intended for individuals who perform a development or data science role and have more than one year of experience developing, architecting, or running machine learning/deep learning workloads in the AWS Cloud. Before you take this exam, we recommend you have:&lt;/p&gt;

&lt;p&gt;At least two years of hands-on experience developing, architecting, and running ML or deep learning workloads in the AWS Cloud&lt;/p&gt;

&lt;p&gt;Ability to express the intuition behind basic ML algorithms&lt;/p&gt;

&lt;p&gt;Experience performing basic hyperparameter optimization&lt;/p&gt;

&lt;p&gt;Experience with ML and deep learning frameworks&lt;/p&gt;

&lt;p&gt;Ability to follow model training, deployment, and operational best practices...&lt;/p&gt;

&lt;p&gt;So now that you are aware of the recommended prerequisites, let us look at the structure of the exam.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the structure of the AWS Machine Learning Exam?
&lt;/h2&gt;

&lt;p&gt;Below are the different domains tested and the weight each carries in the exam.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jvJ7fcLF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4uslima88gtsh7el7fct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jvJ7fcLF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4uslima88gtsh7el7fct.png" alt="Image description" width="640" height="184"&gt;&lt;/a&gt;&lt;br&gt;
As you can see in the table above, there are four main domains. You can find the details below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;i.) Data Engineering (20%):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here the focus is on bringing (ingesting) data into AWS from multiple sources, transforming the data, and storing it.&lt;/p&gt;

&lt;p&gt;So knowledge of services like Glue, Kinesis, S3, and Spark will be tested. Remember, data is the key asset; without it, there is no machine learning.&lt;/p&gt;
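
&lt;p&gt;To make the ingestion side concrete, here is a minimal Python sketch of pushing an event into a Kinesis data stream with boto3. The stream name and event fields are hypothetical, and actually sending the record assumes AWS credentials and an existing stream.&lt;/p&gt;

```python
import json


def make_record(event_dict):
    """Serialize an event as a newline-delimited JSON record, a common
    shape for streaming ingestion into Kinesis and on to S3."""
    return (json.dumps(event_dict, sort_keys=True) + "\n").encode("utf-8")


def send_clickstream_event(event_dict, stream_name="clickstream-demo"):
    """Push one record into a Kinesis data stream (hypothetical stream
    name); requires AWS credentials to actually run."""
    import boto3  # imported lazily so the sketch loads without AWS set up

    kinesis = boto3.client("kinesis")
    return kinesis.put_record(
        StreamName=stream_name,
        Data=make_record(event_dict),
        PartitionKey=str(event_dict.get("user_id", "anonymous")),
    )
```

&lt;p&gt;From there, a Firehose delivery stream or a Glue job would typically land and transform the records into S3 for the rest of the pipeline.&lt;/p&gt;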

&lt;p&gt;&lt;strong&gt;ii.) Exploratory Data Analysis (24%)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here the focus is on data cleaning (preparation), feature engineering, feature selection, data normalization or standardization, visualization, etc.&lt;/p&gt;

&lt;p&gt;In fact, this is where AWS skills will not help you. Data analytics is what will save you here: your ability to clean and find patterns in data.&lt;/p&gt;
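
&lt;p&gt;As a small taste of what this domain covers, here is a plain-Python sketch of z-score standardization, one of the normalization techniques the exam expects you to reason about. In practice you would reach for a library like scikit-learn; the function here is just for illustration.&lt;/p&gt;

```python
import statistics


def standardize(values):
    """Z-score standardization: rescale a feature to mean 0 and
    (population) standard deviation 1, a common preprocessing step
    before training distance- or gradient-based models."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [(v - mean) / stdev for v in values]


scores = [10.0, 20.0, 30.0, 40.0]
z = standardize(scores)  # values symmetric around 0 with unit spread
```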

&lt;p&gt;&lt;strong&gt;iii.) Modelling (36%)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The biggest part of the exam is here. Which model will you use in a specific scenario? How will you perform hyperparameter tuning? How will you measure performance? And so on.&lt;/p&gt;

&lt;p&gt;To get it right, you need hands-on experience with some machine learning projects, especially using popular algorithms like XGBoost.&lt;/p&gt;
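
&lt;p&gt;On the “how will you measure performance” side, here is a minimal plain-Python sketch computing precision, recall, and F1 for a binary classifier. Exam scenarios often hinge on which of these fits the business problem, for example favoring recall when missing a positive case is costly.&lt;/p&gt;

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


# Example: 3 true positives, 1 false positive, 1 false negative
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)  # each 0.75 here
```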

&lt;p&gt;&lt;strong&gt;iv.) Machine Learning Implementations and Operations (20%)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here you are focused on choosing the right resources, permissions, and settings to benefit from running your workloads in the cloud. Your model has been deployed, and you need to monitor its performance against other variants of the model, against data drift, and so on.&lt;/p&gt;

&lt;p&gt;Because your model is now running live, you need to consider scalability, security, fault tolerance, etc.&lt;/p&gt;

&lt;p&gt;So your AWS knowledge from Solutions Architect studies is helpful here. Otherwise, learning to apply these settings to your model is what is needed: strategies to bring your model live in production, plus a bunch of security best practices.&lt;/p&gt;

&lt;p&gt;Enough about the contents… so how do we prepare for the exam?&lt;/p&gt;

&lt;h2&gt;
  
  
  So how did I prepare for the Exam?
&lt;/h2&gt;

&lt;p&gt;In my specific case, I fell in love with the process of preparing for the exam, so I probably did more than was necessary just to pass. For example, for almost every service I came across while studying, I practically deep-dived into it, watching tutorial videos or reading the technical documentation, and also practicing with it in my AWS account.&lt;/p&gt;

&lt;p&gt;Typically, we do not all need that depth to succeed, but below are the most efficient things to do that will help you pass the exam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. &lt;a href="https://www.udemy.com/course/aws-machine-learning/"&gt;AWS Certified Machine Learning Specialty on Udemy&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
This Udemy course, taught by Frank Kane and Stephane Maarek, is your number one resource when preparing for this exam. You will get lots of tips from Frank as you work through the course, which will help you pick the right answers during the exam. It will not teach you everything, since no single course can, but it will teach you more about the exam than many of the other resources combined.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. &lt;a href="https://aws.amazon.com/training/classroom/exam-readiness-aws-certified-machine-learning-specialty/?nc1=h_ls"&gt;AWS Machine Learning Exam Readiness&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
This is also very important, as you are touching base with AWS itself to see what they want you to know. Some services are well explained, while others are suggested for further reading. Even though I followed through with all the suggested further reading, you will not always need to; just read more on whatever feels uncomfortable.&lt;/p&gt;

&lt;p&gt;Also, you get a few sample questions that look like what you will see in the real exam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. &lt;a href="https://acloudguru.com/course/aws-certified-machine-learning-specialty"&gt;A Cloud Guru&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
The biggest thing about them is the labs. So if you can afford it, subscribe and complete the labs. This will give you a feel for the machine learning services and how they are best used on AWS to solve business problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. AWS re:Invent videos&lt;/strong&gt;:&lt;br&gt;
If you cannot take the A Cloud Guru practice labs, re:Invent videos have a lot of case studies you can follow along with, so you see how the services are used to solve real problems. You will also see machine learning best practices on AWS that the exam will test you on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. &lt;a href="https://tutorialsdojo.com/aws-certified-machine-learning-specialty-exam-study-path-mls-c01/"&gt;Tutorials Dojo&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
After studying and building all your notes from the previous resources, it is time to get your hands dirty by completing many exam-style practice questions.&lt;/p&gt;

&lt;p&gt;The best thing about this platform is the review mode. After each question you get instant feedback on your answer, with links to AWS documentation for further reading. This helps a lot to clear up any confusion about why your answer was wrong. From the corrections and documentation links, enrich your notes with clarifications of your doubts, such as the fact that Firehose cannot convert CSV to Parquet, whereas Glue can. You may have overlooked this in your notes, especially if you did not do enough labs.&lt;/p&gt;

&lt;p&gt;Finally, you get a score per section at the end of each practice exam, and you can use this feedback to know which sections to focus on to improve your overall score.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. &lt;a href="https://www.examtopics.com/exams/amazon/aws-certified-machine-learning-specialty/"&gt;ExamTopics&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
The first thing I have to say is: avoid these, because they are exam dumps and can be very misleading. If you have gone through all of the above and still need something to play with, then take them with a very critical mind regarding the answers.&lt;br&gt;
Many questions here have wrong or conflicting answers provided by the public, since it is community-sourced. Because you have come this far, you will most likely have enough knowledge and conviction to identify the wrong answers the majority has voted for. And when you visit the discussion around each answer, some have supporting documentation to back their choice. But you should really be careful with these questions.&lt;/p&gt;

&lt;p&gt;Do the core things above and you will stand a high chance of cracking the exam.&lt;/p&gt;

&lt;p&gt;And finally, even though the practice questions below helped me garner a few points, if I had to do it again, I would probably skip purchasing them, simply because they were not very close to the kind of questions I saw in the exam. They might need to update their questions to match the way questions are set in the real exam, which has many lengthy, context-heavy questions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.udemy.com/course-dashboard-redirect/?course_id=2674140"&gt;Abhishek Singh’s Practice Test&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.udemy.com/course/aws-machine-learning-practice-exam/learn/quiz/4713752/results?expanded=671959524#overview"&gt;Frank Kane’s Practice Tests&lt;/a&gt;&lt;br&gt;
Anyway, they helped me grab a few points, though, but if I was only going for a “Pass” and tight on budget, I could manage to bypass them.&lt;br&gt;
Now let’s go to the exam itself.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;So on the Exam Day?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The advice to get a good night’s sleep is still very valid, so you stay sharp during the exam. As for me, I even did some exercises on the morning of the exam to pump myself up, which I found beneficial. But you can skip the exercises.&lt;/p&gt;

&lt;p&gt;Before the exam, make sure you leave enough time to revise all the notes you have taken since the beginning. This is the time to take one last look at them, so they stay fresh in your mind as you go into the exam. You might remind yourself, for example, that Amazon Comprehend can also handle topic modeling and classification: things you can easily miss if you have not been using the service, and which might not stick when you read them for the first time.&lt;/p&gt;

&lt;p&gt;The exam contains 65 questions, to be answered within 180 minutes. My advice is to move quickly through all the questions, flagging the confusing or difficult ones for review. Remember that some questions are unscored “examiner questions”: trial questions for future exams that will not count toward your current exam score.&lt;/p&gt;

&lt;p&gt;With that understanding, attempt every question and do not stress if one looks out of scope, because it is probably one of those “examiner questions”.&lt;/p&gt;

&lt;p&gt;Review all your answers at least twice before ending the exam. Some answers you selected on the first pass will, on critical review, turn out to need changing.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;In Conclusion?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you just want the minimum, then focus on the &lt;a href="https://aws.amazon.com/training/classroom/exam-readiness-aws-certified-machine-learning-specialty/?nc1=h_ls"&gt;AWS Machine Learning on Udemy&lt;/a&gt; course and &lt;a href="https://aws.amazon.com/training/classroom/exam-readiness-aws-certified-machine-learning-specialty/?nc1=h_ls"&gt;AWS Exam Readiness&lt;/a&gt;, then practice with &lt;a href="https://tutorialsdojo.com/aws-certified-machine-learning-specialty-exam-study-path-mls-c01/"&gt;Tutorials Dojo&lt;/a&gt;. Do as many labs as you can to get a feel for the services, and keep updating your notes with new learnings, including things you thought you knew but had forgotten or could not recall quickly enough to answer a question.&lt;/p&gt;

&lt;p&gt;Then, on the day of the exam, make sure you are well rested. Go quickly through all the questions within about 80 minutes at most, then use the rest of the time to review every question from the beginning, critically rethinking your initial answers. If you still have time, go through the flagged questions one last time and answer them with your best guess. Then you can end the exam.&lt;/p&gt;

&lt;p&gt;Hope to see you on the other side soon.&lt;br&gt;
&lt;strong&gt;Wish you Good Data Luck!!!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
