<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Samuel Kalu</title>
    <description>The latest articles on Forem by Samuel Kalu (@eskayml).</description>
    <link>https://forem.com/eskayml</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F762578%2F72ee7c41-6022-46b5-ae84-940400602d8c.jpeg</url>
      <title>Forem: Samuel Kalu</title>
      <link>https://forem.com/eskayml</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/eskayml"/>
    <language>en</language>
    <item>
      <title>Exploratory Data Analysis on the Iris Flower Dataset</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Tue, 02 Jul 2024 13:05:23 +0000</pubDate>
      <link>https://forem.com/eskayml/exploratory-data-analysis-on-the-iris-flower-dataset-184b</link>
      <guid>https://forem.com/eskayml/exploratory-data-analysis-on-the-iris-flower-dataset-184b</guid>
      <description>&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;This is my submission for stage zero of the HNG 11 internship. I am currently exploring the field of data analysis in depth, and I believe this internship gives me the opportunity to learn and grow further in the field.&lt;/p&gt;

&lt;p&gt;To know more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://hng.tech/internship" rel="noopener noreferrer"&gt;https://hng.tech/internship&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://hng.tech/hire" rel="noopener noreferrer"&gt;https://hng.tech/hire&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Observation from first glance
&lt;/h2&gt;

&lt;p&gt;At first glance, the Iris flower dataset comprises 150 samples with four features each: sepal length, sepal width, petal length, and petal width, distributed across three species (Iris-setosa, Iris-versicolor, and Iris-virginica) with 50 samples per species.&lt;/p&gt;
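
&lt;p&gt;These counts can be verified in a few lines. This is a minimal sketch, assuming scikit-learn and pandas are installed:&lt;/p&gt;

```python
# Load the Iris dataset and confirm the shape and per-species counts
# described above. Assumes scikit-learn and pandas are available.
from sklearn.datasets import load_iris
import pandas as pd

iris = load_iris(as_frame=True)
df = iris.frame  # 150 rows: four feature columns plus the numeric target
df["species"] = df["target"].map(dict(enumerate(iris.target_names)))

print(df.shape)                      # (150, 6)
print(df["species"].value_counts())  # 50 samples per species
```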

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhc2ylly5szit0d5ns38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhc2ylly5szit0d5ns38.png" alt="Image description" width="503" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3b0xv2k5wcvidz3luzf.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3b0xv2k5wcvidz3luzf.jpeg" alt="Image description" width="363" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploratory Data Analysis
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf6p51f4ujdwrnsf8rou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf6p51f4ujdwrnsf8rou.png" alt="Image description" width="800" height="735"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pairplot above summarizes how each of the four features is distributed with respect to the target variable.&lt;/p&gt;

&lt;p&gt;The pairplot of the Iris dataset provides a visual summary of the relationships between the four features (sepal length, sepal width, petal length, and petal width) for the three Iris species: setosa, versicolor, and virginica. Here are some detailed observations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Species Separation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Iris-setosa&lt;/strong&gt;: This species is distinctly separated from the other two species in almost all pairwise comparisons. The petal length and petal width features are particularly effective in distinguishing Iris-setosa, as the points representing this species form a distinct cluster in the lower left corner in the petal length vs. petal width plot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iris-versicolor and Iris-virginica&lt;/strong&gt;: These two species overlap more but show some degree of separation. The petal length and petal width features again provide good separation, with Iris-versicolor generally having smaller petal measurements compared to Iris-virginica. However, there is still some overlap between these two species in the middle range of the feature values.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Feature Distributions&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The diagonal plots show the kernel density estimates (KDE) for each feature within each species. These plots reveal that the distribution of each feature varies significantly between species. For example, Iris-setosa has a much narrower and distinct distribution for petal length and petal width compared to the other two species.&lt;/li&gt;
&lt;li&gt;Sepal length and sepal width have more overlapping distributions, especially between Iris-versicolor and Iris-virginica, making them less effective for classification on their own.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Inter-feature Relationships&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is a noticeable positive correlation between petal length and petal width across all species, particularly within Iris-versicolor and Iris-virginica.&lt;/li&gt;
&lt;li&gt;Sepal length and petal length also exhibit a positive correlation, especially for Iris-versicolor and Iris-virginica, while Iris-setosa remains distinctly separated.&lt;/li&gt;
&lt;li&gt;Sepal width shows a weaker correlation with other features compared to the petal measurements.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Within-Species Variability&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Iris-setosa shows low variability in petal measurements, which are consistently small.&lt;/li&gt;
&lt;li&gt;Both Iris-versicolor and Iris-virginica exhibit more variability in their petal measurements, with Iris-virginica generally showing the largest measurements.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Correlation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuijw2jeez8j4wsjqnvk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuijw2jeez8j4wsjqnvk.png" alt="Image description" width="745" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The correlation matrix heatmap of the Iris dataset reveals the relationships between the features. Sepal length shows a strong positive correlation with petal length (0.87) and petal width (0.82). Petal length and petal width are highly correlated (0.96), indicating that as petal length increases, petal width also tends to increase significantly. Sepal width, on the other hand, has a weak negative correlation with sepal length (-0.12) and moderate negative correlations with petal length (-0.43) and petal width (-0.37). These insights suggest that petal measurements are more strongly interrelated compared to sepal measurements, which are less correlated with each other and with petal measurements.&lt;/p&gt;
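
&lt;p&gt;The heatmap above can be reproduced with a sketch like the following (assuming pandas, seaborn, and scikit-learn are installed); the Pearson coefficients it computes round to the values quoted above:&lt;/p&gt;

```python
# Compute the pairwise Pearson correlations of the four Iris features and
# render them as an annotated heatmap. Assumes seaborn and scikit-learn.
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

df = load_iris(as_frame=True).frame.drop(columns="target")
corr = df.corr()  # 4x4 matrix of pairwise Pearson correlations

sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")
plt.tight_layout()
plt.savefig("iris_correlation.png")
```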

&lt;p&gt;Thanks so much for reading😊, Cya👋.&lt;/p&gt;

</description>
      <category>hng</category>
      <category>python</category>
      <category>dataanalysis</category>
      <category>datascience</category>
    </item>
    <item>
      <title>How to Land a Job as an AI Engineer 🤖</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Thu, 26 Oct 2023 00:26:08 +0000</pubDate>
      <link>https://forem.com/eskayml/how-to-land-a-job-as-an-ai-engineer-718</link>
      <guid>https://forem.com/eskayml/how-to-land-a-job-as-an-ai-engineer-718</guid>
      <description>&lt;p&gt;If you've set your sights on a career as an AI engineer, you're in the right place. Landing a job in this exciting field requires a combination of technical skills, experience, and a strategic approach to job hunting. In this comprehensive guide, we'll walk you through the steps to help you secure that coveted position as an AI engineer. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Master the Fundamentals 🧠
&lt;/h2&gt;

&lt;p&gt;Before you dive into the job search, it's essential to ensure you have a strong foundation in artificial intelligence and machine learning. This means understanding the basic principles, algorithms, and programming languages used in AI development. Python, TensorFlow, and PyTorch are some of the key tools you should be comfortable with. Consider taking online courses or earning a degree in computer science or AI to bolster your knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Build a Stellar Portfolio 📚
&lt;/h2&gt;

&lt;p&gt;One of the most effective ways to stand out as an AI engineer is by showcasing your skills and projects. Create a portfolio that highlights your AI-related work, including personal projects, research, or contributions to open-source initiatives. Potential employers will be impressed by tangible examples of your abilities, so make sure to provide detailed explanations of the projects you've undertaken.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Network, Network, Network 🌐
&lt;/h2&gt;

&lt;p&gt;Networking is often the key to unlocking great job opportunities in the tech industry. Attend AI conferences, seminars, and meetups. Connect with professionals on LinkedIn and participate in AI-related forums and communities. Building a strong network can help you access unadvertised job openings and gain insights into the industry.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Customize Your Resume 📃
&lt;/h2&gt;

&lt;p&gt;Your resume is your first impression on potential employers. Tailor it to the specific job you're applying for by emphasizing the skills and experiences relevant to the role. Highlight your AI projects, technical skills, and any relevant certifications or publications. Don't forget to include a concise yet compelling summary statement at the top.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Ace the Technical Interviews 💡
&lt;/h2&gt;

&lt;p&gt;AI engineer interviews often include technical assessments, so be prepared to showcase your skills. Practice coding challenges, algorithm questions, and be ready to discuss your past projects in depth. Consider joining AI-related coding platforms to hone your problem-solving abilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Stay Informed 📰
&lt;/h2&gt;

&lt;p&gt;The field of AI is ever-evolving. To be a competitive AI engineer, you must stay updated with the latest trends and breakthroughs. Follow leading AI research journals, subscribe to AI-focused newsletters, and participate in online courses to keep your knowledge fresh and relevant.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Seek Internships or Entry-Level Positions 🚀
&lt;/h2&gt;

&lt;p&gt;If you're new to the AI industry, consider starting with internships or entry-level positions to gain practical experience. These opportunities can help you build your professional network and add valuable entries to your resume.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Prepare for Behavioral Interviews 🤝
&lt;/h2&gt;

&lt;p&gt;AI engineer roles often require collaboration and problem-solving in a team setting. Be ready to answer behavioral interview questions that assess your interpersonal and communication skills. Use the STAR method (Situation, Task, Action, Result) to structure your responses.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Leverage Online Job Platforms 💻
&lt;/h2&gt;

&lt;p&gt;When you're ready to apply, use online job platforms like LinkedIn, Glassdoor, and Indeed. Customize your applications for each position, and follow up with personalized messages to express your enthusiasm for the role.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Stay Persistent and Positive 🌟
&lt;/h2&gt;

&lt;p&gt;The job search process can be challenging, but maintaining a positive attitude and staying persistent is crucial. Rejections are part of the journey, and each one brings an opportunity to learn and improve.&lt;/p&gt;

&lt;p&gt;In conclusion, securing a job as an AI engineer requires dedication, continuous learning, and a strategic approach. Master the fundamentals, build a strong portfolio, network with industry professionals, and tailor your application materials to each job. By following these steps and staying committed, you can position yourself as a top candidate and land the AI engineer job you've been dreaming of.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>tech</category>
      <category>jobhunting</category>
    </item>
    <item>
      <title>How Machine Learning Supercharges Cybersecurity</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Sun, 23 Jul 2023 18:54:59 +0000</pubDate>
      <link>https://forem.com/eskayml/how-machine-learning-supercharges-cybersecurity-1ib9</link>
      <guid>https://forem.com/eskayml/how-machine-learning-supercharges-cybersecurity-1ib9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In an era where cyber threats are evolving at lightning speed, traditional approaches to cybersecurity are struggling to keep pace. Enter machine learning, a revolutionary technology that is changing the game and supercharging cybersecurity. In this article, we will explore how machine learning is transforming the world of cybersecurity, safeguarding us against an ever-growing array of digital dangers.&lt;/p&gt;

&lt;p&gt;Here are some applications of machine learning in the field of cybersecurity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Detecting Anomalies with Unparalleled Precision
&lt;/h2&gt;

&lt;p&gt;Traditional security systems often rely on predetermined rules to detect and prevent cyber threats. While these rules serve as a basic line of defense, they struggle to keep up with the ever-changing tactics of hackers. Machine learning, on the other hand, uses sophisticated algorithms to analyze vast amounts of data, enabling it to identify patterns and anomalies that would be impossible for human operators to spot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksvrbd8lbswck0j5dfeo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksvrbd8lbswck0j5dfeo.png" alt="Anomaly" width="260" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By continuously learning from new data, machine learning models can adapt to emerging threats and detect abnormal behavior in real-time. Whether it's a stealthy malware infection or an attempted data breach, these intelligent algorithms can raise the alarm promptly, allowing security teams to respond swiftly and prevent potential disasters.&lt;/p&gt;
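
&lt;p&gt;As a toy illustration of this idea (not any particular product's method), an unsupervised model such as scikit-learn's IsolationForest can flag unusual records. The traffic numbers below are invented for the sketch:&lt;/p&gt;

```python
# Toy sketch: flagging anomalous traffic records with an unsupervised model.
# The feature values here are synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=100, scale=10, size=(500, 2))  # typical requests
attack_traffic = rng.normal(loc=300, scale=20, size=(5, 2))    # rare outliers
data = np.vstack([normal_traffic, attack_traffic])

model = IsolationForest(contamination=0.01, random_state=0).fit(data)
labels = model.predict(data)  # -1 marks points the model considers anomalous
print((labels == -1).sum())
```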

&lt;h2&gt;
  
  
  DDoS (Distributed Denial of Service) Attack Mitigation
&lt;/h2&gt;

&lt;p&gt;Machine learning plays a vital role in fortifying cybersecurity defenses, and when it comes to DDoS prevention, it becomes an invaluable ally. Imagine it as a watchful guardian, constantly scanning for potential threats. DDoS attacks, those nasty floods of traffic meant to overwhelm websites, can cripple online services, causing headaches for users and businesses alike.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ehn35drn6r1r0h3kmsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ehn35drn6r1r0h3kmsv.png" alt="Ddos" width="227" height="222"&gt;&lt;/a&gt;&lt;br&gt;
However, machine learning steps in as a superhero with its data-crunching prowess. It learns from past attack patterns, identifying anomalies and suspicious activities in real time. This helps it swiftly recognize and mitigate DDoS attacks, acting like a shield that keeps websites running smoothly and securely.&lt;/p&gt;
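
&lt;p&gt;A hypothetical sketch of the simplest version of this idea: count requests per source address in a time window and flag sources far above the typical rate. The IP addresses and counts below are made up for illustration:&lt;/p&gt;

```python
# Toy rate-based DDoS detection: flag any source IP whose request count in
# the window is at least 10x the median. All data here is invented.
from collections import Counter

window_requests = ["10.0.0.1"] * 12 + ["10.0.0.2"] * 9 + ["203.0.113.7"] * 480
counts = Counter(window_requests)

typical = sorted(counts.values())[len(counts) // 2]  # median request count
flagged = [ip for ip, n in counts.items() if n / typical >= 10]
print(flagged)  # ['203.0.113.7']
```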

&lt;h2&gt;
  
  
  Malware detection
&lt;/h2&gt;

&lt;p&gt;Malware, short for malicious software, poses a grave threat in the digital landscape. To tackle this menace effectively, machine learning emerges as a powerful ally. Machine learning equips computer systems with the ability to learn and recognize patterns from vast datasets. In malware detection, ML algorithms scrutinize code and behaviors to spot new and emerging threats, even those previously unseen. By continuously adapting and evolving, these smart systems stay one step ahead of cyber criminals, safeguarding individuals and organizations from potential harm. Through its intelligent and swift analysis, machine learning fortifies our cybersecurity defenses, making our digital world safer and more resilient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gx2tdztaychauw48v1t.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gx2tdztaychauw48v1t.jpeg" alt="Malware detection" width="294" height="171"&gt;&lt;/a&gt;&lt;/p&gt;
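
&lt;p&gt;As a toy sketch of the idea (the features and numbers below are invented, not drawn from any real malware dataset), a classifier can be trained on simple static features of files, such as byte entropy and the number of suspicious API calls:&lt;/p&gt;

```python
# Toy malware classifier on made-up static features: (entropy, count of
# suspicious API calls). Purely illustrative synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
benign = np.column_stack([rng.normal(4.5, 0.8, 200), rng.poisson(1, 200)])
malicious = np.column_stack([rng.normal(7.2, 0.5, 200), rng.poisson(8, 200)])
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(random_state=0).fit(X, y)
sample = np.array([[7.5, 9.0]])  # high entropy, many suspicious calls
print(clf.predict(sample))
```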

&lt;p&gt;Some other applications of Machine Learning which I couldn't cover here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spam detection&lt;/li&gt;
&lt;li&gt;Phishing detection&lt;/li&gt;
&lt;li&gt;User behavioral analytics&lt;/li&gt;
&lt;li&gt;Network security, and so on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;See you next time😉&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Machine Learning in Healthcare: How AI is Improving Patient Care</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Sat, 22 Jul 2023 16:18:20 +0000</pubDate>
      <link>https://forem.com/eskayml/machine-learning-in-healthcare-how-ai-is-improving-patient-care-4ml</link>
      <guid>https://forem.com/eskayml/machine-learning-in-healthcare-how-ai-is-improving-patient-care-4ml</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the fast-paced world of technology, the healthcare industry is experiencing a revolutionary transformation with the integration of Artificial Intelligence (AI) and Machine Learning (ML). These cutting-edge technologies are not just buzzwords; they are superheroes changing the face of patient care. In this article, we'll embark on a captivating journey exploring how AI-driven ML is redefining healthcare, making it more personalized, efficient, and ultimately, life-saving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Rise of AI in Healthcare&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI and ML are not just science-fiction dreams; they are tangible tools making a real impact on patient care. In healthcare, AI acts as a skilled assistant to medical professionals, augmenting their expertise with data-driven insights. Machine Learning algorithms analyze vast amounts of patient data, helping doctors make accurate diagnoses and tailor treatment plans to individual needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Empowering Early Detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine a world where diseases can be detected even before they manifest symptoms. AI-powered ML systems are making this dream a reality. By continuously monitoring patients' health data in real-time, these vigilant systems identify minute changes and red flags that may indicate an underlying health issue. Early detection means early intervention, potentially saving lives and preventing complications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y31nz43lf3p8b2ez30u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y31nz43lf3p8b2ez30u.jpg" alt="Early Detection" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Personalized Healthcare Revolution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One-size-fits-all treatment approaches are becoming a thing of the past, thanks to AI. Machine Learning algorithms can analyze a patient's genetic makeup, lifestyle habits, and medical history to create personalized treatment plans. This level of customization ensures that patients receive the most effective and appropriate care, improving outcomes and reducing unnecessary medical procedures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkkpeeyzvhzomfcx9iqk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkkpeeyzvhzomfcx9iqk.jpg" alt="personalized healthcare revolution" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhancing Medical Imaging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The field of medical imaging has witnessed a major overhaul with AI's intervention. ML algorithms excel at analyzing complex medical images, such as X-rays, MRIs, and CT scans, with remarkable accuracy. This not only expedites the diagnostic process but also helps in identifying early-stage abnormalities that might be overlooked by the human eye. AI-driven image analysis ensures no detail goes unnoticed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqogbxvh9f15ox7c15qi1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqogbxvh9f15ox7c15qi1.jpg" alt="enhancing medical imaging" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamlining Administrative Tasks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI doesn't just benefit patients; it also lends a helping hand to healthcare providers. By automating administrative tasks like appointment scheduling, billing, and record-keeping, AI streamlines operations, allowing medical professionals to focus more on patient care. This boost in efficiency means reduced waiting times, smoother workflows, and an overall enhanced patient experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fighting Pandemics with Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a health crisis strikes, such as a pandemic, AI can be a powerful ally in managing and containing the spread of diseases. ML models can process vast amounts of data from various sources, predicting disease hotspots, analyzing infection patterns, and even aiding in vaccine development. With AI on our side, we can arm ourselves with data-driven strategies to combat outbreaks effectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakr0ktfdyy7l4avnqp0a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakr0ktfdyy7l4avnqp0a.jpg" alt="fighting pandemics" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ethical Considerations and the Human Touch&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While the benefits of AI in healthcare are immense, we must also address ethical considerations. Patient data privacy and security are paramount, and transparent guidelines must be established to safeguard sensitive information. Additionally, AI should 'augment' human expertise rather than replace it. The human touch in healthcare, including empathy and intuition, remains invaluable in patient care.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Road Ahead&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As AI and ML continue to advance, the future of healthcare appears brighter than ever. With the ongoing collaboration between medical experts and tech innovators, we can expect even more groundbreaking solutions. The synergy between human knowledge and AI-driven insights will lead to better treatment options, faster diagnoses, and more accessible healthcare for all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The integration of AI and Machine Learning in healthcare is a remarkable journey towards a more patient-centric and efficient system. From empowering early detection to personalizing treatment plans, these technological marvels are revolutionizing patient care. As we embrace this new era of healthcare, we must ensure that ethical considerations and the human touch remain at the heart of this transformation. With AI as our ally, we embark on a promising path to a healthier and happier future.🚀🚀&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>healthcare</category>
      <category>tech</category>
      <category>ai</category>
    </item>
    <item>
      <title>Introduction to Open Source</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Fri, 26 May 2023 21:53:15 +0000</pubDate>
      <link>https://forem.com/eskayml/introduction-to-open-source-4gae</link>
      <guid>https://forem.com/eskayml/introduction-to-open-source-4gae</guid>
      <description>&lt;h3&gt;
  
  
  What exactly is open source?
&lt;/h3&gt;

&lt;p&gt;Open source refers to the collaborative approach of developing software where the source code is freely available, allowing anyone to view, modify, and distribute it. &lt;/p&gt;

&lt;p&gt;This fosters a community-driven ecosystem that allows anybody around the world to copy, modify, and collaborate on building software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51njdmow4xkc6w84oabb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51njdmow4xkc6w84oabb.png" alt="Image description" width="318" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It brings forth a lot of transparency in how software is built when everyone and their grandmother can actually see the source code and identify any flaws.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Experience in Open Source
&lt;/h3&gt;

&lt;p&gt;Exactly three months ago, I took it upon myself to venture on a 30-day challenge in open source. My objective: to actively participate in open source development, whether through code contributions, helpful comments, or documentation fixes. I decided to document my journey by constantly tweeting my progress.&lt;/p&gt;

&lt;h3&gt;
  
  
  Just do it
&lt;/h3&gt;

&lt;p&gt;At the start of the challenge, I had absolutely no idea what I was going to contribute to, as there was an unbelievable gap between my coding knowledge at the time and what I was actually seeing in the repositories I planned to contribute to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0j7i5avygk97q7svhgx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0j7i5avygk97q7svhgx.jpg" alt="Image description" width="720" height="1077"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It might seem counter-intuitive, but the only way to get better is by pushing past your limits and actually making those contributions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start Small
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hobquwzn9uo5x63859i.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hobquwzn9uo5x63859i.jpeg" alt="Image description" width="301" height="168"&gt;&lt;/a&gt;&lt;br&gt;
So whatever repository you plan to contribute to, I assure you, you most likely aren't going to understand the codebase the first time you look at it. Start by raising an issue (it could be a question about a bug you've experienced with the library in the past), and more experienced contributors can put you through and answer your question.&lt;/p&gt;

&lt;p&gt;You see! Just asking a question in itself already counts as an open source contribution.&lt;/p&gt;

&lt;p&gt;Most open source communities (as far as I know) are very welcoming and would not hesitate to help you out.&lt;/p&gt;

&lt;p&gt;Still on starting small: most repositories have a "good first issue" tag on their issues page, marking issues/bugs that are relatively easy to resolve, such as dependency issues, import errors, or even typos.&lt;/p&gt;

&lt;p&gt;Speaking of typos, yes, they also count as open source contributions😂.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create your own Open Source projects
&lt;/h3&gt;

&lt;p&gt;There is no better way of contributing to open source than actually writing your own project. Quick tip: it doesn't even have to be code; it could be a list of useful tools for developers, which would absolutely be appreciated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consistency
&lt;/h3&gt;

&lt;p&gt;As my journey continued, I subconsciously moved on from typo-fix contributions to actual code contributions, and it became quite interesting.&lt;/p&gt;

&lt;p&gt;I finished my 30-day challenge and ended up contributing to over 10 repositories. I also got to learn useful things about software projects, like how unit testing is done and the purpose of all those weird-looking files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Talk More
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8sfyifudbaljn0nczzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8sfyifudbaljn0nczzw.png" alt="Image description" width="719" height="644"&gt;&lt;/a&gt;&lt;br&gt;
What's the point in doing open source if you don't talk about it? Talk about it everywhere: to your friends, to your dog, in church, and also whenever you go to the club🙂.&lt;/p&gt;

&lt;p&gt;On a more serious note, the secret to enjoying open source is continuous interaction. The codebase becomes clearer and less complicated the more you interact with other people on the project. You can interact with other contributors through a Discord or IRC channel, which is available for most repositories.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Of course, I wouldn't end this write-up without giving you some resources to aid you on your journey:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.w3schools.com/git/" rel="noopener noreferrer"&gt;Learn Git Properly&lt;/a&gt; - This teaches you git , a version control system &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://up-for-grabs.net/" rel="noopener noreferrer"&gt;Up For Grabs&lt;/a&gt; - Shows you repositories that need your attention , it's quite amazing because it's mostly beginner rated issues that are quite easy to solve&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.eddiehub.org/" rel="noopener noreferrer"&gt;Eddie Hub&lt;/a&gt;- An amazing Open Source community that I'd absolutely recommend for you  to join&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading 👋&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>programming</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Introduction to BERT Language Model</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Tue, 07 Mar 2023 12:36:12 +0000</pubDate>
      <link>https://forem.com/eskayml/introduction-to-bert-language-model-mec</link>
      <guid>https://forem.com/eskayml/introduction-to-bert-language-model-mec</guid>
      <description>&lt;p&gt;BERT, an acronym for Bidirectional Encoder Representations, is a language model architecture that was created by Google in 2018. &lt;/p&gt;

&lt;p&gt;This architecture was trained on a massive dataset: approximately 2.5 billion words from all of Wikipedia and around 800 million words from the BooksCorpus.&lt;/p&gt;

&lt;p&gt;Training a model on such a large dataset would typically take a very long time, but thanks to the newly introduced Transformer architecture and the use of high-speed TPUs (Tensor Processing Units), Google was able to complete the training in just four days.&lt;/p&gt;

&lt;p&gt;As mentioned above, BERT is built on the Transformer architecture, a novel approach to NLP modelling that uses techniques like self-attention to identify the context of words.&lt;/p&gt;

&lt;p&gt;Transformers usually consist of encoder and decoder blocks, but the BERT architecture uses only encoders, stacked onto one another.&lt;/p&gt;
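
&lt;p&gt;To build intuition for the self-attention idea mentioned above, here is a minimal sketch in plain Python. The two-dimensional "embeddings" are made up purely for illustration; real BERT uses learned, high-dimensional query/key/value projections.&lt;/p&gt;

```python
import math

# Toy 2-d "embeddings", made up for illustration (real BERT learns
# high-dimensional query/key/value vectors)
words = ["the", "bank", "river"]
embeddings = {"the": [0.1, 0.2], "bank": [0.9, 0.3], "river": [0.8, 0.4]}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# How much "bank" attends to each word: dot-product scores, then softmax
scores = [dot(embeddings["bank"], embeddings[w]) for w in words]
attention = softmax(scores)
# Among the other words, "bank" attends far more to "river" than to
# "the", which is how context helps disambiguate a word's meaning
```

The attention weights sum to one, so each word's representation becomes a weighted mix of the words it attends to.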

&lt;p&gt;Google initially released two versions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;BERT Base: with 12 encoders&lt;/li&gt;
&lt;li&gt;BERT Large: with 24 encoders&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;BERT was such a hit because it could be applied to many NLP problems, like sentiment analysis, text summarization, and question answering.&lt;/p&gt;




&lt;p&gt;We'll build a simple sentiment analysis classifier using a BERT model and the &lt;em&gt;transformers&lt;/em&gt; library.&lt;/p&gt;

&lt;p&gt;You'll want to use a Jupyter notebook for this tutorial (a Google Colab environment is preferable).&lt;/p&gt;

&lt;p&gt;First, we install these two libraries&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install transformers torch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we use these few lines of code to import and load our model&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;transformers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pipeline&lt;/span&gt;
&lt;span class="n"&gt;pipe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sentiment-analysis&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# the pipeline object defaults to using a lightweight version of BERT
&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can simply check the sentiment score of a piece of text&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="nf"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;This book was amazing , great read&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;#[{'label': 'POSITIVE', 'score': 0.9998821020126343}]
&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="nf"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The pig smelled very terribly&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;#[{'label': 'NEGATIVE', 'score': 0.9984949827194214}]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;transformers&lt;/code&gt; library makes it very easy to use many large language models in Python with only a few lines of code.&lt;/p&gt;

&lt;p&gt;You can check out many other use cases of BERT on &lt;a href="https://huggingface.co/docs/transformers/model_doc/bert" rel="noopener noreferrer"&gt;HuggingFace&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for the read, See ya👋&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Introduction to Ensemble modelling</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Mon, 06 Mar 2023 13:11:54 +0000</pubDate>
      <link>https://forem.com/eskayml/introduction-to-ensemble-modelling-1k6h</link>
      <guid>https://forem.com/eskayml/introduction-to-ensemble-modelling-1k6h</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is Ensembling?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensembling is simply a way of aggregating predictions from different machine learning models with the hope of creating a much better model capable of generalizing well to new data.&lt;/p&gt;

&lt;p&gt;Ensembling provides a sense of confidence in our predictions by leveraging the collective strength of multiple models. As living beings, we understand the concept of "unity in strength", the same principle also applies to machine learning models. By combining the strengths of individual models, ensembling can improve the accuracy and robustness of predictions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faprb0opy4mq2z6jtpugi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faprb0opy4mq2z6jtpugi.jpg" alt="apes description" width="640" height="431"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Ensemble algorithms, such as random forest, XGBoost, CatBoost, and AdaBoost, utilize multiple weak learners to achieve impressive results. These weak learners are typically small decision trees with limited depth and features.&lt;/p&gt;




&lt;h3&gt;
  
  
  Types of Ensembles
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Bootstrap aggregation&lt;/strong&gt; &lt;em&gt;(e.g. random forest)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is also known as &lt;em&gt;bagging&lt;/em&gt;. With this method, the model is trained on a few "&lt;em&gt;bootstrapped&lt;/em&gt;" datasets: variants of the original training set created by sampling with replacement, so some samples are repeated and others are left out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatww6jqom7e54r83l0q8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatww6jqom7e54r83l0q8.png" alt="bagging description" width="224" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Multiple weak learners are trained on these bootstrapped datasets, and their corresponding predictions are combined by voting.&lt;/p&gt;
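
&lt;p&gt;A minimal sketch of the bagging idea in plain Python. The dataset and the "weak learner" are made up for illustration; in practice a library like scikit-learn handles this for you:&lt;/p&gt;

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is reproducible

# Made-up training set: (feature, label) pairs; the true rule is
# label = 1 whenever the feature exceeds 5
data = [(x, int(x > 5)) for x in range(10)]

def bootstrap(dataset):
    # Sample with replacement: some rows repeat, others get left out
    return [random.choice(dataset) for _ in range(len(dataset))]

def train_stump(dataset):
    # A deliberately weak learner: predict 1 when the feature exceeds
    # the bootstrapped sample's mean feature value
    threshold = sum(x for x, _ in dataset) / len(dataset)
    return lambda value: int(value > threshold)

# Train several stumps, each on its own bootstrapped dataset
stumps = [train_stump(bootstrap(data)) for _ in range(25)]

def bagged_predict(value):
    # Majority vote across the ensemble
    votes = Counter(stump(value) for stump in stumps)
    return votes.most_common(1)[0][0]

print(bagged_predict(8))  # most stumps agree -> 1
print(bagged_predict(1))  # most stumps agree -> 0
```

Each stump alone is crude, but the vote over 25 of them smooths out the noise that any single bootstrapped sample introduces.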

&lt;p&gt;&lt;strong&gt;Boosting&lt;/strong&gt; &lt;em&gt;(e.g. CatBoost, XGBoost)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This also uses weak learners, but here the weak learners improve by correcting the errors of the learners that came before them.&lt;br&gt;
To visualize this: the data points are initially assigned equal weights, and a model is trained on them. The misclassified samples are then assigned larger weights and the model is retrained; this is done multiple times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn8nfuiktz4bbrux8his.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn8nfuiktz4bbrux8his.png" alt="boosting description" width="440" height="248"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Stacking&lt;/strong&gt;&lt;br&gt;
In stacking, the predictions of several models are used as input to another model, called the meta-model. The meta-model is trained on the predictions of the base models to make the final prediction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpyft05nbkhr870whew9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpyft05nbkhr870whew9.png" alt="stacking description" width="342" height="147"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;There are a myriad of creative ways, even beyond the popular methods, in which one can ensemble two or more different models. By combining models we tend to reduce their weaknesses and amplify their strengths.&lt;/p&gt;

&lt;p&gt;Thanks for reading, Adios👋.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ensemble</category>
      <category>xgboost</category>
      <category>datascience</category>
    </item>
    <item>
      <title>INTRODUCTION TO MACHINE LEARNING, A DUMMY APPROACH</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Sat, 24 Sep 2022 10:13:26 +0000</pubDate>
      <link>https://forem.com/eskayml/introduction-to-machine-learning-a-dummy-approach-no-4bmm</link>
      <guid>https://forem.com/eskayml/introduction-to-machine-learning-a-dummy-approach-no-4bmm</guid>
      <description>&lt;p&gt;Have you ever wondered how you as a sentient human is able to identify images? ,  how you are able to tell the difference between looking a table and a chair , or a cat and a dog.&lt;/p&gt;

&lt;p&gt;What if you travelled thousands of light years into space, met aliens, and made friends with them, and one day you were trying to explain the significant physical differences between a cat and a dog; how would you do it?&lt;br&gt;
I guess you'd start by describing the whiskers on the cat, the extra teeth on the dog, their relative sizes, and probably the jaw shapes of both animals. Those characteristics are what you call "features".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2sa46f5ffm6fkycyt1z.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2sa46f5ffm6fkycyt1z.jpeg" alt="Image description" width="163" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assuming you don't have a photo like this one, you'd have to explain the major features that differentiate these animals&lt;/p&gt;

&lt;p&gt;Now, away from advanced lifeforms who could actually comprehend your explanation.&lt;br&gt;
Let's say you're trying to explain it to a computer.&lt;br&gt;
...Can you? We know computers understand stuff in zeros and ones, so how would we even be able to give instructions in the first place 😂. Well, hold up... computer programmers are people who write programs to make a computer perform a specific task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft27x6sa0m61ef8k8buvm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft27x6sa0m61ef8k8buvm.jpeg" alt="Image description" width="318" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's assume that you're a computer programmer who commands a computer system. How can you explain the pressing issue of differentiating between a cat and a dog to a computer? Even if you could, it would be a Herculean task having to hard-code all the pixels and their specific locations.&lt;br&gt;
Moreover, what if you're presented with different breeds of dogs and cats to work with, what if the locations of the cats and dogs in the image change, or, worst-case scenario, you're presented with pictures of horses and now have to re-adjust the entire program to meet those edge cases?&lt;/p&gt;

&lt;p&gt;Machine learning is a way of finding patterns in data by feeding tons and tons of data to a computer system. For the problem above, instead of writing a program to differentiate these two animals, we could give the computer thousands of images of them and, with the help of some algorithms, find suitable patterns and differentiate between the two animals at close to human-level capability&lt;/p&gt;
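
&lt;p&gt;To make that concrete, here is a tiny "nearest neighbour" sketch in plain Python. The whisker-length and body-weight numbers are made up for illustration; a real system would learn from thousands of image pixels instead:&lt;/p&gt;

```python
import math

# Hypothetical training data: (whisker_length_cm, body_weight_kg, label)
animals = [
    (5.0, 4.0, "cat"),
    (6.0, 3.5, "cat"),
    (2.0, 20.0, "dog"),
    (1.5, 30.0, "dog"),
]

def classify(whiskers, weight):
    # Pick the label of the closest known example: the pattern is
    # "learned" directly from the data, not hard-coded as rules
    def distance(sample):
        return math.hypot(sample[0] - whiskers, sample[1] - weight)
    return min(animals, key=distance)[2]

print(classify(5.5, 4.2))   # near the cat samples -> "cat"
print(classify(2.2, 25.0))  # near the dog samples -> "dog"
```

Notice that nothing in `classify` mentions cats or dogs explicitly; add horse examples to the data and the same code handles horses too.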

&lt;p&gt;Machine learning can be applied not only to our problem but to many others: it can differentiate between pieces of text, drive cars autonomously, and even, as of recently, generate images that have never existed before.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94p2wf2vqx88uf6z7fqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94p2wf2vqx88uf6z7fqp.png" alt="Image description" width="242" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above are just a few of the applications of machine learning; it possesses a lot of untapped potential and has a lot in store for the future&lt;/p&gt;

&lt;p&gt;Hey, check out this app I built to differentiate dogs from cats using neural networks , a form of machine learning.&lt;br&gt;
&lt;a href="https://github.com/eskayML/cat-and-dogs-classification" rel="noopener noreferrer"&gt;Cats-and-Dog-Classifier&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading 🙏&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>Getting started with machine learning on your Android device in 2022.</title>
      <dc:creator>Samuel Kalu</dc:creator>
      <pubDate>Sun, 02 Jan 2022 13:45:35 +0000</pubDate>
      <link>https://forem.com/eskayml/getting-started-with-machine-learning-on-your-android-device-in-2022-4clj</link>
      <guid>https://forem.com/eskayml/getting-started-with-machine-learning-on-your-android-device-in-2022-4clj</guid>
      <description>&lt;p&gt;Machine learning has been a tech buzz word for the past few years. Its simplest definition is that it basically leverages on the use of tons of data to make predictions.&lt;/p&gt;

&lt;p&gt;Well, in this article, I'm going to show you how to get started using just your mobile phone (preferably Android).&lt;/p&gt;

&lt;p&gt;Firstly, install an app called Termux from the Android Play Store&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7lfekw6gh6fi7iiznen.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7lfekw6gh6fi7iiznen.png" alt="Image description" width="480" height="960"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then open the app and run the following commands :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ apt update &amp;amp;&amp;amp; apt upgrade
$ apt install python
$ pip install jupyter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above commands will definitely take several minutes to run and complete. When you're done with that, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ jupyter notebook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A browser window then pops up.&lt;/p&gt;

&lt;p&gt;Click on the New button to create a new notebook and select Python 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo517lw9tfv1vegf51zo2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo517lw9tfv1vegf51zo2.png" alt="Image description" width="480" height="960"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congrats, you now have Jupyter Notebook up and running on your mobile.&lt;/p&gt;

&lt;p&gt;Wishing you a happy new year🥳 and a productive 2022.&lt;br&gt;
I'll definitely be creating content around machine learning and AI this year.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
