<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Youssof Naghibi</title>
    <description>The latest articles on Forem by Youssof Naghibi (@yn_ml).</description>
    <link>https://forem.com/yn_ml</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2961686%2F68c30179-011c-435e-a06f-238e5331d25c.png</url>
      <title>Forem: Youssof Naghibi</title>
      <link>https://forem.com/yn_ml</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/yn_ml"/>
    <language>en</language>
    <item>
      <title>Machine Learning Guide (Part 1)</title>
      <dc:creator>Youssof Naghibi</dc:creator>
      <pubDate>Thu, 03 Apr 2025 00:55:21 +0000</pubDate>
      <link>https://forem.com/yn_ml/machine-learning-guide-part-1-27n9</link>
      <guid>https://forem.com/yn_ml/machine-learning-guide-part-1-27n9</guid>
      <description>&lt;h1&gt;Machine Learning Guide&lt;/h1&gt;

&lt;h2&gt;Quick Info&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Audience:&lt;/strong&gt; This guide is made for beginners with basic knowledge in Python programming.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Goal:&lt;/strong&gt; Introduction to this guide series.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Resources:&lt;/strong&gt; On my GitHub page you can download the whole guide as a PDF or find the links to all parts of this series.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PDF:&lt;/strong&gt; &lt;a href="https://github.com/ynaghibi/BlogsResources/blob/main/Machine_Learning_Blog.pdf" rel="noopener noreferrer"&gt;https://github.com/ynaghibi/BlogsResources/blob/main/Machine_Learning_Blog.pdf&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All Parts:&lt;/strong&gt; &lt;a href="https://github.com/ynaghibi/BlogsResources/blob/main/ML%20Guide%20Links" rel="noopener noreferrer"&gt;https://github.com/ynaghibi/BlogsResources/blob/main/ML%20Guide%20Links&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Last Edit:&lt;/strong&gt; 2025 April 03&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Credits:&lt;/strong&gt; This guide is inspired by chapter 2 of "Hands-On Machine Learning" by Aurélien Géron. I am in no way associated with the author. This guide does not replicate any part of the book, and the code presented here is based on publicly available source code (see Colab).&lt;/p&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;I want to use this introduction to briefly explain how to &lt;em&gt;learn&lt;/em&gt; the basics of machine learning, because it can be quite intimidating for newcomers with little background knowledge. Even without much knowledge of Python, you can learn the language on the fly by following this guide, but if you want more preparation, you should get familiar with the most basic concepts (&lt;strong&gt;variables, lists, tuples, dictionaries, functions, loops, if-else statements&lt;/strong&gt;). You will also encounter other concepts like &lt;strong&gt;lambda functions&lt;/strong&gt; or &lt;strong&gt;classes&lt;/strong&gt;, but our use cases are rather simple.&lt;/p&gt;
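&lt;p&gt;As a toy illustration of the basic constructs listed above (the variable names here are invented purely for demonstration):&lt;/p&gt;

```python
# A quick tour of the basic Python constructs used throughout this guide.
prices = [150000, 200000, 180000]          # list
house = {"rooms": 4, "garage": True}       # dictionary
location = (52.5, 13.4)                    # tuple

def mean(values):                          # function
    return sum(values) / len(values)

for p in prices:                           # loop with an if-else statement
    if p > mean(prices):
        print(p, "is above average")
    else:
        print(p, "is at or below average")

# A lambda function: the same idea as def, but written inline.
double = lambda x: 2 * x
print(double(mean(prices)))
```

If these constructs all look familiar, you are ready for the rest of the guide.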

&lt;p&gt;You will probably find that learning the Python libraries for machine learning or data science almost feels like learning a new language, anyway.&lt;/p&gt;

&lt;p&gt;For now you do not need much mathematical background except very simple &lt;strong&gt;school mathematics&lt;/strong&gt;. Of course more advanced topics require more knowledge (like basic linear algebra or probability theory), but as long as you do not intend to build your own machine learning tools, you can simply use the existing ones without knowing every mathematical or technical detail working under the hood.&lt;/p&gt;

&lt;p&gt;The best way to get a grasp on machine learning is to start with very practical books like &lt;strong&gt;"Hands-On Machine Learning"&lt;/strong&gt; by Aurélien Géron, because they explain working source code for real-world examples. The alternative would be starting from scratch with very basic books, but you may not have time to learn every detail right from the beginning.&lt;/p&gt;

&lt;p&gt;Of course, practical books can have a very steep learning curve, but if you use learning techniques like &lt;strong&gt;priming&lt;/strong&gt;, &lt;strong&gt;incubation&lt;/strong&gt;, and the &lt;strong&gt;24-hour rule&lt;/strong&gt; combined with practical coding, you can get started with machine learning within just a few days or weeks. This means that you should not try to memorize everything from the beginning, but rather skim through the working examples and revisit the details later while experimenting with parts of the code. The more you repeat this skim-then-revisit cycle, the better you will get without wasting too much time on less important details.&lt;/p&gt;

&lt;p&gt;One way to soften the steep learning curve is to start with crash courses like this one. So without further ado, let us begin.&lt;/p&gt;
&lt;h2&gt;Setup and installations&lt;/h2&gt;

&lt;p&gt;In order to run the Python code, you only need a web browser if you use Google Colab. Aurélien Géron's source code used in his book is also publicly available on Colab, even though it may not be very beginner-friendly.&lt;/p&gt;

&lt;p&gt;However, I recommend running everything locally on your computer for ease of use. We will be using Visual Studio Code, which has a lot of convenience features that are probably not available on Colab. The only downside is that the initial installation takes a bit of effort and about 10 GB of disk space in total.&lt;/p&gt;
&lt;h3&gt;Jupyter notebooks&lt;/h3&gt;

&lt;p&gt;First, install Visual Studio Code and the Jupyter extension. Jupyter allows you to run individual parts of your code in any order you like. We will refer to these code parts as &lt;strong&gt;Jupyter cells&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;#%%
&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="c1"&gt;#%%
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;#%%
&lt;/span&gt;&lt;span class="k"&gt;del&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you run a Jupyter cell, an interactive window opens in Visual Studio Code, showing outputs such as numbers, arrays, tables, or even plots.&lt;/p&gt;

&lt;p&gt;Note that the &lt;strong&gt;interactive window&lt;/strong&gt; has a restart button that resets all variables, but this does not necessarily apply to module-level attributes like &lt;code&gt;__file__&lt;/code&gt;. In that case you have to close the interactive window before you can safely run the cells from a new script file. Otherwise problems might occur where, e.g., &lt;code&gt;__file__&lt;/code&gt; is the path of a previously executed Python script instead of the current one. Even restarting Visual Studio Code itself is not a substitute for starting a new interactive window.&lt;/p&gt;
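&lt;p&gt;A minimal sanity check: printing &lt;code&gt;__file__&lt;/code&gt; in a cell shows which script file the interactive session is actually bound to.&lt;/p&gt;

```python
#%%
# Print the absolute path of the script this session believes it is running.
# If this does not match the file you are editing, close the interactive
# window and start a fresh one.
import os

print(os.path.abspath(__file__))
```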

&lt;p&gt;Jupyter cells are useful because you can debug or modify the code without re-running previously computed, time-intensive cells. Of course, you should be careful with this functionality. Sometimes it is better to restart the whole script from scratch before causing too much chaos.&lt;/p&gt;

&lt;h3&gt;Anaconda&lt;/h3&gt;

&lt;p&gt;After installing Jupyter, your Python setup also needs the core modules required for machine learning. Instead of downloading them separately, you can install &lt;strong&gt;Anaconda&lt;/strong&gt;, which is widely used for data science because it handles module dependencies well. It is also compatible with other Python-based tasks that are not related to machine learning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;#%%
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In case you ever need to install missing modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; conda-forge xgboost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Kaggle competitions&lt;/h2&gt;

&lt;p&gt;One way to practice machine learning is to participate in Kaggle competitions. We will demonstrate this with a competition for beginners: &lt;a href="https://www.kaggle.com/competitions/house-prices-advanced-regression-techniques/submissions" rel="noopener noreferrer"&gt;House Prices - Advanced Regression Techniques&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;References&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.kaggle.com/competitions/house-prices-advanced-regression-techniques/submissions" rel="noopener noreferrer"&gt;House Prices Competition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ageron/handson-ml3/blob/main/INSTALL.md" rel="noopener noreferrer"&gt;Anaconda Install Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://colab.research.google.com/github/ageron/handson-ml3/blob/main/index.ipynb#scrollTo=-KAqK1NXk8Eu" rel="noopener noreferrer"&gt;Geron's Colab Notebooks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ynaghibi/BlogsResources/blob/main/KagglDataC1.py" rel="noopener noreferrer"&gt;Complete Code Examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Aurélien Géron (2019). &lt;em&gt;Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Machine Learning Guide</title>
      <dc:creator>Youssof Naghibi</dc:creator>
      <pubDate>Tue, 25 Mar 2025 00:19:50 +0000</pubDate>
      <link>https://forem.com/yn_ml/machine-learning-guide-4fhp</link>
      <guid>https://forem.com/yn_ml/machine-learning-guide-4fhp</guid>
      <description>&lt;h1&gt;Machine Learning Guide&lt;/h1&gt;

&lt;h1&gt;Basics&lt;/h1&gt;

&lt;h2&gt;Quick Info&lt;/h2&gt;

&lt;dl&gt;
  &lt;dt&gt;Audience:&lt;/dt&gt;
  &lt;dd&gt;This guide is made for beginners with basic knowledge in Python programming.&lt;/dd&gt;
  
  &lt;dt&gt;Goal:&lt;/dt&gt;
  &lt;dd&gt;Predict house sale prices in a beginner Kaggle competition using machine learning libraries in Python.&lt;/dd&gt;
  
  &lt;dt&gt;Resources:&lt;/dt&gt;
  &lt;dd&gt;
    &lt;dl&gt;
      &lt;dt&gt;Kaggle:&lt;/dt&gt;
      &lt;dd&gt;&lt;a href="https://www.kaggle.com/competitions/house-prices-advanced-regression-techniques/submissions" rel="noopener noreferrer"&gt;House Prices - Advanced Regression Techniques&lt;/a&gt;&lt;/dd&gt;
      
      &lt;dt&gt;Python Script (Main):&lt;/dt&gt;
      &lt;dd&gt;&lt;a href="https://github.com/ynaghibi/BlogsResources/blob/main/KagglC1.py" rel="noopener noreferrer"&gt;KagglC1.py&lt;/a&gt;&lt;/dd&gt;
      
      &lt;dt&gt;Python Script (Supplement):&lt;/dt&gt;
      &lt;dd&gt;&lt;a href="https://github.com/ynaghibi/BlogsResources/blob/main/KagglDataC1.py" rel="noopener noreferrer"&gt;KagglDataC1.py&lt;/a&gt;&lt;/dd&gt;
    &lt;/dl&gt;

      &lt;/dd&gt;
  &lt;dt&gt;PDF version of this guide:&lt;/dt&gt;
  &lt;dd&gt;&lt;a href="https://github.com/ynaghibi/BlogsResources/blob/main/Machine_Learning_Blog.pdf" rel="noopener noreferrer"&gt;Machine_Learning_Blog.pdf&lt;/a&gt;&lt;/dd&gt;

  &lt;dt&gt;Last Edit:&lt;/dt&gt;
  &lt;dd&gt;2025 March 25&lt;/dd&gt;

  &lt;dt&gt;Credits:&lt;/dt&gt;
  &lt;dd&gt;This guide is inspired by chapter 2 of &lt;em&gt;Hands-On Machine Learning&lt;/em&gt; by Aurélien Géron. I am in no way associated with the author. This guide does not replicate any part of the book, and the code presented here is based on publicly available source code.&lt;/dd&gt;
&lt;/dl&gt;


&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;I want to use this introduction to briefly explain how to &lt;em&gt;learn&lt;/em&gt; the basics of machine learning, because it can be quite intimidating for newcomers with little background knowledge. Since we will require Python, you should get familiar with the most basic concepts (&lt;strong&gt;variables, lists, tuples, dictionaries, functions, loops, if-else statements&lt;/strong&gt;). You will also encounter other concepts like &lt;strong&gt;lambda functions&lt;/strong&gt; or &lt;strong&gt;classes&lt;/strong&gt;, but our use cases are rather simple. Everything else can be learned on the fly. You will probably find that learning the Python libraries for machine learning or data science almost feels like learning a new language, anyway.&lt;/p&gt;

&lt;p&gt;For now you do not need much mathematical background except very simple &lt;strong&gt;school mathematics&lt;/strong&gt;. Of course more advanced topics require more knowledge (like basic linear algebra or probability theory), but as long as you do not intend to build your own machine learning tools, you can simply use the existing ones without knowing every mathematical or technical detail working under the hood.&lt;/p&gt;

&lt;p&gt;The best way to get a grasp on machine learning is to start with very practical books like &lt;strong&gt;&lt;em&gt;Hands-On Machine Learning&lt;/em&gt;&lt;/strong&gt; by Aurélien Géron, because they explain working source code for real-world examples. The alternative would be starting from scratch with very basic books, but you may not have time to learn every detail right from the beginning.&lt;/p&gt;

&lt;p&gt;Of course, practical books can have a very steep learning curve, but if you use learning techniques like &lt;strong&gt;priming&lt;/strong&gt;, &lt;strong&gt;incubation&lt;/strong&gt;, and the &lt;strong&gt;24-hour rule&lt;/strong&gt; combined with practical coding, you can get started with machine learning within just a few days or weeks. This means that you should not try to memorize everything from the beginning, but rather skim through the working examples and revisit the details later while experimenting with parts of the code. The more you repeat this skim-then-revisit cycle, the better you will get without wasting too much time on less important details.&lt;/p&gt;

&lt;p&gt;One way to soften the steep learning curve is to start with crash courses like this one. So without further ado, let us begin.&lt;/p&gt;

&lt;h2&gt;Installation&lt;/h2&gt;

&lt;p&gt;In order to run the Python code, you only need a web browser if you use Google Colab. Incidentally, Aurélien Géron's source code used in his book is also publicly available on Colab, even though it may not be very beginner-friendly.&lt;/p&gt;

&lt;p&gt;However, I recommend running everything locally on your computer for ease of use. We will be using Visual Studio Code, which has a lot of convenience features that are probably not available on Colab. The only downside is that the initial installation takes a bit of effort and about 10 GB of disk space in total.&lt;/p&gt;

&lt;p&gt;First, install Visual Studio Code and the Jupyter extension. Jupyter allows you to run individual parts of your code in any order you like. We will refer to these code parts as &lt;strong&gt;Jupyter cells&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;All you need to do is create a Python script file with the .py extension and open it in Visual Studio Code. Cells are separated by &lt;strong&gt;#%%&lt;/strong&gt; at the beginning of a line. If you want to see how this works, you can experiment with the following simple code.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
#%%

x = 3

#%%
print(x)

#%%
del x
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 1:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;Next you need to install the Python libraries required for the basic machine learning tools. Instead of downloading them separately, you can install &lt;strong&gt;Anaconda&lt;/strong&gt;, although I will not guide you through the Anaconda installation. Instead you can follow, e.g., &lt;a href="https://github.com/ageron/handson-ml3/blob/main/INSTALL.md" rel="noopener noreferrer"&gt;https://github.com/ageron/handson-ml3/blob/main/INSTALL.md&lt;/a&gt;. On Windows you may have to change a few system PATH settings if the libraries are not detected by Visual Studio Code, or if you have different versions of Python.&lt;/p&gt;

&lt;h2&gt;Kaggle competitions&lt;/h2&gt;

&lt;p&gt;One way to practice machine learning is to participate in Kaggle competitions. We will demonstrate this with a competition for beginners: &lt;a href="https://www.kaggle.com/competitions/house-prices-advanced-regression-techniques/submissions" rel="noopener noreferrer"&gt;House Prices - Advanced Regression Techniques&lt;/a&gt;. There you can download the required data, which in this case is a rather small zip file.&lt;/p&gt;

&lt;h2&gt;How to read this guide&lt;/h2&gt;

&lt;p&gt;The source code labeled as &lt;strong&gt;Jupyter Cell&lt;/strong&gt; can be pasted into one Python file, creating a new Jupyter cell each time. Code labeled as &lt;strong&gt;Test&lt;/strong&gt; is meant to deepen the understanding of the main code, but it is not required for any subsequent cell. There is also some code labeled as &lt;strong&gt;Output&lt;/strong&gt;, which just shows you the result of one of those cells.&lt;/p&gt;

&lt;h2&gt;Machine learning with Python&lt;/h2&gt;

&lt;p&gt;We start by importing the libraries numpy, pandas and matplotlib (usually abbreviated as np, pd and plt) that will help us to explore the data.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
#%%

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 2:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;These libraries provide three extensively used types of data containers.&lt;/p&gt;

&lt;dl&gt;
  &lt;dt&gt;Numpy Arrays&lt;/dt&gt;
  &lt;dd&gt;or ndarray-typed variables are often used for numerical computations due to their superior performance. The data is stored in n-dimensional arrays: e.g., a 1D array has the shape of a vector, a 2D array the shape of a matrix, etc. All data stored in any given ndarray must have the same type, but other than that the data can have any valid type.&lt;/dd&gt;
  
  &lt;dt&gt;Pandas DataFrames&lt;/dt&gt;
  &lt;dd&gt;are always two-dimensional arrays, but in addition their rows and columns carry labels. In this sense they are very similar to spreadsheets or SQL tables. Usually the column labels describe the &lt;strong&gt;features&lt;/strong&gt; of the data, while the row labels name each &lt;strong&gt;sample&lt;/strong&gt; or instance. Unlike ndarrays, dataframes can store columns of different data types in the same dataframe variable.&lt;/dd&gt;
  
  &lt;dt&gt;Pandas Series&lt;/dt&gt;
  &lt;dd&gt;are the one-dimensional version of dataframes. This means their rows have labels, and they can contain mixed data types as well. Whenever we extract a single column from a dataframe, we obtain a pandas series.&lt;/dd&gt;
&lt;/dl&gt;

&lt;p&gt;For beginners it can be helpful to print the type of these containers, because sometimes it can be hard to distinguish them.&lt;/p&gt;
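&lt;p&gt;The following cell is a minimal sketch (with invented toy values) that builds one container of each kind and prints its type:&lt;/p&gt;

```python
#%%
import numpy as np
import pandas as pd

arr = np.array([[1.0, 2.0], [3.0, 4.0]])          # 2D ndarray
df = pd.DataFrame(arr,
                  columns=["LotArea", "SalePrice"],
                  index=["house_a", "house_b"])   # labeled rows and columns
col = df["SalePrice"]                             # one column -> a Series

print(type(arr), type(df), type(col))
```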

&lt;p&gt;Next we need to read the required data before analyzing it. There are multiple ways to do this: letting Python automatically download and extract the data, or even using SQL directly in Jupyter cells in order to handle large datasets efficiently.&lt;/p&gt;

&lt;p&gt;However, we want to keep it simple for now. This means you can download and extract the &lt;strong&gt;train.csv&lt;/strong&gt; and &lt;strong&gt;test.csv&lt;/strong&gt; files manually from the Kaggle competition, and save them in a folder of your choice. In the source code below you need to insert the name of this folder in the variable &lt;strong&gt;sLocal_Folder_Path&lt;/strong&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
#%%

from pathlib import Path
from IPython.display import display
sLocal_Folder_Path = "C:/Users/.../" #add your own folder name
housing = pd.read_csv(Path( sLocal_Folder_Path + "train.csv" ))
housing_test = pd.read_csv(Path( sLocal_Folder_Path + "test.csv" ))
display(housing)
display(housing_test)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 3:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;Here we have stored the data as &lt;strong&gt;housing&lt;/strong&gt; and &lt;strong&gt;housing_test&lt;/strong&gt; dataframe variables.&lt;/p&gt;

&lt;p&gt;After running this cell you will see the table structure of &lt;strong&gt;housing&lt;/strong&gt; and &lt;strong&gt;housing_test&lt;/strong&gt;. Notice that we imported and used the &lt;strong&gt;display&lt;/strong&gt; function in order to create two display outputs in the interactive window from only one cell. The output should show you a few example rows from each dataframe.&lt;/p&gt;

&lt;p&gt;It should also tell you that &lt;strong&gt;housing&lt;/strong&gt; has 81 columns, while &lt;strong&gt;housing_test&lt;/strong&gt; has only 80 columns. The missing column is the house sale price, which we will have to predict before submitting it on Kaggle. This means the data from &lt;strong&gt;housing&lt;/strong&gt; will be used to train our prediction models.&lt;/p&gt;
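&lt;p&gt;You can verify which column is missing with a set difference on the column labels. The cell below demonstrates the idea on two tiny stand-in dataframes (values invented for illustration); the same expression works on &lt;strong&gt;housing&lt;/strong&gt; and &lt;strong&gt;housing_test&lt;/strong&gt;:&lt;/p&gt;

```python
#%%
import pandas as pd

# Tiny stand-ins for the real training and test dataframes.
housing = pd.DataFrame({"Id": [1, 2], "LotArea": [8450, 9600],
                        "SalePrice": [208500, 181500]})
housing_test = pd.DataFrame({"Id": [3, 4], "LotArea": [11250, 9550]})

# Columns present in the training data but absent from the test data.
missing = set(housing.columns) - set(housing_test.columns)
print(missing)  # {'SalePrice'}
```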

&lt;p&gt;One way to get a grasp on large datasets is to plot the histogram of all numerical features. The numerical values are grouped together in bins that are arranged along the x-axis. The bar length along the y-axis shows how many samples occurred in each bin. The following code plots the histograms of all features next to each other.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

import matplotlib.pyplot as plt  # if not already imported in an earlier cell

housing.hist(bins=50, figsize=(30,25))
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 4:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;One important property to look for in these histograms is a so-called heavy-tailed distribution like the one on the left of the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qv6asfkc81cbj2zccae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qv6asfkc81cbj2zccae.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Figure 1:&lt;/strong&gt; Heavy-tailed features in the housing dataset (generated by deepseek)&lt;/p&gt;

&lt;p&gt;They can usually be converted to a bell-shaped normal distribution by taking the logarithm of the numerical values along the x-axis. Normal distributions are handled much better by machine learning tools than heavy-tailed distributions. For now we will not worry about transforming the data. Instead, we simply look for heavy-tailed features in the &lt;strong&gt;housing&lt;/strong&gt; data, and list them in &lt;strong&gt;heavy_tailed_features&lt;/strong&gt; in order to use them later.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

heavy_tailed_features = ["LotFrontage", "LotArea", "1stFlrSF", "TotalBsmtSF", "GrLivArea"]
housing[heavy_tailed_features].hist(bins=50, figsize=(12,8))
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 5:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;The output of the cell above shows only the heavy-tailed features, as demonstrated in the following figure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1o29h5ckmkmk03dtk9dk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1o29h5ckmkmk03dtk9dk.png" alt="Image description" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Figure 2:&lt;/strong&gt; Heavy-tailed vs. normal distribution&lt;/p&gt;
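&lt;p&gt;To illustrate the log-transform idea mentioned above (this is not part of our pipeline yet), here is a minimal sketch on synthetic data, where &lt;strong&gt;np.log1p&lt;/strong&gt; turns a heavy-tailed sample into a roughly bell-shaped one. The lognormal sample and the small &lt;strong&gt;skewness&lt;/strong&gt; helper are assumptions made purely for this demonstration.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic heavy-tailed sample, similar in spirit to LotArea
heavy = rng.lognormal(mean=9.0, sigma=0.5, size=1000)

# np.log1p computes log(1 + x), which also handles zeros safely
transformed = np.log1p(heavy)

def skewness(x):
    # simple (population) skewness estimate: 0 for a symmetric distribution
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# the raw sample is strongly right-skewed, the transformed one is not
print(skewness(heavy), skewness(transformed))
```

&lt;p&gt;The raw skewness is clearly positive, while the skewness after the log transform is close to 0, which is exactly the effect we would want before feeding such a feature into many models.&lt;/p&gt;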

&lt;p&gt;Of course there are non-numerical features in the housing dataset as well. Fortunately, the zip file we downloaded earlier from Kaggle contains a file data_description.txt. If you open it in a text editor, you can see a description of all features, e.g.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
ExterQual: Evaluates the quality of the material on the exterior
     Ex  Excellent
     Gd  Good
     TA  Average/Typical
     Fa  Fair
     Po  Poor
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 6:&lt;/strong&gt; Snippet from data_description.txt&lt;/p&gt;

&lt;p&gt;Thanks to this description we can convert the feature &lt;strong&gt;ExterQual&lt;/strong&gt; into a numerical one. It may seem curious at first, but since ratings like &lt;strong&gt;Excellent&lt;/strong&gt; or &lt;strong&gt;Poor&lt;/strong&gt; are based on human estimations, it makes sense to use Fibonacci numbers rather than linearly increasing numbers when mapping this kind of feature to integers. The reason is that the difference between two adjacent numbers in a Fibonacci sequence grows with the previous number in the sequence; otherwise humans would have a harder time distinguishing between adjacent ratings (for more details see e.g. &lt;cite&gt;Scrum&lt;/cite&gt;).&lt;/p&gt;

&lt;p&gt;You may still be sceptical whether it makes sense to use Fibonacci numbers for all non-numerical features, but for the sake of this guide we want to see a simple technique for producing numerical values. In addition, we will see that the numerical version of &lt;strong&gt;ExterQual&lt;/strong&gt; is pretty good at predicting house sale prices.&lt;/p&gt;

&lt;p&gt;Of course there are also features where it does not make much sense to convert them to any number, e.g. the roof material in the housing data. They will stay untouched for now.&lt;/p&gt;

&lt;p&gt;For our feature-to-number mapping we can use Python dictionaries. Since putting all mapping dictionaries into our main .py file would clutter the code, we store them in a separate file &lt;strong&gt;KagglDataC1.py&lt;/strong&gt; instead, and put this file in the same folder as our main Python script.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

# ExterQual: Evaluates the quality of the material on the exterior
fibonacci_mapping_ExterQual = {
    "Po" : 1,    # Poor
    "Fa" : 2,   # Fair
    "TA" : 3,   # Average/Typical
    "Gd" : 5,  # Good
    "Ex" : 8,  # Excellent
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 7:&lt;/strong&gt; Snippet from KagglDataC1.py&lt;/p&gt;

&lt;p&gt;The complete file can be found on my GitHub page (see &lt;cite&gt;KagglData.py&lt;/cite&gt;), but you can also create it yourself with the methods I showed you.&lt;/p&gt;

&lt;p&gt;Now we can go back to our main Python file, and add the following cell.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

from KagglDataC1 import *
ranked_category_columns = ["BsmtQual", "BsmtCond", "BsmtExposure", 
    "BsmtFinType1", "BsmtFinType2", "HeatingQC", "KitchenQual", 
    "Functional", "FireplaceQu", "GarageFinish", "GarageQual",
    "GarageCond", "PavedDrive", "PoolQC", "Fence", "ExterCond", "ExterQual"]
def transform_categories_to_ranked(data):
    for col in ranked_category_columns:
        data[f"Ranked_{col}"] = data[col].map(globals()[f"fibonacci_mapping_{col}"])
    data = data.drop(columns=ranked_category_columns)
    return data
housing = transform_categories_to_ranked(housing)
housing_test = transform_categories_to_ranked(housing_test)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 8:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;This imports the contents of the KagglDataC1.py file, automatically converts features like &lt;strong&gt;ExterQual&lt;/strong&gt; to a numerical value, adds them as new features in the dataframes &lt;strong&gt;housing&lt;/strong&gt; and &lt;strong&gt;housing_test&lt;/strong&gt;, and deletes the columns of the original non-numerical features. Here we chose to add the prefix &lt;strong&gt;Ranked_&lt;/strong&gt; to the new features in order to distinguish from the old ones.&lt;/p&gt;
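&lt;p&gt;One caveat worth knowing: the dictionary-based &lt;strong&gt;.map&lt;/strong&gt; call silently converts any label that is missing from the mapping into &lt;strong&gt;NaN&lt;/strong&gt;. The following toy example (the column values are made up, not taken from the housing data) demonstrates this behavior:&lt;/p&gt;

```python
import pandas as pd

# The same mapping as in KagglDataC1.py, repeated here so the
# example is self-contained
fibonacci_mapping_ExterQual = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 5, "Ex": 8}

toy = pd.Series(["TA", "Gd", "Ex", "??"])  # "??" is an unmapped label

ranked = toy.map(fibonacci_mapping_ExterQual)
print(ranked.tolist())  # the unmapped label becomes NaN
```

&lt;p&gt;This is convenient here, because missing values will be handled explicitly later on, but it also means a typo in a mapping dictionary would go unnoticed unless we check for &lt;strong&gt;NaN&lt;/strong&gt; values afterwards.&lt;/p&gt;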

&lt;p&gt;Of course we should check whether the modified housing dataframes are correct. One way to do that is to run&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

display(housing_test.info())
display(housing.info())
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 9:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;which uses the &lt;strong&gt;.info()&lt;/strong&gt; method to show a list of all features (i.e. columns) of our housing dataframes together with their data type and the number of non-null entries for each feature.&lt;/p&gt;

&lt;p&gt;E.g. &lt;strong&gt;int64&lt;/strong&gt; stands for integers and &lt;strong&gt;float64&lt;/strong&gt; for floating-point values, whereas &lt;strong&gt;object&lt;/strong&gt; indicates a non-numerical feature type.&lt;/p&gt;

&lt;p&gt;If an output like the one from &lt;strong&gt;.info()&lt;/strong&gt; is too large, you can still display the whole output by clicking on &lt;strong&gt;scrollable element&lt;/strong&gt; in Visual Studio Code. If everything went fine, it should look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oljxrjyh0cwth4gbvy9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oljxrjyh0cwth4gbvy9.jpg" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Figure 3:&lt;/strong&gt; Visual Code showing housing info&lt;/p&gt;

&lt;p&gt;If there is no option to show the complete output, it is probably because there are too many columns/features in your dataset. Fortunately, you can fix this by changing the pandas display settings with code like&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

pd.set_option('display.max_info_columns', 250)
pd.set_option('display.max_rows', 250)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 10:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;For the output of &lt;strong&gt;.info()&lt;/strong&gt; the setting &lt;strong&gt;max_info_columns&lt;/strong&gt; is enough, but we also want to set &lt;strong&gt;max_rows&lt;/strong&gt; for other outputs as well (e.g. when we display the correlation matrix later on).&lt;/p&gt;

&lt;p&gt;Another way of looking at the housing dataset is to run&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

housing.describe()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 11:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;which will show you information like the mean-value or the minimum/maximum values of all numerical features in the &lt;strong&gt;housing&lt;/strong&gt; dataframe.&lt;/p&gt;

&lt;p&gt;The set of numerical features can be further divided into continuous and discrete ones. The discrete features are those with only a limited and rather small set of possible values across all samples. E.g. the housing feature &lt;strong&gt;OverallQual&lt;/strong&gt; has only integer values from 1 to 10. We can see this by running the following code&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

housing["OverallQual"].value_counts().sort_index()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Jupyter Cell&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;which results in the output&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
OverallQual

1      2
2      3
3     20
4    116
5    397
6    374
7    319
8    168
9     43
10    18
Name: count, dtype: int64
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see, there are only 2 houses with a terrible &lt;strong&gt;OverallQual&lt;/strong&gt; of 1. Now is a good time to use these methods to explore some of the other features on your own before moving on. Maybe you can find some interesting observations, e.g. you may want to look at how many kitchens or other types of rooms the house samples have.&lt;/p&gt;

&lt;p&gt;Later on we will modify the housing dataset for temporary purposes (e.g. stratified sampling). This is why we want to make a copy of its current version, so we can undo the changes. Note that copying the dataframe is not the same as a simple variable assignment, which passes the dataframe by reference instead of by value.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

housing_original = housing.copy()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 12:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;
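&lt;p&gt;The difference between a plain assignment and &lt;strong&gt;.copy()&lt;/strong&gt; can be demonstrated with a small toy dataframe (not part of the housing workflow):&lt;/p&gt;

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

alias = df            # plain assignment: both names refer to the same object
snapshot = df.copy()  # deep copy: an independent dataframe

df.loc[0, "a"] = 99
print(alias.loc[0, "a"])     # reflects the change made through df
print(snapshot.loc[0, "a"])  # still holds the original value
```

&lt;p&gt;Only the copy keeps the original values, which is exactly what we need in order to revert &lt;strong&gt;housing&lt;/strong&gt; later.&lt;/p&gt;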

&lt;p&gt;Before we modify &lt;strong&gt;housing&lt;/strong&gt;, we should look at the so-called &lt;strong&gt;standard correlation coefficients&lt;/strong&gt; between the features. Without going into too much detail, the correlation measures the linear dependency between two features. If a more complex dependency exists, or if there is no dependency at all, then the correlation should be close to 0. Otherwise it will be close to either +1 or -1. Of course it can be possible to transform non-linear dependencies into linear ones before measuring the correlations, but we will not dive too deep into this topic.&lt;/p&gt;
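&lt;p&gt;A small synthetic example (purely illustrative, not part of the housing analysis) shows why the correlation coefficient only captures linear dependencies:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)

linear = 2 * x + rng.normal(0, 0.1, x.size)  # strong linear dependency
quadratic = x**2                             # perfect, but non-linear, dependency

print(np.corrcoef(x, linear)[0, 1])     # close to +1
print(np.corrcoef(x, quadratic)[0, 1])  # close to 0 despite the dependency
```

&lt;p&gt;Even though &lt;strong&gt;quadratic&lt;/strong&gt; is completely determined by &lt;strong&gt;x&lt;/strong&gt;, its correlation with &lt;strong&gt;x&lt;/strong&gt; is nearly 0, so a correlation close to 0 does not prove that a feature is useless.&lt;/p&gt;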

&lt;p&gt;For now it is enough to know that these correlations can be easily obtained from the &lt;strong&gt;correlation matrix&lt;/strong&gt;. Since we are very much interested in the correlation of the sale price and every other feature, we only have to look up the &lt;strong&gt;SalePrice&lt;/strong&gt; column of the correlation matrix as demonstrated in the code below.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

corr_matrix = housing.corr(numeric_only = True)
corr_matrix["SalePrice"].sort_values(ascending = False)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 13:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;If you run this cell, you should get a list of the correlations of each feature (with respect to the house sale price). It should not surprise you too much that &lt;strong&gt;SalePrice&lt;/strong&gt; has a correlation of 1.0 with itself.&lt;/p&gt;

&lt;p&gt;A more interesting example is &lt;strong&gt;OverallQual&lt;/strong&gt;, which has a high correlation of 0.79. There are also features we derived from the Fibonacci numbers with a relatively high correlation (e.g. &lt;strong&gt;Ranked_ExterQual&lt;/strong&gt; has 0.69 and &lt;strong&gt;Ranked_KitchenQual&lt;/strong&gt; has 0.68).&lt;/p&gt;

&lt;p&gt;If the absolute value of a correlation is high, then it is a good indicator that we have found an important feature. The advantage of this method is that it makes such features quite easy to find.&lt;/p&gt;

&lt;p&gt;Of course it may also happen that we miss some of the other important features if their correlation is close to 0. In those cases we would need more sophisticated analysis methods.&lt;/p&gt;

&lt;h3&gt;Creating new features&lt;/h3&gt;

&lt;p&gt;Our next goal is to create completely new features based on some of the old ones in order to gain more meaningful information for predicting house sale prices. This means we have to make some educated guesses that are tailored more specifically to the concrete problem (in this case predicting house prices) instead of using very general methods like our previous Fibonacci mapping. Here are some thoughts:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;There are features in the housing dataset, which count the amount of full bathrooms &lt;strong&gt;FullBath&lt;/strong&gt; (including a shower) and &lt;strong&gt;HalfBath&lt;/strong&gt; (i.e. toilets only) separately. Furthermore, these numbers do not take into account the baths in the basement (&lt;strong&gt;BsmtFullBath&lt;/strong&gt; and &lt;strong&gt;BsmtHalfBath&lt;/strong&gt;). By computing the sum of these four features, we get a more meaningful number of the total amount of bathrooms.&lt;/li&gt;
  &lt;li&gt;The total area &lt;strong&gt;GrLivArea&lt;/strong&gt; of living space (basement not included) is already an important feature, but if we multiply it with &lt;strong&gt;OverallQual&lt;/strong&gt; we may get an even more meaningful feature.&lt;/li&gt;
  &lt;li&gt;The Fibonacci-ranked features &lt;strong&gt;Ranked_PavedDrive&lt;/strong&gt;, &lt;strong&gt;Ranked_GarageFinish&lt;/strong&gt; and &lt;strong&gt;Ranked_GarageQual&lt;/strong&gt;, together with &lt;strong&gt;GarageCars&lt;/strong&gt; and &lt;strong&gt;GarageArea&lt;/strong&gt;, probably have a positive impact on each other. Therefore, it can make sense to compute their product.&lt;/li&gt;
  &lt;li&gt;The same is probably true for the quality of the heating &lt;strong&gt;Ranked_HeatingQC&lt;/strong&gt; and the total amount of rooms &lt;strong&gt;TotRmsAbvGrd&lt;/strong&gt; (basement not included).&lt;/li&gt;
  &lt;li&gt;Ratios like the amount of bedrooms per living area, or the amount of toilets per bedroom can also lead to important new features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can translate these ideas into the code below. Note that when we compute the ratios, we have to be careful not to divide by 0. In our example we solve this problem by checking whether the denominator feature of a given sample is 0, and in that case we use an alternative feature that is guaranteed to be nonzero.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

housing["bath_sum"] = \
    housing["FullBath"] + housing["HalfBath"] \
    + housing["BsmtFullBath"] + housing["BsmtHalfBath"]
housing["areaquality_product"] = housing["GrLivArea"] * housing["OverallQual"]
housing["garage_product"] = housing["Ranked_PavedDrive"] \
    * housing["Ranked_GarageFinish"] * housing["Ranked_GarageQual"] \
    * housing["GarageCars"] * housing["GarageArea"]
housing["bedrooms_ratio"] = housing["BedroomAbvGr"] / housing["GrLivArea"]
housing["roomquality_product"] = housing["Ranked_HeatingQC"] \
    * housing["TotRmsAbvGrd"]
housing["bath_kitchen_ratio"] = np.where(
    housing["KitchenAbvGr"] != 0,  # Condition
    (housing["bath_sum"]) / housing["KitchenAbvGr"],  # True: Perform division
    (housing["bath_sum"]) / housing["TotRmsAbvGrd"]
)
housing["bath_bedroom_ratio"] = np.where(
    housing["BedroomAbvGr"] != 0,  # Condition
    (housing["bath_sum"]) / housing["BedroomAbvGr"],  # True: Perform division
    (housing["bath_sum"]) / housing["TotRmsAbvGrd"]
)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 14:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;Afterwards we can quickly check how the correlations of the new features look like compared to the old ones.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

corr_matrix = housing.corr(numeric_only = True)
corr_matrix["SalePrice"].sort_values(ascending = False)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 15:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;The new output tells us e.g. that &lt;strong&gt;areaquality_product&lt;/strong&gt; now has the highest correlation of all features, which is already an improvement. We can also see that &lt;strong&gt;garage_product&lt;/strong&gt; has a significantly higher correlation than any of its factors.&lt;/p&gt;

&lt;p&gt;We could go further by dropping the old features after replacing them with better ones, or use advanced techniques like the principal component analysis (PCA), but for the sake of keeping this guide simple we will leave the old and new features as they are right now.&lt;/p&gt;

&lt;p&gt;If we do not want to rely too much on the correlation coefficients, we can also use the so-called &lt;strong&gt;scatter matrix&lt;/strong&gt;, where each feature is plotted against every other. This can help to uncover non-linear dependencies or clusters. It can also reveal non-existing dependencies, e.g. if the plotted points are mostly aligned around a vertical or horizontal line in the plot.&lt;/p&gt;

&lt;p&gt;Note that plotting a feature against itself does not result in any interesting plot, which is why they are replaced by their corresponding histogram.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

from pandas.plotting import scatter_matrix
attributes = ["SalePrice", "garage_product", "areaquality_product", 
    "roomquality_product", "bedrooms_ratio"]
scatter_matrix(housing[attributes], figsize=(10, 10))
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 16:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;As a result we obtain the following plot. It shows e.g. how our bedrooms ratio has a non-linear dependency with other features like the sale price.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn4kmzrwewhku7jel29h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn4kmzrwewhku7jel29h.png" alt="Image description" width="800" height="795"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Figure 4:&lt;/strong&gt; Scatter matrix&lt;/p&gt;

&lt;h3&gt;Strata&lt;/h3&gt;

&lt;p&gt;Before we can apply prediction models to our data, we have to be able to test their performance. Of course we cannot test the performance very easily on the &lt;strong&gt;housing_test&lt;/strong&gt; dataset, because it is missing the sale prices. Therefore, we can only use &lt;strong&gt;housing&lt;/strong&gt; for training and for testing the quality of our predictions.&lt;/p&gt;

&lt;p&gt;For this purpose it is important to split the &lt;strong&gt;housing&lt;/strong&gt; data into a training and a test set. Otherwise we would test its performance on the same set, where our prediction model has been trained. This can lead to a false overconfidence for our prediction model, which is probably going to fail on unknown data.&lt;/p&gt;

&lt;p&gt;E.g. one common problem is overfitting, where a prediction model fits so many parameters that it can predict the outcome of the training set very well or even without any errors, but performs very poorly when applied to a different dataset.&lt;/p&gt;
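&lt;p&gt;The following sketch illustrates overfitting on synthetic data (the degrees, sample sizes, and noise level are arbitrary choices for demonstration): a heavily over-parameterized polynomial fits the training points better than a simple line, but generalizes worse to new data.&lt;/p&gt;

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

def make_data(n):
    x = rng.uniform(0, 1, n)
    y = 3 * x + rng.normal(0, 0.3, n)  # the true relationship is linear plus noise
    return x, y

x_train, y_train = make_data(15)   # small training set
x_test, y_test = make_data(200)    # independent test set

def train_test_mse(degree):
    model = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    return train_mse, test_mse

train_lo, test_lo = train_test_mse(1)   # simple model
train_hi, test_hi = train_test_mse(10)  # over-parameterized model
print(train_lo, test_lo)
print(train_hi, test_hi)
```

&lt;p&gt;The degree-10 polynomial achieves the lower training error, yet its test error is the higher one, which is the signature of overfitting.&lt;/p&gt;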

&lt;p&gt;So now we have to find a good way to split the &lt;strong&gt;housing&lt;/strong&gt; data. One possible way is to use a random split, where hashing functions can help to split the data consistently, even if new data is added to the training set some time in the future.&lt;/p&gt;

&lt;p&gt;However, in our case we do not want a purely random split, because we want to avoid that certain categories of important features are under- or overrepresented in the training vs. the test data. E.g. if we want to predict the performance of a new drug, then we want to represent the ages of all patients as evenly as possible. Of course we would have to group different ages together, i.e. we could divide the ages into bins for the ages 0-10, 10-20, 20-30, etc.&lt;/p&gt;

&lt;p&gt;This method is called &lt;strong&gt;stratified sampling&lt;/strong&gt;. In our case we have to make an educated guess to find good strata for predicting house prices. E.g. we could use &lt;strong&gt;areaquality_product&lt;/strong&gt; and &lt;strong&gt;bedrooms_ratio&lt;/strong&gt; as strata features. For this purpose we use the function &lt;strong&gt;pd.cut&lt;/strong&gt;, where the &lt;strong&gt;bins&lt;/strong&gt; define the strata groups, which are labeled with the function argument &lt;strong&gt;labels&lt;/strong&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

strata_cat_1 = pd.cut(
    housing["areaquality_product"],
    bins = [0, 5e3, 8e3, 12e3, np.inf],
    labels = [1,2,3,4],
    include_lowest = True
)
strata_cat_2 = pd.cut(
    housing["bedrooms_ratio"],
    bins = [0.0, 16e-4, 21e-4, np.inf],
    labels = [1,2,3],
    include_lowest = True
)
strata_cat_1.value_counts().sort_index().plot.bar(grid = True)
plt.show()
strata_cat_2.value_counts().sort_index().plot.bar(grid = True)
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 17:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;Also note that we used the argument &lt;strong&gt;include_lowest&lt;/strong&gt; in order to include the leftmost bin edge. E.g. if the &lt;strong&gt;bedrooms_ratio&lt;/strong&gt; is 0 for a given sample, then &lt;strong&gt;pd.cut&lt;/strong&gt; would otherwise convert it into &lt;strong&gt;NaN&lt;/strong&gt; (not a number), because the &lt;strong&gt;bins&lt;/strong&gt; start at 0, which is the leftmost edge.&lt;/p&gt;
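&lt;p&gt;A tiny standalone example (with made-up bins, not the ones above) shows this edge-case behavior of &lt;strong&gt;pd.cut&lt;/strong&gt;:&lt;/p&gt;

```python
import numpy as np
import pandas as pd

values = pd.Series([0.0, 0.5, 2.0])

# Without include_lowest, the left edge 0.0 falls outside the first
# half-open interval (0, 1] and is converted to NaN
without = pd.cut(values, bins=[0, 1, np.inf], labels=[1, 2])
with_lowest = pd.cut(values, bins=[0, 1, np.inf], labels=[1, 2],
                     include_lowest=True)

print(without.isna().tolist())     # the 0.0 entry is NaN
print(with_lowest.isna().tolist()) # all entries are binned
```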

&lt;p&gt;The resulting plots show us the histograms for both strata features. Just to make sure that &lt;strong&gt;pd.cut&lt;/strong&gt; did not create any &lt;strong&gt;NaN&lt;/strong&gt; values, we can check this quickly with the following code.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

has_nan = strata_cat_2.isna().any()  # True if any NaN exists
print("Does the series contain NaN values?", has_nan)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Jupyter Cell&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One problem we have not addressed yet is that we can split the housing dataset only with respect to one feature. If we want to combine two features, then we can use the following trick.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

sStrataCat = "Strata_Cat"
housing[sStrataCat] = strata_cat_1.astype(str) + "_" + strata_cat_2.astype(str)
housing[sStrataCat].value_counts().sort_index().plot.bar(grid = True)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 18:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;Here we simply concatenate the label strings of the two strata features, and use the resulting string as the label for the combined strata feature. The only problem is that some of the combined strata have very few samples, which could distort the prediction models or even lead to runtime errors.&lt;/p&gt;

&lt;p&gt;In order to avoid this, we can put all underrepresented strata into a new stratum called &lt;strong&gt;Other&lt;/strong&gt;, which covers the miscellaneous cases. Here the integer &lt;strong&gt;iMinCounts&lt;/strong&gt; defines the minimum number of samples each stratum needs in order not to be merged into the &lt;strong&gt;Other&lt;/strong&gt; stratum.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

iMinCounts = 100
housing_stratacat_counts = housing[sStrataCat].value_counts()
indices_of_small_housing_stratacat_counts = housing_stratacat_counts[
    housing_stratacat_counts &amp;lt; iMinCounts
].index
housing[sStrataCat] = housing[sStrataCat].apply(
    lambda x: 'Other' if x in indices_of_small_housing_stratacat_counts else x
)
housing[sStrataCat].value_counts().sort_index().plot.bar(grid = True)
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 19:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;The resulting histogram looks much better now, after the small strata have vanished.&lt;/p&gt;

&lt;p&gt;Now that we have obtained the desired strata, we can revert the housing dataset back to its previous version. In order to split the data into a training and a test set with respect to the strata, we can use the library function &lt;strong&gt;train_test_split&lt;/strong&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

from sklearn.model_selection import train_test_split
housing_strata_category = housing[sStrataCat].copy()
housing = housing_original
strat_train_set, strat_test_set = train_test_split(
    housing, test_size = 0.15, stratify = housing_strata_category, random_state = 42
)
housing = strat_train_set.drop("SalePrice", axis=1)
housing_labels = strat_train_set["SalePrice"].copy()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 20:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;
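&lt;p&gt;If you want to convince yourself that a stratified split preserves the strata proportions, you can compare the category frequencies in both splits. Here is a self-contained sketch on a toy dataframe (the 80/20 category distribution is made up for illustration):&lt;/p&gt;

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy data with an imbalanced category, standing in for Strata_Cat
df = pd.DataFrame({"value": range(100),
                   "stratum": ["A"] * 80 + ["B"] * 20})

train, test = train_test_split(df, test_size=0.2,
                               stratify=df["stratum"], random_state=42)

# Both splits preserve the 80/20 category proportions
print(train["stratum"].value_counts(normalize=True))
print(test["stratum"].value_counts(normalize=True))
```

&lt;p&gt;Without the &lt;strong&gt;stratify&lt;/strong&gt; argument, the rare category could easily end up under- or overrepresented in the small test set.&lt;/p&gt;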

&lt;h3&gt;Pipelines&lt;/h3&gt;

&lt;p&gt;Before we can apply prediction models, there is still one important step. Pipelines are very helpful for transforming the original dataset through a chain of several transformations. Even though we have already seen how transformations can be done manually (when we created new features), pipelines give us more control over the dataset.&lt;/p&gt;

&lt;p&gt;E.g. pipelines allow us to try different variations of these transformations, and to even automate this process. Before we get to this point, we need to import the required library classes and functions from sklearn.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.compose import ColumnTransformer
from sklearn.compose import make_column_selector
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import Pipeline
from sklearn.base import BaseEstimator, TransformerMixin
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 21:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;Then we define the features of the housing data that will be used to create the new features.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

list_trafo_columns = [
    "FullBath", "HalfBath", "BsmtFullBath", "BsmtHalfBath",
    "GrLivArea", "OverallQual",
    "Ranked_PavedDrive", "Ranked_GarageFinish", "Ranked_GarageQual", "GarageCars", "GarageArea",
    "BedroomAbvGr", "GrLivArea",
    "Ranked_HeatingQC", "GrLivArea",
    "KitchenAbvGr", "BedroomAbvGr", "TotRmsAbvGrd",
]

inverse_list_trafo_columns = {
    value: index for index, value in enumerate(list_trafo_columns)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 22:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;These features are all listed in &lt;strong&gt;list_trafo_columns&lt;/strong&gt;. We will see in a moment, why we also need the inversion of this list, i.e. the dictionary &lt;strong&gt;inverse_list_trafo_columns&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next we need to define a class that can transform the dataset features, i.e. we need a column transformer. This transformer is initialized by the &lt;strong&gt;__init__&lt;/strong&gt; function as demonstrated in the code below. Here &lt;strong&gt;__init__&lt;/strong&gt; allows us to use arguments like &lt;strong&gt;sum&lt;/strong&gt;, which contains the column names of the features that will be added together (e.g. when adding the number of baths, half baths, etc.). Similarly, we can define the features for the products, for the denominator (in case we need a fraction for ratios), and for an alternative denominator (in case the original denominator column is 0 for a given sample).&lt;/p&gt;

&lt;p&gt;Furthermore, we want to allow some variations to the column transformation. In order to keep the example a bit simpler, we only allow variations for the way the sum is computed with respect to some weights (&lt;strong&gt;sumweights&lt;/strong&gt;). E.g. it may be possible that half baths get a smaller weight than full baths, or that basement baths have a smaller weight than baths above the ground.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

class ColumnFormulaTransformer(BaseEstimator, TransformerMixin):
    def __init__(self,
            sum = [], product = [], denominator = [], altdenominator = [],
            sumweights = []
        ):
        self.sum = sum
        self.product = product
        self.denominator = denominator
        self.altdenominator = altdenominator
        self.sumweights = sumweights
    def fit(self, X, y=None, sample_weight=None):
        self.n_features_in_ = X.shape[1]
        return self
    def transform(self, X):
        assert self.n_features_in_ == X.shape[1]
        #calculate numerator:
        numerator = np.zeros(X.shape[0])
        for id, col in enumerate(self.sum):
            if id &amp;lt; len(self.sumweights):
                weight = self.sumweights[id]
            else:
                weight = 1
            numerator += X[:,inverse_list_trafo_columns[col]] * weight
        if self.product:
            prodnumerator = np.ones(X.shape[0])
            for col in self.product:
                prodnumerator *= X[:,inverse_list_trafo_columns[col]]
            numerator += prodnumerator
        #calculate denominator:
        if self.denominator:
            #copy to avoid overwriting X in place through the NumPy view:
            denominator = \
                X[:, inverse_list_trafo_columns[self.denominator[0]]].copy()
            if self.altdenominator:
                altdenominator = \
                    X[:, inverse_list_trafo_columns[self.altdenominator[0]]]
                denominator[denominator == 0] = altdenominator[denominator == 0]
            result = numerator / denominator
        else:
            result = numerator
        return result.reshape(-1, 1) #convert result from 1D to 2D NumPy array
    def get_feature_names_out(self, names=None):
        return ["formula"]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 23:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;This may be a bit overwhelming, but it will help you understand the versatility of the column transformer method. Another important part of transforming the data is making sure that samples with missing values are handled correctly. One solution is to replace the missing values with the median value (which works for numerical features only) or with the most frequent value (which also works for categorical features).&lt;/p&gt;

&lt;p&gt;Replacing missing values is handled quite simply with the &lt;strong&gt;SimpleImputer&lt;/strong&gt;.&lt;/p&gt;
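&lt;p&gt;For intuition, here is a tiny standalone sketch (with toy data, not the housing set) of what the imputer does before it is placed into the pipeline below:&lt;/p&gt;

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy sketch: two numerical features with missing values (NaN)
X = np.array([[1.0, 10.0],
              [np.nan, 20.0],
              [3.0, np.nan]])

imputer = SimpleImputer(strategy="median")
X_filled = imputer.fit_transform(X)

# The NaNs are replaced by the column medians: 2.0 and 15.0
print(X_filled)
```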

&lt;pre&gt;&lt;code&gt;
# %%

def make_pipeline_with_formula(
        sum = [], product = [],
        denominator = [], altdenominator = []
    ):
    return Pipeline([
        ("imputer", SimpleImputer(strategy="median")),
        (
            "ratio",
            ColumnFormulaTransformer(
                sum = sum, product = product,
                denominator = denominator, altdenominator = altdenominator
            )
        ),
        ("scaler", StandardScaler())
    ])
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 24:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;At the bottom of this pipeline you can also see the &lt;strong&gt;StandardScaler&lt;/strong&gt;, another transformer, which centers each feature around its mean and scales it to unit variance. You do not have to understand the details here, but you should know that many machine learning algorithms perform better if the numerical values are standardized in this way. Of course, we need to apply the standard scaler as the last step in order to make use of this advantage.&lt;/p&gt;
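&lt;p&gt;A minimal sketch with a toy column shows the effect of the scaler:&lt;/p&gt;

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy sketch: a single numerical feature with mean 2.5
X = np.array([[1.0], [2.0], [3.0], [4.0]])

X_scaled = StandardScaler().fit_transform(X)

# After scaling, the column has mean 0 and unit variance
print(X_scaled.mean(), X_scaled.std())
```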

&lt;p&gt;Before we pass the arguments to the column transformer, it is much more convenient to arrange them in &lt;strong&gt;ColumnTransformer_TupleList&lt;/strong&gt;. You will see why in a moment.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

bathsum_list = ["FullBath", "HalfBath", "BsmtFullBath", "BsmtHalfBath"]
ColumnTransformer_TupleList = [
    ("bath", make_pipeline_with_formula(
            sum = bathsum_list
        ), list_trafo_columns
    ),
    ("areaquality", make_pipeline_with_formula(
            product = ["GrLivArea", "OverallQual"]
        ), list_trafo_columns
    ),
    ("garage", make_pipeline_with_formula(
            product = ["Ranked_PavedDrive", "Ranked_GarageFinish",
                    "Ranked_GarageQual", "GarageCars", "GarageArea"]
        ), list_trafo_columns
    ),
    ("bedroom", make_pipeline_with_formula(
            product = ["BedroomAbvGr"], denominator = ["GrLivArea"]
        ), list_trafo_columns
    ),
    ("roomquality", make_pipeline_with_formula(
            product = ["Ranked_HeatingQC", "TotRmsAbvGrd"]
        ), list_trafo_columns
    ),
    ("bath_kitchen", make_pipeline_with_formula(
            sum = bathsum_list,
            denominator = ["KitchenAbvGr"],
            altdenominator = ["TotRmsAbvGrd"]
        ), list_trafo_columns
    ),
    ("bath_bedroom", make_pipeline_with_formula(
            sum = bathsum_list,
            denominator = ["BedroomAbvGr"],
            altdenominator = ["TotRmsAbvGrd"]
        ), list_trafo_columns
    ),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 25:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;We also have to set up the simpler pipelines. E.g. one of them manages the logarithm transformations of the heavy-tailed features. Then we can add them to our &lt;strong&gt;ColumnTransformer_TupleList&lt;/strong&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

def safe_log(x):
    return np.log(np.where(x &amp;lt;= 0, 1e-10, x))

log_pipeline = make_pipeline(
    SimpleImputer(strategy = "median"),
    #FunctionTransformer(np.log, feature_names_out = "one-to-one"),
    FunctionTransformer(safe_log, feature_names_out = "one-to-one"),
    StandardScaler()
)

cat_pipeline = make_pipeline(
    SimpleImputer(strategy="most_frequent"),
    OneHotEncoder(handle_unknown="ignore")
)

default_num_pipeline = make_pipeline(
    SimpleImputer(strategy = "median"),
    StandardScaler()
)

ColumnTransformer_TupleList.extend([
    ("log", log_pipeline, heavy_tailed_features),
    ("cat", cat_pipeline, make_column_selector(dtype_include = object)),
])
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 26:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;You may wonder what the &lt;strong&gt;OneHotEncoder&lt;/strong&gt; is good for. It takes categorical (i.e. non-numerical) features and turns each category into a new feature that can only take 0 or 1 as a value. This way the categories are kept apart: otherwise the prediction models could assume a numerical order between two different categories, even though such an order would not make any sense. For each sample, exactly one of these new features has the value 1 and all others are 0, which gives the one-hot encoder its name.&lt;/p&gt;
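&lt;p&gt;A short sketch with a hypothetical categorical feature (not taken from the housing set) makes this concrete:&lt;/p&gt;

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Toy sketch: one categorical feature with two hypothetical categories
X = np.array([["brick"], ["wood"], ["brick"]])

encoder = OneHotEncoder(handle_unknown="ignore")
X_encoded = encoder.fit_transform(X).toarray()

# Categories are sorted, so "brick" -> [1, 0] and "wood" -> [0, 1]
print(X_encoded)
```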

&lt;h3&gt;Prediction models&lt;/h3&gt;

&lt;p&gt;Now we can finally use prediction models in combination with our pipeline. For now you do not need to know how these models work internally. There are many different models, but the way they are handled in Python code is quite similar. For our purposes the so-called &lt;strong&gt;RandomForestRegressor&lt;/strong&gt; works quite well.&lt;/p&gt;

&lt;p&gt;However, prediction models cannot optimize all variables by themselves. These remaining variables are called &lt;strong&gt;hyperparameters&lt;/strong&gt;. E.g. the sum weights we discussed earlier can be seen as hyperparameters. We do not need to set them by hand; instead, this is managed by &lt;strong&gt;RandomizedSearchCV&lt;/strong&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform, randint

full_pipeline = Pipeline([
    ("preprocessing", preprocessing),
    ("random_forest", RandomForestRegressor(random_state=42)),
])

class CustomSumweightsSampler:
    def rvs(self, random_state=None):
        #["FullBath", "HalfBath", "BsmtFullBath", "BsmtHalfBath"]
        return [1.0, uniform(0.0, 1.0).rvs(random_state=random_state),
                uniform(0.0, 1.0).rvs(random_state=random_state),
                uniform(0.0, 1.0).rvs(random_state=random_state)]

param_distribs = {
    'preprocessing__bath_kitchen__ratio__sumweights': CustomSumweightsSampler(),
    'preprocessing__bath_bedroom__ratio__sumweights': CustomSumweightsSampler(),
    'preprocessing__bath__ratio__sumweights': CustomSumweightsSampler(),
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 27:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;The way it works is that hyperparameter values are randomly sampled until the best-performing combination is found. In order to evaluate each combination, &lt;strong&gt;RandomizedSearchCV&lt;/strong&gt; splits the dataset and compares the predictions against the labels (in this case the house sale prices).&lt;/p&gt;

&lt;p&gt;This should not be confused with the strata we discussed earlier, because when the model performance is evaluated, only the labels really matter. The method used here is called &lt;strong&gt;cross validation&lt;/strong&gt;. It subdivides the training set into $k$ subsets (folds), removes one of them, trains the model on the remaining $k-1$ subsets, and uses the removed subset to evaluate the performance of the prediction model; this is repeated for each fold. The number of folds is controlled by the argument &lt;strong&gt;cv&lt;/strong&gt; of &lt;strong&gt;RandomizedSearchCV&lt;/strong&gt;, as demonstrated below.&lt;/p&gt;
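&lt;p&gt;The splitting scheme itself can be sketched with a toy example: with ten samples and $k = 5$, every sample lands in exactly one validation fold of size 2, while the other eight samples form the training part.&lt;/p&gt;

```python
import numpy as np
from sklearn.model_selection import KFold

# Toy sketch of k-fold splitting with k = 5 on ten samples
X = np.arange(10).reshape(-1, 1)

for train_idx, val_idx in KFold(n_splits=5).split(X):
    # train on X[train_idx] (8 samples), evaluate on X[val_idx] (2 samples)
    print("validation fold:", val_idx)
```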

&lt;pre&gt;&lt;code&gt;
# %%

rnd_search = RandomizedSearchCV(
    full_pipeline,
    param_distributions = param_distribs,
    n_iter=15,
    cv=10,
    scoring='neg_root_mean_squared_error',
    random_state=42
)
rnd_search.fit(housing, housing_labels)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 28:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;Then one can inspect the resulting cross-validation errors of the prediction model.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

cv_rmse_scores = -rnd_search.cv_results_['mean_test_score']
rmse_summary = pd.Series(cv_rmse_scores).describe()
rmse_summary
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 29:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;Finally, we can use the prediction model to predict the labels (i.e. the sale prices) of the test dataset, whose labels are unknown.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# %%

housing_predicted_prices = rnd_search.predict(housing_test)
submission = pd.DataFrame({
    'Id': housing_test['Id'],
    'SalePrice': housing_predicted_prices
})
submission.to_csv(sLocal_Folder_Path + '/submission.csv', index=False)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Algorithm 30:&lt;/strong&gt; Jupyter Cell&lt;/p&gt;

&lt;p&gt;In this case the predicted values are saved in a file called &lt;strong&gt;submission.csv&lt;/strong&gt;. This file can then be uploaded to Kaggle, where you can see your test result in the form of the root mean squared error.&lt;/p&gt;

&lt;p&gt;If this error is below 0.20, you have a decent result for this competition. Below 0.15 is a solid result, and below 0.10 means you are really good. In our case we reach a score of about 0.14.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Aurélien Géron (2019). &lt;em&gt;Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems&lt;/em&gt;. O'Reilly Media&lt;/li&gt;
  &lt;li&gt;Jeff Sutherland (2014). &lt;em&gt;Scrum: The Art of Doing Twice the Work in Half the Time&lt;/em&gt;. Crown Currency&lt;/li&gt;
&lt;/ul&gt;



</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>beginners</category>
      <category>kaggle</category>
    </item>
  </channel>
</rss>
