<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mahmoud Ahmed</title>
    <description>The latest articles on Forem by Mahmoud Ahmed (@mahmoudai).</description>
    <link>https://forem.com/mahmoudai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F139875%2F3018ecd2-8ef2-4ddd-b363-1f50ba8a4595.jpg</url>
      <title>Forem: Mahmoud Ahmed</title>
      <link>https://forem.com/mahmoudai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mahmoudai"/>
    <language>en</language>
    <item>
      <title>Comparative guide for SQL Subqueries vs CTEs vs Temp Tables vs Views vs Materialized Views in AWS Aurora</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Wed, 24 Sep 2025 17:52:38 +0000</pubDate>
      <link>https://forem.com/mahmoudai/comparative-guide-for-sql-subqueries-vs-ctes-vs-temp-tables-vs-views-vs-materialized-views-in-aws-2cj7</link>
      <guid>https://forem.com/mahmoudai/comparative-guide-for-sql-subqueries-vs-ctes-vs-temp-tables-vs-views-vs-materialized-views-in-aws-2cj7</guid>
      <description>&lt;p&gt;In modern data-driven applications, the efficiency and readability of SQL queries can affect performance, maintainability, and developer productivity. &lt;a href="https://aws.amazon.com/rds/aurora/" rel="noopener noreferrer"&gt;AWS Aurora&lt;/a&gt;, a fully managed relational database service compatible with MySQL and PostgreSQL, offers several techniques to manage query complexity and optimize performance through: Subqueries, Common table expressions, Temporary Tables, Views, and Materialized views.&lt;/p&gt;

&lt;p&gt;Each of these approaches serves different use cases, from simplifying nested logic and organizing reusable queries to persisting precomputed results.&lt;/p&gt;

&lt;p&gt;In this article, we will dive into definitions, comparisons, real-world use cases, and examples in AWS Aurora, helping you distinguish between these techniques and decide when and why to use each.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Subquery
&lt;/h2&gt;

&lt;p&gt;A subquery is simply a query nested inside another query: the inner query runs as part of the outer one during execution and produces a temporary result.&lt;/p&gt;

&lt;p&gt;Subqueries are used heavily for simple filtering or transformations, especially when the intermediate result is disposable.&lt;/p&gt;

&lt;p&gt;The example below shows how to write one in Aurora:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Find departments where the average salary exceeds 80,000
SELECT department_id, avg_salary
FROM (
    SELECT department_id, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY department_id
) dept_avg
WHERE avg_salary &amp;gt; 80000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As with any nested logic, subqueries can lead to performance issues when poorly optimized, particularly when they involve complex joins or are repeated throughout a query. Aurora’s query optimizer handles many cases efficiently, but excessive nesting can still hurt performance.&lt;/p&gt;
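&lt;p&gt;A quick way to check whether Aurora is handling a nested query efficiently is to inspect its execution plan. The sketch below reuses the same query against the same employees table; EXPLAIN is available in both Aurora MySQL and Aurora PostgreSQL, though the plan output format differs between the two engines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Inspect how the optimizer executes the nested query
EXPLAIN
SELECT department_id, avg_salary
FROM (
    SELECT department_id, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY department_id
) dept_avg
WHERE avg_salary &amp;gt; 80000;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;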

&lt;h2&gt;
  
  
  2. Common Table Expression (CTE)
&lt;/h2&gt;

&lt;p&gt;Whenever code is redundant, the first solution that comes to mind is to “define it once, use it many times”, and that is exactly what common table expressions do. A CTE is a named temporary result set defined with the “WITH” clause and, like a subquery, it lives only for the duration of query execution.&lt;/p&gt;

&lt;p&gt;If we take the same example from the subquery section and rewrite it with a CTE, it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Find departments where the average salary exceeds 80,000
WITH dept_avg AS (
    SELECT department_id, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY department_id
)
SELECT d.department_id, d.avg_salary
FROM dept_avg d
WHERE d.avg_salary &amp;gt; 80000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can observe, the CTE approach is more readable and organized. Its value grows with complex or recursive queries and when combining multiple result sets, which makes CTEs highly recommended there, while subqueries remain a good fit for simple queries.&lt;/p&gt;
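&lt;p&gt;To illustrate the recursive case, here is a minimal sketch that walks a management hierarchy; it assumes hypothetical employee_id and manager_id columns on the same employees table. WITH RECURSIVE works in both Aurora MySQL 8.0+ and Aurora PostgreSQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- List every employee reporting (directly or indirectly) to manager 1
WITH RECURSIVE reports AS (
    SELECT employee_id, manager_id
    FROM employees
    WHERE manager_id = 1
    UNION ALL
    SELECT e.employee_id, e.manager_id
    FROM employees e
    JOIN reports r ON e.manager_id = r.employee_id
)
SELECT employee_id FROM reports;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;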

&lt;h2&gt;
  
  
  3. Temporary Table
&lt;/h2&gt;

&lt;p&gt;Moving from execution-level scope to session-level duration, temporary tables are the tool of choice: a temporary table is explicitly created and persists only for the duration of the session, serving cases where a result set needs to be reused multiple times in the same session. It clearly improves performance when working with large datasets repeatedly, but it requires more permissions than subqueries/CTEs: you need the privilege to create tables or alter database/schema objects, unlike the others, which require only SELECT permission.&lt;/p&gt;

&lt;p&gt;Here is an example to clarify how we can use this approach to calculate the same result as previous queries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Temporary table example (persists for the whole session)
CREATE TEMPORARY TABLE dept_avg_temp AS
SELECT department_id, AVG(salary) AS avg_salary
FROM employees
GROUP BY department_id;

-- You can run multiple queries against the temp table:
SELECT department_id, avg_salary
FROM dept_avg_temp
WHERE avg_salary &amp;gt; 80000;

-- (When done) optionally drop it
DROP TABLE IF EXISTS dept_avg_temp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. View
&lt;/h2&gt;

&lt;p&gt;A “View” is a virtual table defined by a query. It simplifies queries but does not store data; think of it as a blueprint that provides a layer of abstraction and encapsulation for reuse, and that simplifies maintenance and consistency, even though it does not by itself improve performance.&lt;/p&gt;

&lt;p&gt;Below is an example of how to use “views” in Aurora:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Create a view of each department's average salary
CREATE VIEW dept_avg_view AS
SELECT department_id, AVG(salary) AS avg_salary
FROM employees
GROUP BY department_id;

-- Query the view
SELECT *
FROM dept_avg_view
WHERE avg_salary &amp;gt; 80000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Materialized View
&lt;/h2&gt;

&lt;p&gt;A “Materialized View” stores the results of its query physically, unlike a standard view, and is effective for precomputing expensive aggregations (e.g., reporting dashboards), which speeds up repeated queries on large datasets. Materialized views are supported natively in Aurora PostgreSQL but not in Aurora MySQL, where they can be simulated with a regular table plus a scheduled refresh (creating either requires table-creation permission). On the persistence side, both views and materialized views are permanent as definitions until dropped; a materialized view additionally stores result data that must be refreshed. The right refresh frequency depends heavily on the use case: refreshing too often can hurt performance on massive datasets, while refreshing too rarely leaves the stored snapshot stale and behind the latest results.&lt;/p&gt;

&lt;p&gt;Below is an example of how to write a “Materialized view” for our result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Create a materialized view (Aurora PostgreSQL)
CREATE MATERIALIZED VIEW dept_avg_mv AS
SELECT department_id, AVG(salary) AS avg_salary
FROM employees
GROUP BY department_id;

-- Query it like a normal table
SELECT *
FROM dept_avg_mv
WHERE avg_salary &amp;gt; 80000;

-- Refresh when data changes
REFRESH MATERIALIZED VIEW dept_avg_mv;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
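&lt;p&gt;For Aurora MySQL, which lacks native materialized views, the table-plus-scheduled-refresh simulation mentioned above can be sketched as follows. This is an illustrative sketch, not a production recipe: it assumes the same employees table, requires the EVENT privilege, and needs the event scheduler to be enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Aurora MySQL: simulate a materialized view with a table
CREATE TABLE dept_avg_mv_sim (
    department_id INT PRIMARY KEY,
    avg_salary DECIMAL(12,2)
);

-- Refresh the stored results on a schedule
CREATE EVENT refresh_dept_avg_mv_sim
ON SCHEDULE EVERY 1 HOUR
DO
    REPLACE INTO dept_avg_mv_sim
    SELECT department_id, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY department_id;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Note that this simple REPLACE-based refresh never removes rows for departments that disappear from the source table; a full truncate-and-reload inside the event body would handle that.&lt;/p&gt;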



&lt;p&gt;The table below summarizes the differences between these approaches:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technique&lt;/th&gt;
&lt;th&gt;Persistence&lt;/th&gt;
&lt;th&gt;Permissions&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;th&gt;Aurora Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Subqueries&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Execution only&lt;/td&gt;
&lt;td&gt;Fewer&lt;/td&gt;
&lt;td&gt;Simple one-off queries&lt;/td&gt;
&lt;td&gt;Can degrade performance if deeply nested&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CTEs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Execution only&lt;/td&gt;
&lt;td&gt;Fewer&lt;/td&gt;
&lt;td&gt;Readability, recursive queries&lt;/td&gt;
&lt;td&gt;Executed inline; not materialized by default&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Temporary Tables&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Session duration&lt;/td&gt;
&lt;td&gt;More&lt;/td&gt;
&lt;td&gt;Reuse of intermediate results&lt;/td&gt;
&lt;td&gt;Useful for large intermediate sets&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Views&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Permanent until dropped&lt;/td&gt;
&lt;td&gt;More&lt;/td&gt;
&lt;td&gt;Abstraction, maintainability, and query reuse&lt;/td&gt;
&lt;td&gt;No inherent performance gains&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Materialized Views&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Permanent until dropped, stored results&lt;/td&gt;
&lt;td&gt;More&lt;/td&gt;
&lt;td&gt;Performance optimization via caching&lt;/td&gt;
&lt;td&gt;Aurora PostgreSQL only&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Note: These basic comparative examples focus on demonstrating the differences between the approaches using the same idea and result. I encourage you to go a step further and experiment with these methods on more complex queries to put this information to use and appreciate their true significance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In AWS Aurora, selecting between &lt;a href="https://docs.aws.amazon.com/redshift/latest/dg/r_Subquery_examples.html" rel="noopener noreferrer"&gt;Subqueries&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/redshift/latest/dg/r_WITH_clause.html" rel="noopener noreferrer"&gt;CTEs&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/chap-sql-server-aurora-mysql.sql.temporarytables.html" rel="noopener noreferrer"&gt;Temporary tables&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_VIEW.html" rel="noopener noreferrer"&gt;Views&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/what-is/materialized-view/" rel="noopener noreferrer"&gt;Materialized Views&lt;/a&gt; is all about balancing readability, reuse, and performance, based on business needs.&lt;/p&gt;

&lt;p&gt;The selection flow begins with subqueries or CTEs for simple, infrequent queries; if intermediate results are reused multiple times within a session, switch to temporary tables. Views are superior for maintainability and abstraction, but not speed. Finally, for high-volume, computation-heavy reporting queries where performance is critical, materialized views provide a caching solution that can be used effectively in Aurora PostgreSQL.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>aws</category>
      <category>aurora</category>
      <category>database</category>
    </item>
    <item>
      <title>ETL on AWS: Unlocking Data's Potential</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Mon, 21 Jul 2025 18:13:31 +0000</pubDate>
      <link>https://forem.com/mahmoudai/etl-on-aws-unlocking-datas-potential-1768</link>
      <guid>https://forem.com/mahmoudai/etl-on-aws-unlocking-datas-potential-1768</guid>
<description>&lt;p&gt;Extracting data from multiple sources, transforming it into a unified, analyzable format, and then loading it into a target destination is termed ETL, or &lt;strong&gt;Extract&lt;/strong&gt;, &lt;strong&gt;Transform&lt;/strong&gt;, &lt;strong&gt;Load&lt;/strong&gt;, a vital step in data management. This activity plays a key role in ensuring data quality, consistency, and relevance, which in turn enables proper analysis and sound decision-making.&lt;/p&gt;

&lt;p&gt;ETL is critical for handling disparate data sources, formats, and structures. By resolving mismatched formats, null values, and inconsistencies, it effectively prepares data for business intelligence, analytics, and reporting. ETL operations play a major role in adapting varied datasets to constantly changing business needs and in enabling the free flow of information from source to target.&lt;/p&gt;

&lt;h2&gt;
  
  
  What data sources are available on AWS, and where is each best used?
&lt;/h2&gt;

&lt;p&gt;There are numerous data-related services on AWS, offering high reliability and strong market demand. Below are the major sources used by millions of users on a daily basis:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;AWS Service&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Amazon RDS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Relational Database Service&lt;/td&gt;
&lt;td&gt;Offers managed relational databases supporting engines such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.&lt;/td&gt;
&lt;td&gt;Suited for storing structured data and maintaining traditional relational databases.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;NoSQL Database&lt;/td&gt;
&lt;td&gt;Fully managed NoSQL database service with high availability and optimized for high-performance and scalable applications.&lt;/td&gt;
&lt;td&gt;Suitable for dynamic and unstructured data applications with fast and transparent access to large amounts of data.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Simple Storage Service&lt;/td&gt;
&lt;td&gt;Scalable object storage designed to store and serve any amount of data, anywhere on the globe, at any time.&lt;/td&gt;
&lt;td&gt;Usually employed as a data lake, with structured and unstructured data in high volumes, available to use for analytics and other processing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Amazon Kinesis&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Data Streaming&lt;/td&gt;
&lt;td&gt;Service for real-time data streaming and analytics to process streaming data at scale.&lt;/td&gt;
&lt;td&gt;Mainly used for real-time analytics use cases, including clickstream analysis, IoT data processing, and logging.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Amazon Redshift&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Data Warehouse&lt;/td&gt;
&lt;td&gt;Fully managed data warehouse offering, designed for high-performance SQL query analysis.&lt;/td&gt;
&lt;td&gt;Suitable for data warehousing, allowing for fast, efficient analysis of large data sets.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Serverless Computing&lt;/td&gt;
&lt;td&gt;Serverless compute service to allow code execution as a result of events.&lt;/td&gt;
&lt;td&gt;Generally used within data pipeline ETL to process and transform data as a result of events that trigger based on changes in data sources.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This allows customers to design end-to-end data solutions using varied storage and processing capabilities according to their particular requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do AWS ETL processes integrate data across these diverse sources?
&lt;/h2&gt;

&lt;p&gt;AWS ETL processes integrate data across diverse sources through a mix of services and capabilities. Below is an illustration of how ETL processes can bring data from multiple sources together; beyond AWS services, ETL processes can also efficiently ingest data from divergent sources such as relational databases, NoSQL stores, streaming data, or external systems. The modularity and scalability of AWS services allow companies to create dynamic ETL pipelines that cope with the complications of their multi-dimensional data landscapes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70e6mjs5wbcxe3kdo6bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70e6mjs5wbcxe3kdo6bx.png" alt="ETL Chronographing in AWS" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How do AWS services integrate with each other to serve an end-to-end ETL pipeline?
&lt;/h2&gt;

&lt;p&gt;AWS services integrate well with one another, providing a solid platform for creating end-to-end ETL (Extract, Transform, Load) pipelines. The combination of these services makes managing various data sources a cost-effective and well-integrated process.&lt;/p&gt;

&lt;p&gt;Below is a sample of the unified usage of these AWS services. With it, organizations can build responsive and scalable ETL pipelines capable of handling the complexity of today's secured data environments, providing an integrated and networked environment for data processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzvzz3v99ycqvw3gr9wf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzvzz3v99ycqvw3gr9wf.png" alt="scalable ETL pipeline in AWS" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>etl</category>
      <category>data</category>
    </item>
    <item>
      <title>My AWS Journey Just Leveled Up!</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Tue, 22 Apr 2025 21:48:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/my-aws-journey-just-leveled-up-552k</link>
      <guid>https://forem.com/aws-builders/my-aws-journey-just-leveled-up-552k</guid>
<description>&lt;p&gt;Hello, this is Mahmoud, a Certified AWS Machine Learning Specialist, and I'm so happy to announce that I've been selected as an AWS Community Builder in the Networking and Content Delivery category! This is an amazing opportunity, and I'm really looking forward to diving deeper into the world of AWS, learning from the best, and contributing back to this incredible community.&lt;/p&gt;

&lt;p&gt;For those unfamiliar with the AWS Community Builder Program and its offerings, you can find more details &lt;a href="https://aws.amazon.com/developer/community/community-builders/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. In a line: this community is designed for developers and enthusiasts to learn, connect, and collaborate on all things AWS.&lt;/p&gt;

&lt;p&gt;A big thank you to Jason Dunn for this opportunity. I'm also eager to connect with Corey Strausman, Thembile Ndlovu, and all the other brilliant minds in the AWS Community Builders program. Let's build it together!&lt;/p&gt;

&lt;p&gt;🎁 And the excitement doesn't stop there! My AWS Community Builder Swag Kit has arrived!&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hm6nlq1zm827mk3edhr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hm6nlq1zm827mk3edhr.PNG" alt="Swag Kit" width="578" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A huge thank you to Jason Dunn, and everyone who makes this community so awesome. I'm so ready for this journey ahead and can't wait to learn and grow.&lt;/p&gt;

&lt;p&gt;Let's build it together!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>awscommunitybuilders</category>
    </item>
    <item>
      <title>Step-by-Step Guide to Upgrading an Existing AWS EKS Cluster</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Tue, 10 Sep 2024 17:50:01 +0000</pubDate>
      <link>https://forem.com/aws-builders/step-by-step-guide-to-upgrading-an-existing-aws-eks-cluster-578h</link>
      <guid>https://forem.com/aws-builders/step-by-step-guide-to-upgrading-an-existing-aws-eks-cluster-578h</guid>
<description>&lt;p&gt;Upgrading your Amazon Elastic Kubernetes Service (EKS) cluster is crucial, but why is it necessary? Why not leave it as it is if it’s already functioning?&lt;br&gt;
To answer these questions, you need to know a few things about Kubernetes and AWS EKS in general:&lt;/p&gt;

&lt;p&gt;First, Kubernetes is a rapidly evolving open-source project with periodic releases that adopts the concept of regular upgrades. Its versioning and release process follows roughly a quarterly cycle, with a cadence ranging between 2 and 5 months.&lt;/p&gt;

&lt;p&gt;Second, the pricing: any running software should use minimal resources at an affordable cost. But what if I told you this: "your running software will cost you 6x the current cost starting from next month"? I'm sure you won't be happy with that, especially if you have high-scale solutions.&lt;/p&gt;

&lt;p&gt;AWS introduced &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html" rel="noopener noreferrer"&gt;two types of support&lt;/a&gt; for EKS:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1- Standard support&lt;/strong&gt;: begins when a version becomes available in Amazon EKS and continues for 14 months — the same as the upstream Kubernetes support window for minor versions, which will cost you $0.10 per cluster per hour.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2- Extended support&lt;/strong&gt;: in Amazon EKS, begins immediately at the end of standard support and continues for 12 months, but at $0.60 per cluster per hour. If you don't upgrade the EKS cluster by the end of extended support, AWS will force-upgrade the cluster to the next version still in extended support, which is not recommended because it can produce incompatibility or downtime for your cluster.&lt;/p&gt;

&lt;p&gt;So it's always recommended that you upgrade it yourself and stay on the standard support plan to ensure you are taking advantage of the latest features, security fixes, and the lower pricing plan. However, it is a process that must be handled carefully to avoid any downtime.&lt;/p&gt;

&lt;p&gt;In this article, we will walk you through safely upgrading your EKS cluster using the command line.&lt;/p&gt;
&lt;h2&gt;
  
  
  Before You Begin
&lt;/h2&gt;

&lt;p&gt;Before diving into the upgrade process, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI installed and configured with appropriate permissions; you can check &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; configured to interact with your EKS cluster; here's &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html" rel="noopener noreferrer"&gt;how to configure it&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 1: Assess Your Current Environment
&lt;/h2&gt;

&lt;p&gt;Begin by assessing your current cluster version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks describe-cluster &lt;span class="nt"&gt;--name&lt;/span&gt; your-cluster-name &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"cluster.version"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, list the available upgrade versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks list-updates &lt;span class="nt"&gt;--name&lt;/span&gt; your-cluster-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Plan Your Upgrade
&lt;/h2&gt;

&lt;p&gt;Choose an appropriate version to upgrade to. It's recommended to review the release notes for the target version to understand any changes or necessary actions before and after the upgrade.&lt;/p&gt;

&lt;p&gt;Be aware of the upgrade plan:&lt;br&gt;
The upgrade touches many parts of the cluster (control plane, data plane, add-ons, policies, and other related components such as load balancers, etc.)&lt;br&gt;
and can be summarized in a few points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand Deprecation Policies.&lt;/li&gt;
&lt;li&gt;Review Kubernetes upgrade insights provided by AWS EKS, to understand any possible impact on your cluster, such as breaking changes that may affect your workloads.&lt;/li&gt;
&lt;li&gt;Check cluster add-on compatibility: verify that any existing cluster add-ons are compatible with the cluster version you intend to upgrade to, as EKS doesn’t automatically upgrade add-ons when new versions are released or after a cluster update.&lt;/li&gt;
&lt;li&gt;Validate that your services still work: nodes are in active status with images pulled correctly, endpoints route to the right resources, and any other components that don't upgrade automatically with the cluster are compatible and running.&lt;/li&gt;
&lt;/ul&gt;
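<p>For the add-on compatibility check, the AWS CLI can list which add-on versions are compatible with your target cluster version (vpc-cni below is just an example add-on name):<br>
</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks describe-addon-versions --kubernetes-version new-version --addon-name vpc-cni
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;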
&lt;h2&gt;
  
  
  Step 3: Upgrade the AWS EKS Cluster
&lt;/h2&gt;

&lt;p&gt;Now, initiate the upgrade:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks update-cluster-version &lt;span class="nt"&gt;--name&lt;/span&gt; your-cluster-name &lt;span class="nt"&gt;--kubernetes-version&lt;/span&gt; new-version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Monitor the update status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks describe-cluster &lt;span class="nt"&gt;--name&lt;/span&gt; the-cluster-name &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"cluster.status"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Update Your Node Groups
&lt;/h2&gt;

&lt;p&gt;If your cluster uses managed node groups, they will also require updating:&lt;/p&gt;

&lt;p&gt;List your node groups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks list-nodegroups &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; the-cluster-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the node group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks update-nodegroup-version &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; the-cluster-name &lt;span class="nt"&gt;--nodegroup-name&lt;/span&gt; the-nodegroup-name &lt;span class="nt"&gt;--kubernetes-version&lt;/span&gt; new-version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Upgrade Add-Ons
&lt;/h2&gt;

&lt;p&gt;For those using AWS EKS add-ons, ensure they are also updated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks update-addon &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; the-cluster-name &lt;span class="nt"&gt;--addon-name&lt;/span&gt; the-addon-name &lt;span class="nt"&gt;--addon-version&lt;/span&gt; the-new-addon-version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Verify system functionality
&lt;/h2&gt;

&lt;p&gt;After the upgrade, it’s essential to check the cluster’s health and functionality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;kubectl&lt;/code&gt; to check the pods and deployments status.&lt;/li&gt;
&lt;li&gt;Verify that all services are operating smoothly and test essential functionalities.&lt;/li&gt;
&lt;/ul&gt;
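&lt;p&gt;For example, a quick post-upgrade health pass might look like this (illustrative commands; adjust namespaces and filters to your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Confirm nodes are Ready and report the new version
kubectl get nodes

# Check that pods are running across all namespaces
kubectl get pods --all-namespaces

# Review recent events for errors
kubectl get events --sort-by=.metadata.creationTimestamp
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;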

&lt;h2&gt;
  
  
  Step 7: Update Your Kubeconfig
&lt;/h2&gt;

&lt;p&gt;Lastly, update your kubeconfig file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; the-cluster-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Now you have initial insight into upgrading your AWS EKS cluster, which is a straightforward process when done carefully. Always follow best practices, such as backing up your data and carefully reading version release notes. And don't forget to redeploy any service that surfaces a failure and requires you to update your objects/charts to support the new version.&lt;br&gt;
Regular upgrades are essential for security, performance, access to new features, and saving your budget, making this knowledge an invaluable part of managing Kubernetes infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>When to use (SELECT * FROM CTE) or just LEFT JOIN CTE?</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Fri, 14 Apr 2023 11:54:13 +0000</pubDate>
      <link>https://forem.com/mahmoudai/when-to-use-selcet-from-cte-or-just-left-join-cte-1e0f</link>
      <guid>https://forem.com/mahmoudai/when-to-use-selcet-from-cte-or-just-left-join-cte-1e0f</guid>
      <description>&lt;p&gt;Whether to use&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;SELECT * FROM CTE &lt;br&gt;
or &lt;br&gt;
LEFT JOIN CTE&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 directly depends on the specific requirements of your query and the structure of your &lt;a href="https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL#Common_table_expression" rel="noopener noreferrer"&gt;Common Table Expression (CTE)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If your CTE returns a single result set that can be easily joined to other tables in your query, then using &lt;a href="https://en.wikipedia.org/wiki/Join_(SQL)#Left_outer_join" rel="noopener noreferrer"&gt;LEFT JOIN&lt;/a&gt; CTE directly can be more efficient and easier to read.&lt;/p&gt;

&lt;p&gt;However, if your CTE includes multiple queries or subqueries that need to be combined before being joined to other tables, or if you need to apply further filtering or sorting to the CTE results before joining to other tables, then using SELECT * FROM CTE can provide more flexibility.&lt;/p&gt;

&lt;p&gt;Additionally, keep in mind that how the database executes a CTE referenced in a LEFT JOIN depends on the engine and its optimizer: the CTE may be materialized once or inlined into the outer query. If a complex CTE ends up re-evaluated, or simply returns a large number of rows, this can cause performance issues.&lt;/p&gt;

&lt;p&gt;Let's take an example&lt;/p&gt;

&lt;p&gt;Suppose we have a table called orders that contains information about customer orders, and we want to use a CTE to calculate the total revenue generated by each customer. The CTE might look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH customer_revenue AS (
  SELECT customer_id, SUM(price * quantity) AS total_revenue
  FROM orders
  GROUP BY customer_id
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, suppose we want to join the results of this CTE with another table called customers that contains information about each customer. We could do this using either SELECT * FROM customer_revenue or LEFT JOIN customer_revenue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Option 1: SELECT * FROM CTE
SELECT *
FROM customers
JOIN (SELECT * FROM customer_revenue) AS revenue
  ON customers.customer_id = revenue.customer_id;

-- Option 2: LEFT JOIN CTE
SELECT *
FROM customers
LEFT JOIN customer_revenue
  ON customers.customer_id = customer_revenue.customer_id;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, both options produce the same result for customers that have at least one order (note that the LEFT JOIN version also keeps customers with no matching revenue, returning NULLs for them, while the inner JOIN in Option 1 drops those rows). Beyond that, there are some important differences to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Option 1 (SELECT * FROM CTE) allows us to apply further filtering or sorting to the CTE results before joining to other tables. For example, we could add a WHERE clause to the inner query to only include customers who have generated more than $100 in revenue:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT *
FROM customers
JOIN (SELECT * FROM customer_revenue WHERE total_revenue &amp;gt; 100) AS revenue
  ON customers.customer_id = revenue.customer_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Option 2 (LEFT JOIN CTE) can be more efficient if the CTE is complex or returns a large number of rows, because the CTE result is joined directly without the extra derived-table layer of Option 1 (though most optimizers flatten that layer away).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, both options are valid and can be used depending on the specific requirements of your query and the structure of your CTE. It is recommended to test both approaches and choose the one that provides the best performance and readability for your use case.&lt;/p&gt;
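&lt;p&gt;To see both options side by side, here is a minimal, self-contained sketch using Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; module (the tables, rows, and revenue numbers are made up for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER, name TEXT);
CREATE TABLE orders (customer_id INTEGER, price REAL, quantity INTEGER);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders VALUES (1, 10.0, 3), (1, 5.0, 2), (2, 8.0, 1);
""")

cte = """
WITH customer_revenue AS (
  SELECT customer_id, SUM(price * quantity) AS total_revenue
  FROM orders
  GROUP BY customer_id
)
"""

# Option 1: join against a subquery over the CTE
option1 = conn.execute(cte + """
SELECT c.name, r.total_revenue
FROM customers c
JOIN (SELECT * FROM customer_revenue) AS r
  ON c.customer_id = r.customer_id
ORDER BY c.customer_id
""").fetchall()

# Option 2: LEFT JOIN the CTE directly
option2 = conn.execute(cte + """
SELECT c.name, cr.total_revenue
FROM customers c
LEFT JOIN customer_revenue cr
  ON c.customer_id = cr.customer_id
ORDER BY c.customer_id
""").fetchall()

print(option1)  # [('Ada', 40.0), ('Grace', 8.0)]
print(option2)  # same rows here, since every customer has orders
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both queries return the same rows here because every customer has at least one order; for a customer with no orders, only the LEFT JOIN version would keep the row (with a NULL revenue).&lt;/p&gt;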

</description>
      <category>sql</category>
    </item>
    <item>
      <title>How I passed AWS Machine Learning Specialty Exam?</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Mon, 13 Sep 2021 12:20:45 +0000</pubDate>
      <link>https://forem.com/mahmoudai/how-i-passed-aws-machine-learning-specialty-exam-1jhg</link>
      <guid>https://forem.com/mahmoudai/how-i-passed-aws-machine-learning-specialty-exam-1jhg</guid>
      <description>&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;My experience grew in the area of designing ML-based systems from experimenting with ideas to publishing working systems, but I wanted to become the kind of engineer who works on powerful scalable platforms like AWS, and from this point I focused to deep dive into AWS services especially those cooperating with machine learning system lifecycle.&lt;/p&gt;

&lt;p&gt;While I was doing my research I encountered a scholarship program offered by MCIT in Egypt, that aims to train 500 software engineers on AWS Certified Machine Learning Specialty to be certified specialists. I applied for it and after some selection phases I got the opportunity to be a Certified AWS ML Specialist and the challenge spark was ignited.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the exam about?
&lt;/h2&gt;

&lt;p&gt;AWS offers certificates for all career levels, grouped into families. One of them is the "specialty" family, which my Machine Learning Specialty certificate belongs to. Its goal, according to the &lt;a href="https://aws.amazon.com/certification/certified-machine-learning-specialty/?ch=sec&amp;amp;sec=rmg&amp;amp;d=2" rel="noopener noreferrer"&gt;AWS certification website&lt;/a&gt;, is:&lt;/p&gt;

&lt;p&gt;"&lt;em&gt;The AWS Certified Machine Learning - Specialty certification is intended for individuals who perform a development or data science role. It validates a candidate's ability to design, implement, deploy, and maintain machine learning (ML) solutions for given business problems.&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;The certificate is valid for 3 years. To pass, you must score at least 75% on the exam's 65 questions, which cover many areas in cloud architecture patterns, development, and services distributed across these domains:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hmyterbqe9xv9sp58bm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hmyterbqe9xv9sp58bm.png" alt="Exam modules" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this exam is different than other AWS exams?
&lt;/h2&gt;

&lt;p&gt;This exam is different because it depends on several aspects, represented in the chart below; if you are missing one of them, you will have to put in more effort to fill that gap.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferfdgbont483h78gf51c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferfdgbont483h78gf51c.png" alt="ML exam focus" width="651" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ML Concepts&lt;/strong&gt; covers the science behind the models and the basic statistical approaches you need to be aware of before taking the exam.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ML Experience&lt;/strong&gt; is hands-on experience in building systems and solving problems across the machine learning life cycle.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Experience&lt;/strong&gt; comes from working with AWS services, whether ML-related or general-purpose. This kind of experience helps a lot in shifting your mindset towards the cloud, and AWS in particular.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are the resources I used?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.aws.training/Details/eLearning?id=42183" rel="noopener noreferrer"&gt;Exam Readiness AWS Certified Machine Learning - Specialty&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.udemy.com/course/aws-machine-learning/" rel="noopener noreferrer"&gt;"AWS Certified Machine Learning Specialty 2021 - Hands On!" by Frank Kane&lt;/a&gt; on Udemy (they promote a lot of discounts btw).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gitlab.com/juliensimon/awsmlmap" rel="noopener noreferrer"&gt;AWS ML services mind map&lt;/a&gt; by Julien Simon.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.examtopics.com/exams/amazon/aws-certified-machine-learning-specialty/" rel="noopener noreferrer"&gt;Exam topics dump questions&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/" rel="noopener noreferrer"&gt;AWS documentations&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/playlist?list=PLhr1KZpdzukcOr_6j_zmSrvYnLUtgqsZz" rel="noopener noreferrer"&gt;"Amazon SageMaker Technical Deep Dive Series"&lt;/a&gt; by Emily Webber on YouTube.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.packtpub.com/product/aws-certified-machine-learning-specialty-mls-c01-certification-guide/9781800569003" rel="noopener noreferrer"&gt;"AWS Certified Machine Learning Specialty: MLS-C01 Certification Guide"&lt;/a&gt; Book By Somanath Nanda , Weslley Moura.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What about my studying strategies?
&lt;/h2&gt;

&lt;p&gt;Using the resources above requires a good plan to maximize the final outcome and this strategy works well for me:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxnaen3r9u0l17uy5hud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxnaen3r9u0l17uy5hud.png" alt="my personal strategy" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step #1&lt;/strong&gt;: Study the AWS Exam Readiness course by watching every domain video and taking the related quiz, then solve the "Additional Study Questions" module, around 40 questions covering all domains, to get an initial view of your weak spots. I recommend keeping this score as a baseline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step #2&lt;/strong&gt;: Watch the Udemy course, take notes, and do the practice labs; this will give you experience with the concepts and make you more familiar with AWS services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step #3&lt;/strong&gt;: Reviewing the mind map helped a lot in connecting the dots, organizing what I had learned, and answering some questions left over from the previous steps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step #4 &amp;amp; #5&lt;/strong&gt;: Solving the dump questions was a very important step (a lot of exam questions are very similar to them), but the power of Exam Topics lies not in the questions themselves but in the discussions under them: they reference AWS documentation and argue different points of view on the answers. Reading those links will open your mind to areas you haven't seen before, and seeing how others answer will help you spot the keywords in a question that make the difference in selecting the right answer. You can answer these questions multiple times, but avoid memorizing them; instead, justify each answer you select.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step #6&lt;/strong&gt;: Watching the SageMaker Deep Dive Series helped me focus on SageMaker as a platform, and on what, when, and how to use its services, an important part of my exam preparation. I didn't dedicate separate time to it; I just watched one or two videos alongside Step #7.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step #7&lt;/strong&gt;: Reading the AWS MLS guide book was a great experience because it's designed as a summary, with GitHub code snippets and questions after every chapter to revise the concepts and confirm you are familiar with every topic, without going into too much detail. I highly recommend taking notes; they are great for revision on the day before the exam.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step #8&lt;/strong&gt; (optional): re-solve the "Additional Study Questions" module in Exam Readiness and compare the result with the baseline score from Step #1. My score jumped from 60% to 94%, which was an amazing feeling and gave me more confidence in my skills.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzphjsu5168eb90hcoxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzphjsu5168eb90hcoxe.png" width="797" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How was the exam experience?
&lt;/h2&gt;

&lt;p&gt;I registered for my exam after preparing the study plan and estimating the time I needed to be ready, or so I thought; being 100% ready is just a dream, and if I had waited until I felt fully ready I would have rescheduled the exam once or twice. So my tip is: prepare a plan, estimate the time, and stick to the deadline you set.&lt;br&gt;
The day before the exam I used a summary of the points and modules I would need to review on the fly, like important formulas and mind maps, because revising every single point in the content would take far too long.&lt;br&gt;
I registered at the &lt;a href="https://www.globalknowledge.com/" rel="noopener noreferrer"&gt;Global Knowledge&lt;/a&gt; testing center, a certified center for &lt;a href="https://home.pearsonvue.com/" rel="noopener noreferrer"&gt;Pearson VUE&lt;/a&gt;, which I preferred over an online exam to avoid the risk of a dropped connection or other unexpected issues, plus the &lt;a href="https://home.pearsonvue.com/Test-takers/OnVUE-online-proctoring.aspx" rel="noopener noreferrer"&gt;pre-exam setup&lt;/a&gt; headache. The experience was great and exactly as I expected from the &lt;a href="https://vimeo.com/482759138" rel="noopener noreferrer"&gt;"What to expect when testing with Pearson VUE?"&lt;/a&gt; video. The hardest part for me was sitting for 3 hours straight, which I wasn't used to.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the benefits I got after passing the exam?
&lt;/h2&gt;

&lt;p&gt;Besides the value of the certificate itself in promoting your career and opening new job positions, AWS offers other great benefits, such as:&lt;br&gt;
Support for your next challenge: a free practice exam voucher, plus a 50% discount on your next exam, valid for the next 3 years.&lt;br&gt;
Engaging with the AWS Certified Community on &lt;a href="https://www.linkedin.com/groups/6814264/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, which helps in making new connections, interacting with peers, and learning from others who have validated their technical skills with AWS certifications.&lt;br&gt;
Becoming a Subject Matter Expert by applying to the &lt;a href="https://aws.amazon.com/certification/certification-sme-program/" rel="noopener noreferrer"&gt;AWS SME program&lt;/a&gt; to help decide exam topics, develop questions, and determine passing scores.&lt;/p&gt;

&lt;p&gt;So being AWS certified is not just an exam and a certificate; it's a package, and a relationship that has started and will continue for a long time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This journey was amazing and the experience and the value I got opened my horizons even more. I hope you've enjoyed reading this post and maybe even feel inspired to get AWS certified yourself (let me know if you are). It's not too late to make this one of your goals of the year!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>certifications</category>
      <category>exam</category>
    </item>
    <item>
      <title>Python Access Modifiers</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Sun, 01 Nov 2020 17:05:44 +0000</pubDate>
      <link>https://forem.com/mahmoudai/python-access-modifiers-3aoe</link>
      <guid>https://forem.com/mahmoudai/python-access-modifiers-3aoe</guid>
      <description>&lt;p&gt;We can impose access restrictions on different data members and member functions. The restrictions are specified via access modifiers: levels of information hiding defined for each data member or method in a class.&lt;/p&gt;

&lt;p&gt;There are 3 types of access modifiers in Python:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public&lt;/li&gt;
&lt;li&gt;Protected&lt;/li&gt;
&lt;li&gt;Private&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Public
&lt;/h2&gt;

&lt;p&gt;Public attributes are those that can be accessed both inside and outside the class.&lt;br&gt;
In Python, all attributes are public by default; if you want a different access level, you have to mark the attribute accordingly.&lt;/p&gt;

&lt;p&gt;Below is an example of implementing a public access modifier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Vehicle:

    def __init__(self, name, speed):
        self.name = name    # public data member
        self.speed = speed  # public data member

    def display_name(self):  # public method
        print("name:", self.name)


my_vehicle = Vehicle("USS Nautilus", 3000)
my_vehicle.display_name()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Output
name: USS Nautilus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, the properties name and speed, and the method &lt;em&gt;display_name&lt;/em&gt; are public as they can be accessed in the class as well as outside the class.&lt;/p&gt;

&lt;h1&gt;
  
  
  Protected
&lt;/h1&gt;

&lt;p&gt;Protected attributes can be accessed only within the class they are declared in and its subclasses (derived classes), and are defined by adding a single underscore _ before the attribute name.&lt;/p&gt;

&lt;p&gt;Below is a simple implementation for inheritance with protected attributes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Vehicle:  # parent class

    def __init__(self, name, speed):
        self._name = name    # protected data member
        self._speed = speed  # protected data member

    def _display_name(self):  # protected method
        print("name:", self._name)


class Car(Vehicle):  # child class

    def __init__(self, name, speed, year):
        super().__init__(name, speed)  # use super() to reach the parent class
        self.year = year  # public data member

    def display_details(self):  # public method
        self._display_name()
        print("speed: ", self._speed)
        print("year:", self.year)


my_car = Car("GMC Hummer EV", 300, 2020)
my_car.display_details()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Output
name: GMC Hummer EV
speed:  300
year: 2020
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code, the &lt;em&gt;Car&lt;/em&gt; class inherits from the &lt;em&gt;Vehicle&lt;/em&gt; class, and as we can see, the Vehicle's protected members can be accessed from the derived class &lt;em&gt;Car&lt;/em&gt; just like its own member &lt;em&gt;year&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;However, Python doesn't actually enforce the protected modifier; the underscore is only a naming convention. The attributes defined in the above program are still accessible, and even modifiable, from outside the class.&lt;/p&gt;
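&lt;p&gt;A quick sketch of that caveat, reusing a trimmed-down &lt;em&gt;Vehicle&lt;/em&gt; class (the value 4000 is just an arbitrary number for the demo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Vehicle:

    def __init__(self, name, speed):
        self._name = name    # protected by convention only
        self._speed = speed  # protected by convention only


v = Vehicle("USS Nautilus", 3000)
print(v._name)   # works from outside the class: USS Nautilus
v._speed = 4000  # can even be modified from outside
print(v._speed)  # 4000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;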

&lt;h1&gt;
  
  
  Private
&lt;/h1&gt;

&lt;p&gt;Private attributes cannot be accessed directly from outside the class but can be accessed from inside the class.&lt;/p&gt;

&lt;p&gt;The aim of the private level is to keep data hidden from users and other classes; it is achieved by adding a double underscore __ before the attribute name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Vehicle:

   def __init__(self, name, speed):
       self.__name = name  #private data member
       self.__speed = speed  #private data member

   def __display_name(self): #private method
       print("name:", self.__name)

   def __display_speed(self):  #private method
       print("speed:", self.__speed)

   def display_details(self): #public method
       self.__display_name()
       self.__display_speed()


my_vehicle = Vehicle("USS Nautilus", 3000)
my_vehicle.display_details()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Output
name: USS Nautilus
speed: 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code we declared the same &lt;em&gt;Vehicle&lt;/em&gt; class, but with private data members. Accessing them from inside the class succeeds, as the public &lt;em&gt;display_details&lt;/em&gt; method shows, but trying to access &lt;em&gt;__name&lt;/em&gt; or &lt;em&gt;__speed&lt;/em&gt; directly from outside the class raises an AttributeError.&lt;/p&gt;
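&lt;p&gt;Under the hood, Python implements private attributes with name mangling: inside the class, &lt;em&gt;__speed&lt;/em&gt; is stored as &lt;em&gt;_Vehicle__speed&lt;/em&gt;. A short sketch of what happens when we reach for the private members from outside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Vehicle:

    def __init__(self, name, speed):
        self.__name = name    # private data member
        self.__speed = speed  # private data member


v = Vehicle("USS Nautilus", 3000)

try:
    print(v.__speed)  # direct access from outside fails
except AttributeError as err:
    print("AttributeError:", err)

# The mangled name is still reachable if you really want it:
print(v._Vehicle__speed)  # 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;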

</description>
      <category>python</category>
      <category>oop</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Types of Machine Learning Systems From Other Perspectives</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Tue, 01 Oct 2019 14:29:17 +0000</pubDate>
      <link>https://forem.com/devmahmoud10/types-of-machine-learning-systems-from-other-perspectives-159b</link>
      <guid>https://forem.com/devmahmoud10/types-of-machine-learning-systems-from-other-perspectives-159b</guid>
      <description>&lt;p&gt;Many people classify machine learning systems into supervised and unsupervised learning, but in truth, machine learning systems can be categorized in many ways, and the criteria for categorizing them are not limited to a single dimension.&lt;br&gt;
Some of the factors on which systems are classified are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Human Supervision (Supervised, Unsupervised, Semisupervised, and Reinforcement Learning).&lt;/li&gt;
&lt;li&gt;Incremental Learning (Online and Batch Learning).&lt;/li&gt;
&lt;li&gt;Making Predictions (Instance-Based and Model-Based Learning).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Jx4HCNF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/j6lxychn9bcerefqz5cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Jx4HCNF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/j6lxychn9bcerefqz5cb.png" alt="Machine learning Systems"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's look deeper into each of these criteria …&lt;/p&gt;

&lt;h2&gt;
  
  
  Human Supervision
&lt;/h2&gt;

&lt;p&gt;Machine learning systems can be classified based on the type of supervision into four major categories:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ywTwCaQX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/h41gmghuftpc03jg6t3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ywTwCaQX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/h41gmghuftpc03jg6t3l.png" alt="Human Supervision ML Systems"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Supervised Learning
&lt;/h3&gt;

&lt;p&gt;In supervised learning, the training data introduced to the system must include the label of each sample. For example, a spam classifier needs each message together with its label (spam or not spam) to learn how to classify new messages; this task is called classification.&lt;br&gt;
In another example, house price prediction, the algorithm needs each house's specifications together with its label, in this case the price, so the system can predict the price of a new house; this task is called regression.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N0az0nXG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/44611i9kgl49ylj1c2i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N0az0nXG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/44611i9kgl49ylj1c2i0.png" alt="Supervised learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Unsupervised Learning
&lt;/h3&gt;

&lt;p&gt;In unsupervised learning, the training data fed to the system carries no labels describing which class each sample belongs to, so the system must rely on itself to discover patterns among the samples, which reduces the accuracy of the algorithm. It is nevertheless the most important of these categories, because most data is unlabeled and we need to find the patterns within it.&lt;br&gt;
Some of the most common tasks in unsupervised learning are clustering (splitting the dataset into groups based on similar patterns between data points), anomaly detection (discovering unusual data points, which is useful for finding fraudulent transactions and for spotting outliers in the data preprocessing phase), and dimensionality reduction (also used in preprocessing, to reduce the number of features in the dataset).&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2zWdb6DX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/wbzufn2goqpjqkp0179a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2zWdb6DX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/wbzufn2goqpjqkp0179a.png" alt="Unsupervised learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Semi-supervised Learning
&lt;/h3&gt;

&lt;p&gt;In semi-supervised learning, a mix of labeled and unlabeled data lets the model combine supervised and unsupervised techniques. For example, you can train a model on the labeled data, use the trained model to classify the unlabeled data, and then feed all the high-probability predictions back in to re-train the model on a larger amount of labeled data; this technique is called &lt;a href="https://www.analyticsvidhya.com/blog/2017/09/pseudo-labelling-semi-supervised-learning-technique/"&gt;pseudo-labeling&lt;/a&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_jhHKtMb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/09krdj8ucgvsrmlkgv0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_jhHKtMb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/09krdj8ucgvsrmlkgv0y.png" alt="Semi-supervised Learning"&gt;&lt;/a&gt;&lt;/p&gt;
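&lt;p&gt;The loop described above can be sketched with a toy classifier; for brevity this sketch pseudo-labels every unlabeled point rather than only the high-confidence ones, and the points and labels are made up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import math

labeled = [((0.0, 0.0), "a"), ((10.0, 10.0), "b")]
unlabeled = [(0.5, 0.5), (9.5, 9.8)]

def predict(point, data):
    # 1-NN on the currently labeled data
    nearest = min(data, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

# Step 1: label the unlabeled points with the current model
pseudo = [(p, predict(p, labeled)) for p in unlabeled]

# Step 2: re-train (here: just enlarge the memorized set) on everything
labeled = labeled + pseudo
print(labeled[-1])  # ((9.5, 9.8), 'b')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;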

&lt;h3&gt;
  
  
  Reinforcement Learning
&lt;/h3&gt;

&lt;p&gt;In reinforcement learning, the setup is very different: the learning system, called an agent, observes the environment and learns from it by performing actions and receiving a reward for a good action or a penalty for a bad one. The resulting strategy, called a policy, defines which action the agent should choose in any given situation.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GZpEvKHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/j6uwg3rg1vywdhm7cb35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GZpEvKHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/j6uwg3rg1vywdhm7cb35.png" alt="Reinforcement Learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Incremental Learning
&lt;/h2&gt;

&lt;p&gt;Another criterion for classifying machine learning systems is their ability to learn incrementally from a stream of incoming data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Batch Learning
&lt;/h3&gt;

&lt;p&gt;This type of system needs to train on all of the available data, which takes more time and computational resources. It is also called offline learning because the system learns offline, is then launched into production, and runs without further learning. When we need a new version of the system, we must retrain the model from scratch on all the data (old &amp;amp; new), away from the production version, and then replace the deployed model with the new one.&lt;br&gt;
A simple example: you build a cat vs. not-cat image classifier trained on daytime cat images, and after some time you need to handle night images too. You then have to take all the night and day cat images, retrain the model, and deploy the new model in a new release.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SALJe7wC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/p9paq9j9nvf7f9t59i3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SALJe7wC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/p9paq9j9nvf7f9t59i3b.png" alt="Batch Learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Online Learning
&lt;/h3&gt;

&lt;p&gt;The idea of online learning is to train the system incrementally by feeding it data points sequentially, or in small groups called mini-batches. Online learning is great for systems that receive data as a continuous flow, need to adapt quickly to new changes, and don't care much about the distant past. The best-known example is a stock price prediction model, which learns on the fly to adapt to recent price changes while giving less weight to very old readings.&lt;br&gt;
One of the big challenges for this type of system is bad-quality data: if bad data is fed to the system, its performance will gradually decline. Examples include wrong readings in stock prices, or someone spamming a search engine in an attempt to rank higher in search results. To reduce this risk, we need to add a monitoring layer, such as an anomaly detection algorithm, that can detect abnormal data likely to hurt model performance, and switch learning off until data quality improves.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r5m1RSW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/x2c5oohm59th56lss3wl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r5m1RSW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/x2c5oohm59th56lss3wl.png" alt="Online Learning"&gt;&lt;/a&gt;&lt;/p&gt;
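&lt;p&gt;The incremental idea can be sketched with a single-weight model updated one data point at a time (the learning rate and the stream values are arbitrary numbers for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Online learning sketch: fit y = w * x one sample at a time with SGD
stream = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0)]  # (x, y) pairs

w = 0.0
lr = 0.05  # learning rate
for x, y in stream:
    error = w * x - y       # prediction error on this single point
    w = w - lr * error * x  # gradient step; no need to revisit old data

print(round(w, 2))  # w drifts towards 2, the slope of the stream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;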

&lt;h2&gt;
  
  
  Making Predictions
&lt;/h2&gt;

&lt;p&gt;The last factor that separates machine learning systems into two types is how the model generalizes to make predictions on new data it has never seen before.&lt;/p&gt;

&lt;h3&gt;
  
  
  Instance-based Learning
&lt;/h3&gt;

&lt;p&gt;To classify a new data point, the algorithm calculates the distance between the new point and all the points in the dataset, and predicts the class the new point belongs to based on its most similar known examples.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cZAU-CU1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/q4ijwt8utkidf05fm992.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cZAU-CU1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/q4ijwt8utkidf05fm992.png" alt="Instance Based Learning"&gt;&lt;/a&gt;&lt;/p&gt;
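&lt;p&gt;The idea can be sketched as a tiny nearest-neighbour classifier (the points and labels below are made up for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import math

# Labeled examples the system simply memorizes
train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def predict(point):
    # 1-NN: return the label of the closest stored example
    nearest = min(train, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(predict((1.1, 0.9)))  # cat
print(predict((5.1, 4.9)))  # dog
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;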

&lt;h3&gt;
  
  
  Model-based Learning
&lt;/h3&gt;

&lt;p&gt;Another way to generalize from a set of examples is to build a model of those examples, then use that model to make predictions. This is called model-based learning; "building the model" here means fitting an equation with tunable parameters, in contrast to instance-based learning, which relies on similarity as its main mechanism.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NDLPgf9c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/myn9kgbrduj0ega6csk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NDLPgf9c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/myn9kgbrduj0ega6csk2.png" alt="Model based Learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we look at the house price prediction problem and try to fit it into these types, we can find two scenarios for learning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;1st scenario: take the unknown house whose price we need to find, measure the similarity between its features and all labeled data points, then set the new house's price based on the most similar house in our data, or the average of the k most similar houses; this is called instance-based learning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;2nd scenario: build an equation from the features and use an algorithm to tune its parameters until it fits and generalizes the house price data, then substitute the new house's feature values into the equation to get its price; this is called model-based learning.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, all of these common categorizations are attempts to look at machine learning systems from multiple perspectives.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Python Programming Styles</title>
      <dc:creator>Mahmoud Ahmed</dc:creator>
      <pubDate>Fri, 08 Mar 2019 21:39:05 +0000</pubDate>
      <link>https://forem.com/mahmoudai/python-programming-styles-4o4m</link>
      <guid>https://forem.com/mahmoudai/python-programming-styles-4o4m</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0wh0ffy5o1irdlfxxtu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0wh0ffy5o1irdlfxxtu.jpg" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While reading an article about a new version of PyTorch, my eyes stopped on a sentence:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Intrinsically, there are two main characteristics of PyTorch that distinguish it         from other deep learning frameworks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Imperative Programming&lt;/li&gt;
&lt;li&gt;Dynamic Computation Graphing&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;I wanted to explore more about imperative programming, and found it clear and elegant, but what about the other programming styles, and can we write them in Python?&lt;br&gt;
From here I started my search journey into &lt;strong&gt;python programming styles&lt;/strong&gt;, which I'm sharing here.&lt;/p&gt;

&lt;p&gt;Initially, Python is a great language not only for its coding simplicity but because it differs from other languages in how little it restricts coding style: most other languages enforce a single style, which reduces flexibility for the programmer, but in Python the programmer can use many coding styles to achieve different effects.&lt;/p&gt;

&lt;p&gt;Here's an overview of Python coding styles:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.Procedural&lt;/strong&gt;&lt;br&gt;
    tasks proceed one step at a time, so you can follow the program's sequential flow; it is mostly used for sequencing, selection, and modularization. &lt;a href="https://www.101computing.net/sequencing-selection-iteration/" rel="noopener noreferrer"&gt;more reading about them&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_total(my_list):
    total=0
    for x in my_list:
        total+=x
    return total

print(get_total([1,2,3,4,5]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2.Functional&lt;/strong&gt;&lt;br&gt;
    it works like a math equation, and any form of state or mutable data is avoided; some developers consider it part of the procedural style. It supports solution-focused thinking (not what to &lt;em&gt;do&lt;/em&gt; but what to &lt;em&gt;accomplish&lt;/em&gt;), which is why academics often teach functional languages as first programming languages and why data scientists like it for recursion and lambdas.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from functools import reduce

def sum_nums(a, b):
    return a + b

print(reduce(sum_nums, [1, 2, 3, 4, 5]))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3.Imperative&lt;/strong&gt;&lt;br&gt;
    computations are performed as direct changes to the program state; it produces simple code and is very useful for data structure manipulation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my_list = [1, 2, 3, 4, 5]
total = 0
for x in my_list:
    total += x
print(total)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4.Object-oriented&lt;/strong&gt;&lt;br&gt;
    it simplifies code by using objects to model the real world and lets the programmer reuse code; encapsulation allows treating code as a black box, and inheritance makes it easier to extend functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ListOperations(object):
    def __init__(self, my_list):
        self.my_list = my_list
        self.total = 0

    def get_total(self):
        self.total = sum(self.my_list)

my_list = [1, 2, 3, 4, 5]
obj = ListOperations(my_list)
obj.get_total()
print(obj.total)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>python</category>
      <category>coding</category>
      <category>programming</category>
      <category>styles</category>
    </item>
  </channel>
</rss>
