<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Umukoro Emmanuel</title>
    <description>The latest articles on Forem by Umukoro Emmanuel (@iemmanuel104).</description>
    <link>https://forem.com/iemmanuel104</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F845884%2Ff2b10dbd-0fba-4ba8-949f-4210e9f6f0cc.jpg</url>
      <title>Forem: Umukoro Emmanuel</title>
      <link>https://forem.com/iemmanuel104</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/iemmanuel104"/>
    <language>en</language>
    <item>
      <title>BIG O NOTATION IN DATA STRUCTURES AND ALGORITHM</title>
      <dc:creator>Umukoro Emmanuel</dc:creator>
      <pubDate>Wed, 20 Apr 2022 22:05:12 +0000</pubDate>
      <link>https://forem.com/iemmanuel104/big-o-notation-in-python-kdk</link>
      <guid>https://forem.com/iemmanuel104/big-o-notation-in-python-kdk</guid>
      <description>&lt;p&gt;In computer science the Big O notation is described as a metric for analyzing system performance. it measures the run time of a function or code block as the input size increases, hence it determines the complexity of your algorithm using algebraic terms.&lt;br&gt;
This concept is peculiar to asymptotic notation as it was invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F502q3ucr0de7k49f6kpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F502q3ucr0de7k49f6kpt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The notation is shown above. There exist several asymptotic notations that describe the overall complexity of a computer algorithm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Big O&lt;/strong&gt; : As stated earlier, it gives an upper bound: the complexity will be "less than or equal to" a given function. It therefore describes the worst case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Big - Ω (Big Omega)&lt;/strong&gt; : This gives a lower bound: the complexity will be "greater than or equal to" a given function. It therefore describes the best case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Big - Θ (Big theta)&lt;/strong&gt; : This asymptotic notation bounds the complexity from both sides, between the worst and best case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yyijufsxbluqax5xx6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yyijufsxbluqax5xx6o.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Algorithmic complexity is a measure of how much time and how much memory space an algorithm requires to complete, given an input of size n. It can be viewed in two distinct sections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Time complexities of Algorithms&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90ls325lzshs4lmf0rg4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90ls325lzshs4lmf0rg4.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This signifies the total time required by an algorithm to run to completion. Time complexity is most commonly estimated by counting the number of elementary steps an algorithm performs to finish execution. Common run-time complexities include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;O(1) - Constant Time complexity: 
The run time does not change for any given input; it depends only on a constant number of operations, not on the input size. For example, accessing a single element of an array by index takes constant time, as only one operation is needed to locate it.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TimeComplex = [1,4,5,2,7,3]
TimeComplex[2] #accessing any element takes constant time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;O(N) - Linear Time complexity:
The run time grows as the size of the input grows; the time complexity is directly proportional to the size of the input data. It appears in algorithms that loop through the elements of an array, or that have to read their entire input sequentially. For example, a function that adds up all elements of a list requires time proportional to the length of the list.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TimeComplex = [1,4,5,2,7,3]
for i in TimeComplex:
    print(i)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;O(N^2) [&lt;em&gt;Oh N squared&lt;/em&gt;] - Quadratic Time complexity:
This represents an algorithm whose run time is directly proportional to the square of its input size. A common example is a nested for loop that visits every index of an array once for each index of the same array.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TimeComplex = [1,4,5,2,7,3]
for i in TimeComplex:
    for j in TimeComplex:
        print(i)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;O(LogN)  - Logarithmic Time complexity:
This represents an algorithm whose run time is proportional to the logarithm of its input size. Such an algorithm repeatedly halves the set it is working on to find a particular item, so it does not need to access all elements of its input. Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TimeComplex = [1,4,5,2,7,3]
i = 1
while i &amp;lt; len(TimeComplex):
    print(TimeComplex[i])
    i *= 2  #the index doubles each pass, so the loop runs O(LogN) times
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
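&lt;p&gt;Binary search is the classic O(LogN) algorithm: each comparison halves the remaining search range, so even a huge input needs only a handful of steps. A minimal sketch (the function name and sample list are my own illustration; note that binary search requires a sorted list, unlike the unsorted TimeComplex above):&lt;/p&gt;

```python
def binary_search(sorted_list, target):
    # Each iteration halves the remaining range, so the loop
    # runs at most O(LogN) times. The list must be sorted.
    low, high = 0, len(sorted_list) - 1
    while high >= low:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        if target > sorted_list[mid]:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return -1  # target not found

print(binary_search([1, 2, 3, 4, 5, 7], 5))  # index 4
```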



&lt;ul&gt;
&lt;li&gt;O(2^n) - Exponential Time Complexity:
The growth rate doubles with each addition to the input size (n); such algorithms often iterate through all subsets of the input elements, and are commonly referred to as brute-force algorithms.
This is obviously not an optimal way of performing a task, since the run time explodes as the input grows. Brute-force algorithms are used in cryptography as attack methods that defeat password protection by trying candidate strings until they find the correct password that unlocks the system.&lt;/li&gt;
&lt;/ul&gt;
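&lt;p&gt;A classic illustration of O(2^n) growth is the naive recursive Fibonacci function, which recomputes the same subproblems over and over. This sketch is mine, following the style of the examples above:&lt;/p&gt;

```python
def fib(n):
    # Each call spawns two further calls, so the call tree
    # roughly doubles with every increment of n: O(2^n) time.
    if n in (0, 1):
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55 -- fine, but fib(50) would take far too long
```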

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Space Complexity of Algorithms&lt;/strong&gt;
Space complexity is the amount of working storage an algorithm needs: the memory used by the algorithm (including its input values) to execute and produce its result.
It is a parallel concept to time complexity and describes how the required storage grows as the input grows. To calculate the space complexity of an algorithm, only the data space is considered; the data space is the amount of memory used by its variables and constants.&lt;/li&gt;
&lt;/ol&gt;
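&lt;p&gt;As a rough illustration of data space, compare two ways of summing the numbers 1 to n. This comparison is my own sketch; the function names are illustrative:&lt;/p&gt;

```python
def sum_constant_space(n):
    # A fixed number of variables regardless of n: O(1) space.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_linear_space(n):
    # Materialises a list of n elements first: O(n) space.
    numbers = list(range(1, n + 1))
    return sum(numbers)
```

Both return the same answer; only the memory they hold while running differs.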

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96ank2msqnzj3l2pgrab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96ank2msqnzj3l2pgrab.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Space complexity, like time complexity, plays a crucial role in determining the efficiency of an algorithm or program. If an algorithm takes a lot of time, you can still wait for it to finish and get the desired output; but if a program needs more memory than the machine can provide, it simply cannot run to completion.&lt;/p&gt;

</description>
      <category>python</category>
      <category>datastructure</category>
      <category>algorithms</category>
      <category>100daysofcode</category>
    </item>
    <item>
      <title>RECURSIVE ALGORITHMS IN PYTHON</title>
      <dc:creator>Umukoro Emmanuel</dc:creator>
      <pubDate>Mon, 11 Apr 2022 21:15:00 +0000</pubDate>
      <link>https://forem.com/iemmanuel104/recursive-algorithms-in-python-190h</link>
      <guid>https://forem.com/iemmanuel104/recursive-algorithms-in-python-190h</guid>
      <description>&lt;p&gt;&lt;strong&gt;RECURSION DEFINITION&lt;/strong&gt;&lt;br&gt;
Recursion is a computing technique where by a function calls itself during execution and add data to a stack memory till it reaches the end of the recursive loop and returns back the required output. Recursive algorithms are suited for problem whose data structure can be scaled down to smaller versions and they provide and alternative to the conventional iterative methods of approaching problems. &lt;br&gt;
A visual example of how recursive algorithms work is the Russian doll, &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--puPbLLsP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0iqo7wlzb0nu9ep2ouvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--puPbLLsP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0iqo7wlzb0nu9ep2ouvm.png" alt="Image description" width="732" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As depicted above, the Russian doll contains a smaller version of itself all the way down to the smallest doll. Recursive algorithms work the same way: they solve problems that can be fragmented into smaller pieces, except that in their case each stage is stored in a STACK in memory and can be revisited as the calls return. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pAd9uN_A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eyseegcdyb8jm2t8vge2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pAd9uN_A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eyseegcdyb8jm2t8vge2.png" alt="Image description" width="337" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RECURSION VS ITERATION&lt;/strong&gt;&lt;br&gt;
 The major differences between recursive and iterative algorithms fall under three criteria:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SPACE USAGE&lt;/strong&gt;: Recursive algorithms are less space efficient than iterative ones, because iterative algorithms require no extra stack memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TIME EFFICIENCY&lt;/strong&gt;: Iterative algorithms are generally more time efficient than recursive ones, because recursive calls spend time pushing and popping stack frames.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CODE SIZE&lt;/strong&gt;: Recursion reduces the size of the code while iteration lengthens it, so recursive algorithms are often easier to write.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
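&lt;p&gt;The trade-offs above can be seen by writing the same sum two ways. This is my own sketch for comparison; both function names are illustrative:&lt;/p&gt;

```python
def sum_recursive(n):
    # Each call adds a frame to the call stack: O(n) extra space.
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    # A single loop variable; no extra stack frames: O(1) extra space.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```

The recursive version is shorter to state, but for large n it can exhaust the stack (in Python, hitting the recursion limit), while the iterative version runs in constant extra space.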

&lt;p&gt;&lt;u&gt;3 STEPS FOR WRITING A RECURSIVE ALGORITHM&lt;/u&gt;&lt;br&gt;
Using the example of a recursive algorithm that determines the factorial of a number:&lt;/p&gt;

&lt;p&gt;i. BUILD A RECURSIVE CASE - THIS DESCRIBES THE FLOW (GOVERNING EQN AND LOGIC). &lt;br&gt;
&lt;code&gt;n*factorial(n-1)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;ii. SPECIFY A BASE CASE - THIS IS A STOPPING CRITERION TO PREVENT AN INFINITE LOOP&lt;br&gt;
&lt;code&gt;if n in [0,1]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;iii. IDENTIFY THE UNINTENTIONAL CASE - THIS PLACES A CONSTRAINT ON THE ALGORITHM&lt;br&gt;
&lt;code&gt;assert n&amp;gt;=0 and int(n) == n, "The number must be a positive integer only"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7gJeMugA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eth2gtgap1tqnxvq7h3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7gJeMugA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eth2gtgap1tqnxvq7h3f.png" alt="Image description" width="533" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code block above returns the factorial of the parameter 12 which is passed into the function.&lt;/p&gt;

&lt;p&gt;The function calls itself each time, adding each result to the stack memory until the base case is reached.&lt;/p&gt;
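&lt;p&gt;Putting the three steps together, the factorial function looks roughly like this (reconstructed from the steps above; the exact code in the screenshot may differ slightly):&lt;/p&gt;

```python
def factorial(n):
    # iii. Unintentional case: constrain input to non-negative integers.
    assert n >= 0 and int(n) == n, "The number must be a positive integer only"
    # ii. Base case: stop the recursion at 0 or 1.
    if n in [0, 1]:
        return 1
    # i. Recursive case: n * factorial(n - 1).
    return n * factorial(n - 1)

print(factorial(12))  # 479001600
```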

</description>
      <category>python</category>
      <category>algorithms</category>
      <category>recursion</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
