<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Chiiraq</title>
    <description>The latest articles on Forem by Chiiraq (@chiiraq).</description>
    <link>https://forem.com/chiiraq</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3708644%2F64c63394-fd92-4ea7-bdce-9b012e00916f.png</url>
      <title>Forem: Chiiraq</title>
      <link>https://forem.com/chiiraq</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/chiiraq"/>
    <language>en</language>
    <item>
      <title>The Data Pipeline Dilemma: ETL vs ELT</title>
      <dc:creator>Chiiraq</dc:creator>
      <pubDate>Fri, 17 Apr 2026 19:49:27 +0000</pubDate>
      <link>https://forem.com/chiiraq/the-data-pipeline-dilemma-etl-vs-etl-3jn3</link>
      <guid>https://forem.com/chiiraq/the-data-pipeline-dilemma-etl-vs-etl-3jn3</guid>
      <description>&lt;h3&gt;
  
  
  Source
&lt;/h3&gt;

&lt;p&gt;The interdependence between warehousing and ETL has come a long way, and to understand it, a brief recap of history is essential. In the 1970s and 1980s, companies had a problem: their data was scattered everywhere, with each department guarding its own information secretively. A man named William Inmon saw this chaos and proposed a radical fix: one central home for all company data, which he called a data warehouse, where information would be organized, consistent and always available. But getting data into this new home was no easy feat. Early on, it was brutally manual work, with developers writing mountains of code just to move data from one place to another, and mistakes were everywhere. Slowly but surely, this process grew smarter and more automated, eventually evolving into what we now call ETL: a system that not only moves data but cleans and transforms it along the way, turning raw, scattered information into something the warehouse can actually use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure relationship
&lt;/h3&gt;

&lt;p&gt;While ETL provides the foundation for data movement, its behavior and importance shift significantly depending on where that data is headed. The destination is a major factor to consider, and in the world of data infrastructure there are three primary destinations worth understanding: the database, the data warehouse, and the data lake. Each one has a different relationship with ETL, and tracing that relationship reveals a great deal about how modern organizations manage and make sense of their information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Database:&lt;/strong&gt; A database can serve as either the starting point or the destination of the pipeline: data may be extracted from it, or transformed data may be loaded into it.&lt;br&gt;
&lt;strong&gt;2. Data warehouse:&lt;/strong&gt; A data warehouse is the storage repository where structured, processed data is saved after extraction and transformation, ready for analysis and business intelligence.&lt;br&gt;
&lt;strong&gt;3. Data Lake:&lt;/strong&gt; A data lake is a centralized repository that stores vast amounts of raw, unstructured, semi-structured, and structured data in its native format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; The core difference between a data lake and a warehouse is:&lt;br&gt;
&lt;strong&gt;Data Warehouse (ETL):&lt;/strong&gt; uses strict schema-on-write; it demands that data arrive already cleaned, structured and transformed before it is loaded in, while:&lt;br&gt;
&lt;strong&gt;Data Lake (ELT/ETL):&lt;/strong&gt; uses schema-on-read; data is ingested raw and transformed as needed, offering greater flexibility and scalability, because the lake is built to store data in its raw form.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This brings us to our order of the day, begging the question: what is the difference between ETL and ELT, and which one is recommended?&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What is entailed in ETL?
&lt;/h4&gt;

&lt;p&gt;The primary difference between ETL and ELT lies in the order of operations. In ETL, the process starts with Extraction, followed by Transformation and lastly Loading. For curious readers, I will highlight what happens at each stage; the stages themselves are much the same between the two processes, and the difference lies in the order of execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extract:&lt;/strong&gt; Here, data is gathered from different sources, e.g. databases, CRM/ERP applications, APIs or flat files, and is often moved to a temporary staging area for processing.&lt;br&gt;
&lt;strong&gt;Transform:&lt;/strong&gt; The raw data is processed to ensure quality and compatibility. This involves cleaning, filtering, aggregating and reformatting to fit the target system's schema.&lt;br&gt;
&lt;strong&gt;Load:&lt;/strong&gt; The transformed data is written into the final target system such as a cloud data warehouse in this instance.&lt;/p&gt;
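&lt;p&gt;The three stages above can be sketched in a few lines of Python. This is a minimal illustration rather than a production pipeline: the in-memory CSV source, the cleanup rules and the SQLite target are all stand-ins assumed for demonstration.&lt;/p&gt;

```python
import csv
import io
import sqlite3

# Extract: read raw records from a source (an in-memory CSV stands in for a real feed)
raw = io.StringIO("id,amount,region\n1, 100 ,ke\n2, 250 ,ug\n")
rows = list(csv.DictReader(raw))

# Transform: clean and reshape BEFORE loading -- the defining trait of ETL
transformed = [
    (int(r["id"]), float(r["amount"].strip()), r["region"].upper())
    for r in rows
]

# Load: write the already-clean rows into the target table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", transformed)
print(conn.execute("SELECT region, amount FROM sales").fetchall())
```

&lt;p&gt;Only cleaned data ever reaches the destination, which is exactly why raw data cannot later be requeried in a pure ETL setup.&lt;/p&gt;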

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4vadrq6f3918l0zjxcy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4vadrq6f3918l0zjxcy.jpg" alt="ETL visual" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What is entailed in ELT?
&lt;/h4&gt;

&lt;p&gt;The ELT process is similar to ETL; as stated initially, the difference lies in the order of the steps. In ELT, loading precedes transformation, which allows data to be uploaded in its raw format.&lt;/p&gt;
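&lt;p&gt;Using the same toy data, the ELT ordering looks like this: the raw rows land in the destination untouched, and the transformation is expressed afterwards in the destination's own SQL engine. Again, SQLite and the table names are illustrative assumptions, not a prescribed stack.&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract + Load: raw, untouched records land in the destination first
conn.execute("CREATE TABLE raw_sales (id TEXT, amount TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO raw_sales VALUES (?, ?, ?)",
    [("1", " 100 ", "ke"), ("2", " 250 ", "ug")],
)

# Transform: done inside the destination, on demand, using its own engine
conn.execute("""
    CREATE TABLE sales AS
    SELECT CAST(id AS INTEGER)        AS id,
           CAST(TRIM(amount) AS REAL) AS amount,
           UPPER(region)              AS region
    FROM raw_sales
""")
print(conn.execute("SELECT region, amount FROM sales ORDER BY id").fetchall())
```

&lt;p&gt;Because the raw table survives inside the destination, it can be requeried and re-transformed endlessly, which is the flexibility usually credited to ELT.&lt;/p&gt;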

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7a4d49inf5vkcdrq47h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7a4d49inf5vkcdrq47h.jpg" alt="ELT Visual" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Difference between ETL and ELT
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;ETL&lt;/th&gt;
&lt;th&gt;ELT&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Definition&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Data is extracted from a source system, transformed on a secondary processing server, and loaded into a destination system.&lt;/td&gt;
&lt;td&gt;Data is extracted from a source system, loaded into a destination system, and transformed inside the destination system.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Extract&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Raw data is extracted using API connectors.&lt;/td&gt;
&lt;td&gt;Raw data is extracted using API connectors.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transform&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Raw data is transformed on a processing server.&lt;/td&gt;
&lt;td&gt;Raw data is transformed inside the target system.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Load&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Transformed data is loaded into a destination system.&lt;/td&gt;
&lt;td&gt;Raw data is loaded directly into the target system.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Speed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Time-intensive; data is transformed before loading into a destination system.&lt;/td&gt;
&lt;td&gt;Faster by comparison; data is loaded directly into a destination system and transformed in parallel.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code-Based Transformations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Performed on secondary server. Best for compute-intensive transformations and pre-cleansing.&lt;/td&gt;
&lt;td&gt;Transformations performed in-database; simultaneous load and transform; speed and efficiency.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maturity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Modern ETL has existed for 20+ years; its practices and protocols are well known and documented.&lt;/td&gt;
&lt;td&gt;ELT is a newer form of data integration; less documentation and experience.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Privacy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pre-load transformation can eliminate PII (helps for HIPAA).&lt;/td&gt;
&lt;td&gt;Direct loading of data requires more privacy safeguards.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintenance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Secondary processing server adds to the maintenance burden.&lt;/td&gt;
&lt;td&gt;With fewer systems, the maintenance burden is reduced.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Costs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Separate servers can create cost issues.&lt;/td&gt;
&lt;td&gt;Simplified data stack costs less.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Requeries&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Data is transformed before entering destination system; therefore raw data cannot be requeried.&lt;/td&gt;
&lt;td&gt;Raw data is loaded directly into destination system and can be requeried endlessly.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Lake Compatibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No, ETL does not have data lake compatibility.&lt;/td&gt;
&lt;td&gt;Yes, ELT does have data lake compatibility.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Output&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Structured (typically).&lt;/td&gt;
&lt;td&gt;Structured, semi-structured, unstructured.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Volume&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Ideal for small data sets with complicated transformation requirements.&lt;/td&gt;
&lt;td&gt;Ideal for large datasets that require speed and efficiency.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Real-Time Analytics for Business Insights:&lt;/strong&gt; Businesses need up-to-the-minute data to make quick decisions in dynamic environments. With real-time ETL processes, data is extracted, transformed, and loaded as it’s generated, allowing companies to respond to market changes, optimize supply chains, and track customer behaviors instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Data Migration for System Upgrades:&lt;/strong&gt; As businesses grow, they often need to migrate data from legacy systems to modern platforms. ETL plays a key role in data migration, ensuring data is moved from one system to another without losing integrity or consistency. This process includes extracting data from the old system, transforming it to meet the new system’s requirements, and loading it into the new environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Customer Personalization in E-Commerce:&lt;/strong&gt; Customer data is a goldmine for e-commerce businesses looking to offer personalized shopping experiences. ETL processes can integrate and transform customer data from multiple touchpoints, such as websites, mobile apps, and social media, into a single profile. This enables e-commerce companies to offer personalized product recommendations, marketing campaigns, and customer experiences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Predictive Maintenance in Manufacturing:&lt;/strong&gt; Predictive maintenance is critical to reducing downtime and preventing costly breakdowns. ETL processes collect and transform data from IoT sensors and machinery to predict when equipment needs maintenance. This helps manufacturers reduce operational disruptions and extend the life of machinery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Ensuring Compliance and Data Governance:&lt;/strong&gt; Businesses that handle sensitive data, such as in healthcare or finance, must comply with strict regulatory requirements. ETL processes help ensure that data is transformed and stored in compliance with regulations like GDPR, HIPAA, and CCPA. ETL can also be used to implement data governance policies, ensuring that only authorized personnel have access to specific data sets.&lt;/p&gt;

&lt;p&gt;All said and done, it should now be evident which process has the upper hand based on the evidence and use cases provided: ELT. Why, you ask? This is due to its superior speed, scalability and flexibility, particularly for big data and real-time analytics. As always, until next time, keep your data clean and your terminal keen. Peace, ma'dudes.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>dataengineering</category>
      <category>architecture</category>
      <category>database</category>
    </item>
    <item>
      <title>From Chaos to Dashboard: How Power BI Analysts Turn Data Disasters into Decisions That Actually Stick</title>
      <dc:creator>Chiiraq</dc:creator>
      <pubDate>Mon, 06 Apr 2026 18:17:20 +0000</pubDate>
      <link>https://forem.com/chiiraq/from-chaos-to-dashboard-how-power-bi-analysts-turn-data-disasters-into-decisions-that-actually-2cg7</link>
      <guid>https://forem.com/chiiraq/from-chaos-to-dashboard-how-power-bi-analysts-turn-data-disasters-into-decisions-that-actually-2cg7</guid>
      <description>&lt;h2&gt;
  
  
  INTRODUCTION
&lt;/h2&gt;

&lt;p&gt;As a data-related practitioner, be it in engineering, science or whatever your cup of tea is, we often receive files that look like a dataset but act like a hostage situation. But here's the thing about being a data professional: there is no negotiating team coming, and the hostage negotiator is you. The hostage is your KPIs. And the ransom? Your ability to turn this crime scene of a spreadsheet into something stakeholders can nod at during a Tuesday morning meeting. Power BI has proven to be a resourceful tool for translating this messy data into actionable insights through a couple of tools it makes available. The process can simply be broken down into three key stages that we will get into in a moment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data Cleaning.&lt;/li&gt;
&lt;li&gt;Data Enrichment.&lt;/li&gt;
&lt;li&gt;Data Visualization.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. Data Cleaning.
&lt;/h3&gt;

&lt;p&gt;This being the very first stage and a vital bit of the process, the first step in this phase is to check the validity of the data given. If you cannot trust the data, then the outputs and visualizations will all be falsehoods decorated in fancy charting. The ideal tool for this task is &lt;strong&gt;Power Query&lt;/strong&gt;. So what is Power Query? It is commonly defined as a powerful ETL (Extract, Transform, Load) engine that cleans, shapes, and connects to data from hundreds of sources before loading it into the data model. Here is an image to help visualize it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohq367zih5cc8sxru87q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohq367zih5cc8sxru87q.png" alt="Power Query Image visualisation" width="800" height="187"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Some of the tasks that may occur here include, but are not limited to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Removing duplicates&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3xw6dwr3l1rlhvu6ujv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3xw6dwr3l1rlhvu6ujv.png" alt="removing duplicates from the dataset in a column." width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. Changing data types, e.g. converting money fields to their respective currency types&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj8ae0aipvy5o04stxxd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj8ae0aipvy5o04stxxd.png" alt="Changing data types from the dataset in a column." width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Replacing values, e.g. replacing missing fields with null values&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh75o0cmmrb30flc5zyul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh75o0cmmrb30flc5zyul.png" alt="Replacing Values from the dataset in a column." width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. Splitting columns, e.g. a combined name can be split into first name, middle name and surname.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmq12983m9arw5i7yz1zf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmq12983m9arw5i7yz1zf.png" alt="splitting Values from the dataset in a column." width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;
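&lt;p&gt;Power Query does these steps point-and-click, but for readers who prefer code, the same four tasks look roughly like this in pandas. The column names and values here are invented for illustration.&lt;/p&gt;

```python
import pandas as pd

df = pd.DataFrame({
    "full_name": ["Ada Lovelace", "Ada Lovelace", "Grace Hopper"],
    "amount": ["100", "100", None],
})

df = df.drop_duplicates()                   # 1. remove duplicates
df["amount"] = pd.to_numeric(df["amount"])  # 2. change the data type (text -> number)
df["amount"] = df["amount"].fillna(0)       # 3. replace missing values
# 4. split a combined name column into parts
df[["first_name", "surname"]] = df["full_name"].str.split(" ", expand=True)
print(df)
```

&lt;p&gt;Just like the applied steps in Power Query, each line is a repeatable transformation rather than a one-off manual fix.&lt;/p&gt;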

&lt;h3&gt;
  
  
  2. Data Enrichment
&lt;/h3&gt;

&lt;p&gt;In this phase the cleaned data is turned into the actual insights needed through enrichment. That begs the question: what is data enrichment? Data enrichment is the process of enhancing your cleaned dataset by adding context, relationships and calculated meaning that the raw data alone could not provide. Cleaning makes the data trustworthy; enrichment makes it useful.&lt;br&gt;
The tool for this phase is Data Analysis Expressions (DAX), a library of functions and operators that can be combined to build formulas and expressions in Power BI. Some of the enrichment techniques include the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Building Relationships:
Connecting multiple tables together in Power BI's Model View so data can flow and interact correctly across your report.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhzyxd11675tuk2qegel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhzyxd11675tuk2qegel.png" alt="Defining relationships" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp4zoyl2w39vuf4f39x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp4zoyl2w39vuf4f39x2.png" alt="Mapping the relationships created" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Creating a Date Table:&lt;br&gt;
Building a dedicated calendar table that unlocks time-based analysis like month-over-month comparisons, year-to-date totals, and rolling averages.&lt;br&gt;
This is done by adding a new table and then defining the custom dates for your table, e.g. &lt;code&gt;DateTable = CALENDAR(DATE(2023, 1, 1), DATE(2025, 12, 31))&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Calculated Columns (DAX):&lt;br&gt;
Adding new columns derived from existing data — categorizing, flagging, or combining fields to give your dataset more descriptive depth.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Measures (DAX):&lt;br&gt;
Dynamic calculations that respond to filters and context — KPIs, aggregations, variances, and time intelligence functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Conditional Columns &amp;amp; Grouping (Power Query)&lt;br&gt;
Rule based categorization and summarization applied before the data even enters the model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Merging &amp;amp; Appending Queries&lt;br&gt;
Joining or stacking tables in Power Query to consolidate data from multiple sources into a unified, analysis-ready structure.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
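&lt;p&gt;For intuition, the DAX CALENDAR idea has a close analog in pandas: one row per calendar day between two dates, plus derived columns that enable the time-based analysis described above. This is a sketch of the concept, not Power BI itself.&lt;/p&gt;

```python
import pandas as pd

# One row per day, mirroring CALENDAR(DATE(2023, 1, 1), DATE(2025, 12, 31))
dates = pd.DataFrame({"Date": pd.date_range("2023-01-01", "2025-12-31")})

# Derived columns are what make month-over-month and year-to-date analysis possible
dates["Year"] = dates["Date"].dt.year
dates["Month"] = dates["Date"].dt.month
dates["MonthName"] = dates["Date"].dt.month_name()
print(len(dates))  # 1096 days (2024 is a leap year)
```

&lt;p&gt;Relating this table to your fact data on the date key is what unlocks the time-intelligence measures mentioned above.&lt;/p&gt;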

&lt;h3&gt;
  
  
  3. Data Visualisation
&lt;/h3&gt;

&lt;p&gt;This is the phase whereby the data is organised into consumable metrics for the audience. This is achieved through use of common graphics, such as charts, plots, infographics and even animations often organised into a dashboard. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkji6ahfg7846mi41z2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkji6ahfg7846mi41z2i.png" alt="Dashboard" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;What Power BI gives you is leverage. It turns a three-week manual nightmare into a repeatable, auditable, scalable process. It means the next time that file lands in your inbox, you're not panicking — you're executing. Until next time, keep your data clean and your terminal keen. Peace ma'dudes.&lt;/p&gt;

</description>
      <category>data</category>
      <category>powerfuldevs</category>
      <category>dataanalytics</category>
      <category>datascience</category>
    </item>
    <item>
      <title>stacy okoth</title>
      <dc:creator>Chiiraq</dc:creator>
      <pubDate>Mon, 06 Apr 2026 11:20:27 +0000</pubDate>
      <link>https://forem.com/chiiraq/stacy-okoth-327b</link>
      <guid>https://forem.com/chiiraq/stacy-okoth-327b</guid>
      <description></description>
    </item>
    <item>
      <title>Schema Design Patterns: Because Even Data Needs Good Architecture</title>
      <dc:creator>Chiiraq</dc:creator>
      <pubDate>Mon, 02 Feb 2026 04:00:03 +0000</pubDate>
      <link>https://forem.com/chiiraq/schema-design-patterns-because-even-data-needs-good-architecture-139n</link>
      <guid>https://forem.com/chiiraq/schema-design-patterns-because-even-data-needs-good-architecture-139n</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;INTRODUCTION&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the ever-growing world of data, information exists in absolute chaos: records scattered across multiple systems, each telling a slightly different version of the same story. Left untouched, data cannot organize itself into neat, logical structures; instead it duplicates and contradicts itself, and yet, from this chaos, businesses need answers. Terrifying as it may be, this is the natural state of data: raw, unstructured and exponentially growing. This is where data experts come into play, to bring order to the chaos. And their most powerful tool, you ask? The schema!&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What exactly is data modelling??&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;According to Joe Reis and Matt Housley (no, they're not made-up dudes; they are the authors of Fundamentals of Data Engineering):&lt;br&gt;
"This is the process of creating a visual representation of either a whole information system or part of it to communicate connections between data points and structures."&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is a Schema??&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A schema is the organization or structure of a database, defining how data is arranged and how the relations among the data are associated. A well-designed schema is the foundation of query performance and data integrity in analytical systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is the importance of good data modelling??&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Good data modelling ensures:&lt;br&gt;
a) &lt;strong&gt;&lt;em&gt;Improved database performance&lt;/em&gt;&lt;/strong&gt;: Statistical research has shown that well-designed data models can improve report performance by up to 90%. I don't know about you, but to me those are astronomical figures. Good models also make it easier to find new opportunities for optimization and are equally easier to diagnose.&lt;/p&gt;

&lt;p&gt;b) &lt;strong&gt;&lt;em&gt;Improved application quality&lt;/em&gt;&lt;/strong&gt;: data modelling gives your organisation a clear vision of how data can meet your business needs.&lt;/p&gt;

&lt;p&gt;c) &lt;strong&gt;&lt;em&gt;Improves data quality&lt;/em&gt;:&lt;/strong&gt; the data modelling process establishes rules for monitoring data quality and identifies any redundancies or omissions, eliminating the hassle of cleaning large data sets.&lt;/p&gt;

&lt;p&gt;d) &lt;strong&gt;&lt;em&gt;Enables better documentation&lt;/em&gt;:&lt;/strong&gt; it enables consistent documentation, which simplifies database maintenance while preserving operational efficiency.&lt;/p&gt;

&lt;p&gt;e) &lt;strong&gt;Saves time and money:&lt;/strong&gt; it empowers businesses to achieve quicker times to market by catching errors early.&lt;/p&gt;

&lt;p&gt;Note: these are just some of the perks that great data modelling provides; the list goes on and on. I could keep going, but let's get to the juicy part of the steak, no?&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;SCHEMAS&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In schemas, database tables have a primary key or a foreign key, which acts as a unique identifier for individual entries in a table. These keys are used in SQL statements to join tables together, creating a unified view of information. Schema diagrams are particularly helpful in showing the relationships between tables, and they let analysts see which keys they should join on.&lt;/p&gt;

&lt;p&gt;While several schemas exist, we will focus primarily on the star schema and the snowflake schema. Why, you ask? The two represent the optimal design patterns for the vast majority of analytical workloads in relational database management systems and Power BI.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;star schema&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A star schema is a type of relational database schema composed of a single, central fact table surrounded by dimension tables. It can have any number of dimension tables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuj6iwu4qfakvn9okj5n.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuj6iwu4qfakvn9okj5n.gif" alt="Star schema featuring a many to one relationship" width="500" height="250"&gt;&lt;/a&gt;&lt;/p&gt;
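&lt;p&gt;A minimal star schema can be spelled out directly in SQL (run here through Python's sqlite3). The table and column names are invented for illustration; the point is the fact table's foreign key joining to the dimension table's primary key.&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Dimension table: descriptive attributes with a primary key
conn.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")
# Fact table: numeric measures plus a foreign key pointing at the dimension
conn.execute("""
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount REAL
    )
""")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget')")
conn.executemany("INSERT INTO fact_sales VALUES (?, 1, ?)", [(10, 5.0), (11, 7.5)])

# The join along the key is exactly what schema diagrams exist to make obvious
total = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY p.name
""").fetchone()
print(total)  # ('Widget', 12.5)
```

&lt;p&gt;Adding more dimension tables (dates, stores, customers) around the same fact table is what gives the star its shape.&lt;/p&gt;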

&lt;h4&gt;
  
  
  &lt;strong&gt;snowflake schema&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The snowflake schema consists of one fact table connected to many dimension tables, which can themselves be connected to further dimension tables through many-to-one relationships.&lt;br&gt;
Tables in a snowflake schema are usually normalized to the third normal form, and each dimension table represents exactly one level in a hierarchy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxi3hzie7un3afiir6c2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxi3hzie7un3afiir6c2.gif" alt="Snowflake schema" width="534" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;starflake schema&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A starflake schema is a combination of a star schema and a snowflake schema: it is a snowflake schema in which only some of the dimension tables have been normalized, removing redundancies in those dimensions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvt7z6brmef60lbaioq1o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvt7z6brmef60lbaioq1o.gif" alt="Starflake schema" width="659" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TO NOTE:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Fact table:&lt;/em&gt; the central, primary table in a star schema that stores quantitative, numerical data.&lt;br&gt;
&lt;em&gt;Dimension table:&lt;/em&gt; a table that stores the descriptive, textual or contextual data about business entities.&lt;br&gt;
&lt;em&gt;Relationship:&lt;/em&gt; a logical link between two or more tables that share common data, primarily established using primary and foreign keys. The main types are one-to-one, one-to-many and many-to-many.&lt;/p&gt;

&lt;p&gt;Throughout this article, we have covered the journey from absolute chaos to refined structures and schemas through good modelling, but real mastery comes from practice and making design decisions. Every schema you design teaches you something. Every relationship you define deepens your understanding of how data flows through business processes. Until next time, keep your data clean and your terminal keen. Peace, ma'dudes.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;citations:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;a) Reis, J., &amp;amp; Housley, M. (2022). Fundamentals of Data Engineering: Plan and Build Robust Data Systems. O'Reilly Media, p. 156.&lt;br&gt;
b) Kleppmann, M. (2017). Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems. O'Reilly Media, p. 39.&lt;/p&gt;

</description>
      <category>database</category>
      <category>schema</category>
      <category>datascience</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>Introduction to Linux for Data Engineers, Including Practical Use of Vi and Nano with Examples</title>
      <dc:creator>Chiiraq</dc:creator>
      <pubDate>Sun, 01 Feb 2026 19:16:51 +0000</pubDate>
      <link>https://forem.com/chiiraq/introduction-to-linux-for-data-engineers-including-practical-use-of-vi-and-nano-with-examples-5a75</link>
      <guid>https://forem.com/chiiraq/introduction-to-linux-for-data-engineers-including-practical-use-of-vi-and-nano-with-examples-5a75</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When most people hear the term Linux, the imagery that comes to mind is quite often a room full of tech-savvy geeks hunched over their keyboards, typing away in nerd-anese. But what if I told you that Linux is way more than just a playground for programmers? In the vast data cosmos, Linux is largely the backbone that supports the data-driven decisions powering thousands of businesses today. Let's dive into breaking down this operating system into bite-sized chunks for any aspiring or beginning data engineer, shall we?&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is Linux important for data engineers?
&lt;/h2&gt;

&lt;p&gt;Linux can be considered the bread and butter of data engineering, offering incredible performance and flexibility while at the same time having a steep learning curve: it is command-line dependent, forcing you to get hands-on with commands and functions, as opposed to the usual operating system that simplifies everything into user- and beginner-friendly icons and point-and-click navigation. So what exactly are the perks of using Linux as a data engineer?&lt;/p&gt;

&lt;p&gt;1.&lt;strong&gt;Perfomance&lt;/strong&gt;: For it to be compatible to data engineers, it is essential that is has the capability to handle large volumes of data and in record time.&lt;br&gt;
2.&lt;strong&gt;Compatibility&lt;/strong&gt;: a large number of data engineering tools and frameworks such as Apache hadoop, Spark,fink and loads of other varieties run natively on linux making it borderline unbeatable.&lt;br&gt;
3.&lt;strong&gt;Scalability&lt;/strong&gt;: data is an ever growing entity and thus demands that its environment is just as flexible and capable of adapting to increased workloads.&lt;br&gt;
4.&lt;strong&gt;Open Source&lt;/strong&gt;: linux allows for engineers to use and customize the system to their needs.&lt;br&gt;
5.&lt;strong&gt;Community Support&lt;/strong&gt;: When a technology has many users, you will find abundant learning materials, discussion and help forums readily available.&lt;/p&gt;

&lt;p&gt;Basic Linux commands form the foundation of a data engineer's career, and understanding them is essential for working with data systems. Some of these commands include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pwd - print working directory: shows the current working directory.&lt;/li&gt;
&lt;li&gt;ls - list: shows files and directories in the current directory.&lt;/li&gt;
&lt;li&gt;cd - change directory: used to navigate between directories.&lt;/li&gt;
&lt;li&gt;mkdir - make directory: used to create a new directory.&lt;/li&gt;
&lt;li&gt;rm - remove: used to delete files or directories.&lt;/li&gt;
&lt;li&gt;touch - creates an empty file in the current directory.&lt;/li&gt;
&lt;li&gt;cat - displays the contents of a file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The commands above will recur frequently in your day-to-day operations. Proficiency builds over time, and consistent exposure to the system will enhance your ability to work efficiently.&lt;/p&gt;
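
&lt;p&gt;A short terminal session ties these commands together; the directory and file names below are made up for illustration:&lt;/p&gt;

```shell
mkdir -p demo_data          # make a new directory
cd demo_data                # change into it
pwd                         # print the current working directory
touch sales.csv             # create an empty file
echo "id,amount" > sales.csv
cat sales.csv               # display the file's contents
ls                          # list files in the directory
cd ..                       # step back out of the directory
```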
&lt;h2&gt;
  
  
  Text Editors
&lt;/h2&gt;

&lt;p&gt;Linux makes further use of text editors, but for today's article we shall briefly dive into two: &lt;strong&gt;&lt;em&gt;nano&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;vi&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Nano
&lt;/h4&gt;

&lt;p&gt;This is a straightforward, user-friendly command line text editor designed for ease of use. It displays its commands at the bottom of the screen, making it more accessible for beginners, and is commonly used for quick file edits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpycbd56mswu7fsyqj8tk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpycbd56mswu7fsyqj8tk.png" alt="visual for the commands at the bottom at the screen:" width="800" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following commands are various use cases for the nano text editor:&lt;/p&gt;
&lt;h5&gt;
  
  
  Creating a file using nano:
&lt;/h5&gt;

&lt;p&gt;To create a file using nano, type the command "nano" followed by the file name and its extension, e.g.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano testfile.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NB: .txt is the file extension; use whichever extension suits the file being created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqt5zex0et8oely1znwt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqt5zex0et8oely1znwt.png" alt="The image shows the created file under the given directory." width="712" height="43"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NOTE: Most commands in nano are executed using key combinations, typically involving the Control (CTRL) key or the Alternate (ALT) key, rather than single-key commands. Some of the possible key combinations include but are not limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CTRL + Y&lt;/strong&gt; - to move up one page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CTRL + V&lt;/strong&gt; - to move down one page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CTRL + O&lt;/strong&gt; - to save a file, i.e. write out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CTRL + X&lt;/strong&gt; - to exit nano.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CTRL + R&lt;/strong&gt; - to insert/read another file into the current one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CTRL + K&lt;/strong&gt; - to cut the current line or marked text.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CTRL + U&lt;/strong&gt; - to paste previously cut text.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Vi
&lt;/h4&gt;

&lt;p&gt;This is a powerful, modal text editor that comes pre-installed on virtually all Linux systems. It operates in different modes and is known for its efficiency once mastered. It uses keyboard commands rather than menus for all operations.&lt;/p&gt;

&lt;h5&gt;
  
  
  Creating a file using vi:
&lt;/h5&gt;

&lt;p&gt;The syntax for file creation carries over from nano, i.e.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vi testfile.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;TO NOTE:&lt;/strong&gt; &lt;br&gt;
A) Vi is case-sensitive, so a lowercase letter and an uppercase letter may have different meanings within the editor.&lt;/p&gt;

&lt;p&gt;B) To open an existing file in Vi, the same command format is used. Ensure that the filename matches exactly the one used during creation; otherwise, Vi will create a new file. Additionally, filenames should not contain spaces, as this may result in unintended file creation. Some of the commands for vi include but are not limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;i&lt;/strong&gt; - insert before the cursor (enters insert mode and enables typing).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;I&lt;/strong&gt; - insert at the beginning of the line.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;a&lt;/strong&gt; - append after the cursor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A&lt;/strong&gt; - append at the end of the line.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;o&lt;/strong&gt; - open a new line below.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;O&lt;/strong&gt; - open a new line above.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;s&lt;/strong&gt; - substitute a character (delete it and enter insert mode).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S&lt;/strong&gt; - substitute the entire line.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ESC&lt;/strong&gt; - return to command mode.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;:wq&lt;/strong&gt; - write (save) the file and quit vi, back to the shell.&lt;/li&gt;
&lt;/ul&gt;
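
&lt;p&gt;Putting these commands together, a complete vi session on the file created earlier looks like the keystroke walkthrough below (the typed text is just an example):&lt;/p&gt;

```
vi testfile.py        # from the shell: open (or create) the file
i                     # enter insert mode
print("hello")        # type the file's contents
ESC                   # return to command mode
:wq                   # write the changes and quit vi
```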

&lt;p&gt;Through this article, we have explored why Linux is an indispensable tool for data engineers and how the basic commands form the foundation of professional data work. While the command line may seem daunting at first, I am reminded that every expert was once a beginner, and the journey of a thousand miles sets its pace from the first step! Until next time, keep your data clean and your terminal keen. Peace.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>linux</category>
      <category>vim</category>
      <category>nano</category>
    </item>
    <item>
      <title>GIT STARTED: A Noob's Guide To Version Control</title>
      <dc:creator>Chiiraq</dc:creator>
      <pubDate>Sun, 18 Jan 2026 10:04:46 +0000</pubDate>
      <link>https://forem.com/chiiraq/git-started-a-noobs-guide-to-version-control-2hf3</link>
      <guid>https://forem.com/chiiraq/git-started-a-noobs-guide-to-version-control-2hf3</guid>
      <description>&lt;h3&gt;
  
  
  INTRODUCTION
&lt;/h3&gt;

&lt;p&gt;We might all have been there, and if not, expect to be there. Where, you ask? While working on a project, we have all made changes to "perfectly working code". As the saying goes, 'if it works, don't touch it', but that is rarely the case. Code is always a balance of imperfect perfection; changes happen quite often, and it is never a surety that the result runs as expected. This is where &lt;strong&gt;version control&lt;/strong&gt; comes in. The first question you'll have is: what is version control?&lt;/p&gt;

&lt;h4&gt;
  
  
  WHAT IS VERSION CONTROL AND WHAT IS ITS RELEVANCE?
&lt;/h4&gt;

&lt;p&gt;Version control can be defined as a system used to track changes to files over time, allowing you to recall specific versions of a program much later on.&lt;/p&gt;

&lt;p&gt;It can be utilised for several different use cases including but not limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tracking accountability - shows who made what changes and when, making it easy to identify who introduced bugs or brilliant features.&lt;/li&gt;
&lt;li&gt;Enabling collaboration - allows multiple people to work on the same project simultaneously without overwriting each other's changes.&lt;/li&gt;
&lt;li&gt;Backup and history - acts as a complete audit trail showing how your project evolves over time, among many other use cases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are endless examples of version control tools and hosting platforms, such as Git, GitLab and Bitbucket, but in this article I will take a keen interest in Git and GitHub. More specifically, we will cover how to push and pull code.&lt;/p&gt;

&lt;h4&gt;
  
  
  HOW TO PUSH CODE TO GITHUB
&lt;/h4&gt;

&lt;p&gt;You are already in flow state and have made progress on the program you are creating. A repository has already been created, and all that remains is pushing your code to the platform so you can track your progress. How exactly do you do that?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Git Bash and, in the terminal, use the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;git push - sends your commits to the remote repository.&lt;/li&gt;
&lt;li&gt;origin - the nickname for your GitHub repository.&lt;/li&gt;
&lt;li&gt;main - the branch you're pushing the code to.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you refresh your GitHub repository, you will see your project's files in the repository.&lt;/p&gt;
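
&lt;p&gt;Note that git push only sends commits that already exist, so the work has to be staged and committed first. A minimal local sequence looks like the sketch below; it assumes git is installed, and the repository and file names are made up for illustration:&lt;/p&gt;

```shell
git init demo_repo                 # create a fresh local repository
cd demo_repo
echo "print('hello')" > main.py    # some work to track
git add main.py                    # stage the change
# the -c flags set the author identity just for this one command
git -c user.name="Demo" -c user.email="demo@example.com" commit -m "Initial commit"
git log --oneline                  # confirm the commit exists
cd ..
```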

&lt;h4&gt;
  
  
  HOW TO PULL CODE FROM GITHUB
&lt;/h4&gt;

&lt;p&gt;Changes may have been made to the project, and you might need to retrieve the code and review it. Similarly, you might be retrieving your work to continue working on a project. To do this, use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git pull origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the command is given, the following happens:&lt;br&gt;
Git fetches the changes from GitHub, and those changes are merged into your local code.&lt;/p&gt;

&lt;p&gt;If the above article didn't help much, here's a video you can utilise to map your way around the terminal commands better: &lt;a href="https://www.youtube.com/watch?v=yxvqLBHZfXk" rel="noopener noreferrer"&gt;hope this helps&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
