<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: GeraldM</title>
    <description>The latest articles on Forem by GeraldM (@geraldm).</description>
    <link>https://forem.com/geraldm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3709542%2F013a2d79-66d3-4ac5-b940-cd9ce529f126.webp</url>
      <title>Forem: GeraldM</title>
      <link>https://forem.com/geraldm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/geraldm"/>
    <language>en</language>
    <item>
      <title>ETL vs ELT: Which one should you use and why?</title>
      <dc:creator>GeraldM</dc:creator>
      <pubDate>Sat, 18 Apr 2026 08:47:20 +0000</pubDate>
      <link>https://forem.com/geraldm/etl-vs-elt-which-one-should-you-use-and-why-1j5i</link>
      <guid>https://forem.com/geraldm/etl-vs-elt-which-one-should-you-use-and-why-1j5i</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Data is the new gold: it drives business in organizations. As the volume, variety and number of sources of data grow, organizations increasingly need to put that data to work in analytics to derive business insights. This led to the emergence of data engineers, who process raw, messy data into clean, fresh and reliable data ready for business use.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ETL?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58o8mlteqezb9fwkf7p5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58o8mlteqezb9fwkf7p5.png" alt=" " width="800" height="303"&gt;&lt;/a&gt; ETL stands for Extract, Transform and Load.&lt;/p&gt;

&lt;p&gt;This is the process data engineers use to extract data from different sources, transform it into a usable and trusted resource, and load it into target systems where downstream users can access it to derive insights and make business decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does ETL work?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Extract&lt;/strong&gt;&lt;br&gt;
The first step is to extract data from different systems such as business applications, APIs, sensors and databases. Different systems produce differently structured output. There are several ways to perform extraction:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Partial extraction:&lt;/strong&gt; The source system notifies you when a record has changed, so only the changed records are extracted.&lt;br&gt;
&lt;strong&gt;2. Full extraction:&lt;/strong&gt; Some source systems cannot identify which data has changed at all. In this case, a full extract is taken and compared against a copy of the last extract in the same format to identify what has changed.&lt;/p&gt;
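
&lt;p&gt;The full-extraction approach can be sketched in a few lines of Python. This is an illustrative sketch with invented record fields, not tied to any particular source system: each run is diffed against a saved copy of the previous extract to find new or modified records.&lt;/p&gt;

```python
# Illustrative sketch: full extraction with change detection by diffing
# the current extract against a saved copy of the previous one.

def detect_changes(previous_extract, current_extract):
    """Compare two full extracts (lists of record dicts keyed by 'id')
    and return the records that are new or modified."""
    previous_by_id = {record["id"]: record for record in previous_extract}
    changed = []
    for record in current_extract:
        # A record counts as changed if it is missing from the last
        # extract or any of its fields differ.
        if previous_by_id.get(record["id"]) != record:
            changed.append(record)
    return changed

last_run = [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]
this_run = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}, {"id": 3, "qty": 1}]
print(detect_changes(last_run, this_run))  # record 2 (modified) and record 3 (new)
```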

&lt;p&gt;&lt;strong&gt;Transform&lt;/strong&gt;&lt;br&gt;
The second step transforms the raw data extracted from the sources into a format that different applications can use. Data is cleansed, mapped and often transformed to a specific, standardized schema to meet operational needs. At this stage, the data goes through several types of transformations to ensure its quality and integrity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load&lt;/strong&gt;&lt;br&gt;
Once data quality and integrity requirements have been met, the data is loaded into a storage system, e.g. a database, where it can be accessed by consumers such as applications, analysts or management.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example: A retail company with multiple stores across different regions collects sales data at the end of each day. The company &lt;strong&gt;extracts&lt;/strong&gt; raw, messy transaction data from its point-of-sale (POS) systems, including details such as products, quantities sold and store locations. It then &lt;strong&gt;transforms&lt;/strong&gt; the data by cleaning and standardizing it: removing duplicate records, correcting missing values and converting all timestamps to a uniform format. Finally, the cleaned data is &lt;strong&gt;loaded&lt;/strong&gt; into a data storage system where analysts and other users can access it to generate reports and dashboards showing daily revenue, top-selling products or regional performance. With this, the company can make better decisions.&lt;/em&gt;&lt;/p&gt;
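
&lt;p&gt;The three steps of the retail example above can be sketched as a minimal Python pipeline. The sample rows, column names and timestamp formats are invented for illustration; they are not the output of any real POS system.&lt;/p&gt;

```python
# Illustrative ETL sketch for the retail example (invented sample data).
import sqlite3
from datetime import datetime

def extract():
    # Extract: raw POS rows, including a duplicate and a mixed timestamp format.
    return [
        {"product": "tea", "qty": 2, "store": "east", "ts": "2026-04-18 09:15"},
        {"product": "tea", "qty": 2, "store": "east", "ts": "2026-04-18 09:15"},
        {"product": "milk", "qty": 1, "store": "west", "ts": "18/04/2026 10:02"},
    ]

def transform(rows):
    # Transform: drop exact duplicates and normalize timestamps to ISO 8601.
    seen, clean = set(), []
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            continue
        seen.add(key)
        for fmt in ("%Y-%m-%d %H:%M", "%d/%m/%Y %H:%M"):
            try:
                row["ts"] = datetime.strptime(row["ts"], fmt).isoformat()
                break
            except ValueError:
                pass
        clean.append(row)
    return clean

def load(rows):
    # Load: write the cleaned rows into a database table.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (product TEXT, qty INTEGER, store TEXT, ts TEXT)")
    con.executemany("INSERT INTO sales VALUES (:product, :qty, :store, :ts)", rows)
    return con

con = load(transform(extract()))
print(con.execute("SELECT COUNT(*) FROM sales").fetchone()[0])  # 2 rows after dedup
```

&lt;p&gt;In a real pipeline the extract step would call the POS system's export API or read its files, and the load step would target the company's warehouse rather than an in-memory database.&lt;/p&gt;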

&lt;h2&gt;
  
  
  What is ELT?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0q2l8qegfxi9gbgv13p8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0q2l8qegfxi9gbgv13p8.png" alt=" " width="785" height="337"&gt;&lt;/a&gt; ELT stands for Extract, Load and Transform.&lt;/p&gt;

&lt;p&gt;In ELT, data is first extracted from the source systems, loaded directly into a data storage system such as a database, and then transformed inside that storage system.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does ELT work?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Extract&lt;/strong&gt;&lt;br&gt;
Similar to ETL, data is first pulled from the source systems (extraction).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load&lt;/strong&gt;&lt;br&gt;
Instead of going to a staging server/storage, the raw data is loaded directly into the target data storage system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transform&lt;/strong&gt;&lt;br&gt;
Once the raw data is inside the data storage system, it is transformed using SQL or specialized tools. The raw data is often preserved, while transformed versions are created for analysis.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example: A financial company collects large volumes of transaction data from mobile applications, ATMs and online banking systems. Instead of cleaning the data immediately, it first &lt;strong&gt;extracts&lt;/strong&gt; all the raw transaction logs as they are and &lt;strong&gt;loads&lt;/strong&gt; them into a data storage system such as a cloud data warehouse. Once the raw data is stored centrally, the company &lt;strong&gt;transforms&lt;/strong&gt; it in place: cleaning the data by removing or fixing missing values, standardizing formats such as currencies and timestamps, and joining it with other datasets such as fraud watch lists. Since transformation happens after loading, the company retains the original raw data.&lt;/em&gt;&lt;/p&gt;
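
&lt;p&gt;A minimal sketch of the ELT pattern, using SQLite to stand in for the warehouse (the sample data and table names are invented): raw records are loaded untouched, and the transformation happens afterwards as SQL inside the store, so the raw table survives.&lt;/p&gt;

```python
# Illustrative ELT sketch for the banking example (invented sample data):
# raw records are loaded first, then transformed with SQL inside the store.
import sqlite3

con = sqlite3.connect(":memory:")

# Extract + Load: raw transaction logs go in as-is, messy values included.
con.execute("CREATE TABLE raw_txns (account TEXT, amount TEXT, currency TEXT)")
con.executemany(
    "INSERT INTO raw_txns VALUES (?, ?, ?)",
    [("A1", "100.00", "usd"), ("A2", "", "USD"), ("A1", "42.50", "Usd")],
)

# Transform: a cleaned view is defined inside the warehouse; the raw
# table is preserved so it can be reprocessed differently later.
con.execute("""
    CREATE VIEW clean_txns AS
    SELECT account,
           CAST(amount AS REAL) AS amount,
           UPPER(currency) AS currency
    FROM raw_txns
    WHERE amount != ''
""")

print(con.execute("SELECT COUNT(*) FROM raw_txns").fetchone()[0])    # 3 raw rows kept
print(con.execute("SELECT COUNT(*) FROM clean_txns").fetchone()[0])  # 2 clean rows
```

&lt;p&gt;In practice the SQL would run inside a cloud warehouse, and tools like dbt (covered below) manage these in-warehouse transformations.&lt;/p&gt;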

&lt;h2&gt;
  
  
  ETL vs ELT
&lt;/h2&gt;

&lt;p&gt;So what is the difference between ETL and ELT?&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26ntwkx9il8vwt5zy0j2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26ntwkx9il8vwt5zy0j2.png" alt=" " width="620" height="342"&gt;&lt;/a&gt;&lt;br&gt;
The principal difference is in the order of operations. ETL stands for Extract, Transform, Load, meaning that it involves extracting data from the source first, transforming it into a usable format in a staging area and then transferring the usable data into a storage system where it can be accessed for analysis.&lt;/p&gt;

&lt;p&gt;ETL has been the standard in data processing for decades. With modern data storage capabilities, ELT emerged as an alternative. ELT stands for Extract, Load, Transform, meaning that data is loaded into a storage system as soon as it is extracted and then transformed into a usable format as needed, directly inside the storage system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key differences between ETL and ELT and how they affect data processing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Availability:&lt;/strong&gt; In ETL, data must be defined and transformed before storage, meaning only selected, processed data is available, while in ELT all raw data is stored first, making it fully available for later use.&lt;br&gt;
&lt;strong&gt;2. Flexibility:&lt;/strong&gt; With ETL, changes require redefining the transformation logic, while with ELT data can be processed in different ways at any time because the original raw data is always available.&lt;br&gt;
&lt;strong&gt;3. Scalability:&lt;/strong&gt; ETL is harder to scale due to the upfront transformation costs, while ELT, especially in cloud environments, scales easily, allowing it to handle large and growing data volumes efficiently.&lt;br&gt;
&lt;strong&gt;4. Speed:&lt;/strong&gt; ETL is slower at ingestion but faster for analysis because the data is transformed before storage, while ELT is fast during ingestion but slower at analysis time because the data still needs processing.&lt;br&gt;
&lt;strong&gt;5. Storage:&lt;/strong&gt; ETL consumes less storage space because only processed data is stored, while ELT requires more storage because all the raw data is kept.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Use Cases
&lt;/h2&gt;

&lt;p&gt;Choosing between ETL and ELT often depends on aspects such as the specific industry an organization is in, data volume, and regulatory environment. &lt;/p&gt;

&lt;p&gt;Below are common real-world applications for each:&lt;/p&gt;

&lt;h3&gt;
  
  
  ETL Use Cases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Retail Inventory Consolidation&lt;/strong&gt;&lt;br&gt;
A large retail chain extracts inventory data from multiple in-store systems and supplier databases. The ETL process standardizes product codes, removes duplicates, and reconciles stock discrepancies before loading the cleaned data into a central inventory system. This ensures accurate stock levels and prevents overstocking or stock outages across locations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Government Census Data Processing&lt;/strong&gt;&lt;br&gt;
Government agencies extract raw census data from surveys and regional databases. ETL processes validate entries, standardize formats (e.g. addresses, demographics), and remove inconsistencies before loading the data into official statistical systems. This guarantees high data quality for policy-making and public reporting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Healthcare Data&lt;/strong&gt;&lt;br&gt;
Healthcare providers often extract patient records from fragmented Electronic Health Record (EHR) systems. Before loading this data into a centralized storage system for clinical research, transformation pipelines must anonymize patient names and other personally identifiable information (PII).&lt;/p&gt;

&lt;h3&gt;
  
  
  ELT use cases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. IoT Sensor Data Processing&lt;/strong&gt;&lt;br&gt;
A manufacturing company collects continuous streams of sensor data from various machines (temperature, pressure, vibration, etc.). Using ELT, all raw sensor data is loaded into a data storage system first. Engineers then transform and analyze it later to detect anomalies, predict equipment failures, and optimize maintenance schedules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. E-commerce Customer Behavior Analytics&lt;/strong&gt;&lt;br&gt;
An e-commerce platform extracts and loads raw clickstream data, search queries, and browsing history directly into a data storage system. Analysts later transform this data to study user behavior, build recommendation systems, and personalize shopping experiences without losing any original interaction data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Social Media Sentiment Analysis&lt;/strong&gt;&lt;br&gt;
A marketing firm gathers raw social media posts, comments, and engagement metrics from multiple platforms. With ELT, this unstructured data is stored as-is in a central data storage facility. Analysts later transform it to extract sentiment, trends, and brand perception insights.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Considerations when choosing ETL/ELT tools
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Extent of data integration:&lt;/strong&gt; ETL/ELT tools can connect to a wide range of data sources and destinations. Data teams should opt for tools that offer a wide range of integrations.&lt;br&gt;
&lt;strong&gt;2. Customizability:&lt;/strong&gt; Organizations should choose their ETL/ELT tools based on their requirements for customization and the technical expertise of their teams.&lt;br&gt;
&lt;strong&gt;3. Cost structure:&lt;/strong&gt; When choosing an ETL/ELT tool, organizations should consider not only the cost of the tool itself but also the cost of the infrastructure and human resources needed to maintain the solution over the long term. Some tools have a higher upfront cost but lower downtime and maintenance requirements, making them more cost-effective overall.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tools used in an ETL
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What are ETL tools?&lt;/strong&gt;&lt;br&gt;
They are tools used to extract, transform and load data from one or more sources into a target system or database. ETL tools are designed to automate and simplify the process of extracting data from various sources, transforming it into a consistent, clean format, and loading it into a target system in a timely and efficient manner.&lt;/p&gt;

&lt;p&gt;Some of the best and commonly used ETL tools include:&lt;br&gt;
&lt;strong&gt;1. Apache Airflow&lt;/strong&gt;&lt;br&gt;
Apache Airflow is an open-source platform to programmatically author, schedule and monitor workflows. The platform features a web-based user interface and a command-line interface for managing and triggering workflows. Workflows are defined as directed acyclic graphs (DAGs), which allow for clear visualization and management of tasks and dependencies. Airflow integrates with other tools such as Apache Spark and Pandas.&lt;br&gt;
&lt;strong&gt;2. Databricks Delta Live Tables (DLT)&lt;/strong&gt;&lt;br&gt;
It is an ETL framework built on top of Apache Spark that automates the creation and management of data pipelines, allowing data teams to build reliable, maintainable and declarative pipelines with minimal effort. Delta Live Tables takes a declarative approach: users define the what (the transformations and dependencies) and the system handles the how (execution, optimization and recovery).&lt;br&gt;
&lt;strong&gt;3. Oracle Data Integrator&lt;/strong&gt;&lt;br&gt;
Oracle Data Integrator is an ETL tool that helps users build, deploy and manage complex data warehouses. It comes with out-of-the-box connectors for many systems, including Hadoop, ERPs, CRMs, XML, JSON, LDAP, JDBC and ODBC. Oracle Data Integrator includes Data Integrator Studio, which gives business users and developers access to multiple artifacts through a graphical user interface. These artifacts cover all the elements of data integration, from data movement to synchronization, quality and management.&lt;br&gt;
&lt;strong&gt;4. Hadoop&lt;/strong&gt;&lt;br&gt;
Hadoop is an open-source framework for processing and storing big data in clusters of computer servers. It is considered the foundation of big data and enables the storage and processing of large amounts of data. It consists of several modules, including the Hadoop Distributed File System (HDFS) for storing data, MapReduce for reading and transforming data, and YARN for resource management. Hadoop is costly due to the high computing power required.&lt;br&gt;
&lt;strong&gt;5. AWS Glue&lt;/strong&gt;&lt;br&gt;
It is a serverless ETL tool offered by Amazon. It discovers, prepares, integrates and transforms data from multiple sources for analytics use cases. Users can interact with AWS Glue through a drag-and-drop GUI, a Jupyter notebook, or Python/Scala code.&lt;br&gt;
&lt;strong&gt;6. Azure Data Factory&lt;/strong&gt;&lt;br&gt;
It is a cloud-based ETL service from Microsoft used to create workflows that move and transform data at scale. It comprises a series of interconnected systems that allow engineers not only to ingest and transform data but also to design, schedule and monitor data pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tools used in ELT
&lt;/h3&gt;

&lt;p&gt;Some of the best and commonly used ELT tools include:&lt;br&gt;
&lt;strong&gt;1. Airbyte&lt;/strong&gt;&lt;br&gt;
It is an open-source ELT platform that provides hundreds of connectors for databases and SaaS apps. It is widely used for its flexibility and the ability for teams to build or customize their own integrations.&lt;br&gt;
&lt;strong&gt;2. Fivetran&lt;/strong&gt;&lt;br&gt;
Known for its automated, fully managed connectors, Fivetran ensures continuous data flow into cloud warehouses with minimal maintenance. It is often chosen by teams that value reliability and want to offload connector maintenance.&lt;br&gt;
&lt;strong&gt;3. dbt (data build tool)&lt;/strong&gt;&lt;br&gt;
Rather than transforming data mid-flight during extraction, dbt transforms data inside the data warehouse using SQL. dbt lets data engineers and analytics engineers write modular, version-controlled and tested SQL models. Every model is documented, dependencies are tracked, and every run produces a data lineage graph showing exactly how data flows from raw sources to final tables. It integrates with all major cloud data warehouses, such as Snowflake, Redshift and Databricks, offering flexibility.&lt;br&gt;
&lt;strong&gt;4. Matillion&lt;/strong&gt;&lt;br&gt;
It is a cloud-native ELT platform that integrates seamlessly with major data warehouses, including Snowflake, BigQuery and Redshift. Its visual interface makes it easy for users to design workflows in a drag-and-drop environment, while more advanced users can leverage SQL-based transformations to handle complex data tasks.&lt;br&gt;
&lt;strong&gt;5. Hevo&lt;/strong&gt;&lt;br&gt;
It comes with over 150 connectors for extracting data from multiple sources. It is a low-code tool, making it easy for users to design data pipelines without needing extensive coding experience. Hevo offers a range of features and benefits, including real-time data integration, automatic schema detection, and the ability to handle large volumes of data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;ETL and ELT each play a critical role in modern data workflows, differing primarily in where and how data transformation occurs. While ETL emphasizes preprocessing before storage, ELT leverages the scalability of modern data platforms to transform data after it is loaded. &lt;br&gt;
By 2026, ELT has become the dominant trend due to cheaper cloud storage and more powerful compute, enabling organizations to transform data within modern data warehouses for greater flexibility, faster insights, and reduced pipeline complexity. However, ETL remains important in environments with strict security, privacy, or compliance requirements, where data must be transformed before storage, making both approaches relevant depending on organizational needs.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>etl</category>
      <category>elt</category>
    </item>
    <item>
      <title>Getting Started with OpenClaw: A Step-by-Step Guide to Setting Up OpenClaw on a VPS</title>
      <dc:creator>GeraldM</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:00:07 +0000</pubDate>
      <link>https://forem.com/geraldm/getting-started-with-openclaw-a-step-by-step-guide-to-setting-up-openclaw-on-a-vps-574d</link>
      <guid>https://forem.com/geraldm/getting-started-with-openclaw-a-step-by-step-guide-to-setting-up-openclaw-on-a-vps-574d</guid>
      <description>&lt;p&gt;&lt;strong&gt;DISCLAIMER!!: DO NOT INSTALL OPENCLAW ON YOUR PERSONAL MACHINE. IT IS STRONGLY RECOMMENDED THAT YOU USE A VIRTUAL MACHINE OR A VIRTUAL PRIVATE SERVER INSTEAD.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before you begin, ensure you have the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A Virtual Private Server (VPS) or Virtual Machine with:

&lt;ul&gt;
&lt;li&gt;Node.js version 22.16 or later installed&lt;/li&gt;
&lt;li&gt;Git installed&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A Telegram account (or another supported messaging platform)&lt;/li&gt;
&lt;li&gt;An API key for the AI model of your choice&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is OpenClaw?&lt;/strong&gt;&lt;br&gt;
OpenClaw is a self-hosted gateway that connects your favorite messaging applications, such as WhatsApp, Telegram and Discord, to your AI coding agents like ChatGPT. &lt;br&gt;
OpenClaw allows you to chat directly with your AI model using your messaging app and have it do tasks for you. &lt;br&gt;
&lt;strong&gt;How does it do tasks for you?&lt;/strong&gt;&lt;br&gt;
With OpenClaw, you can create Agents (action models) that use your AI model to perform tasks for you. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Set up a VPS(Virtual Private Server)&lt;/strong&gt;&lt;br&gt;
Using a cloud platform of your choice, set up a server/virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnoqfquacgapt7vr8y4p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnoqfquacgapt7vr8y4p.png" alt=" " width="800" height="189"&gt;&lt;/a&gt; In my case, I have set up my server on the Microsoft Azure platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Access your VPS via SSH(Secure Shell) using terminal&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrzfx7spo0vsrq5zrkq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrzfx7spo0vsrq5zrkq2.png" alt=" " width="800" height="369"&gt;&lt;/a&gt; From your machine, launch a terminal such as Git Bash, PowerShell or, in my case, MobaXterm.&lt;br&gt;
Use the command &lt;code&gt;ssh username@ip-address&lt;/code&gt; to connect to your server. You will be prompted for a password; enter it and you will be connected to your cloud server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Security&lt;/strong&gt;&lt;br&gt;
It is advised that you run OpenClaw on a VPS or on a machine that does not hold any of your personal information because: 1) the software is open source, meaning anyone can contribute to its development, including people with malicious intent; and 2) OpenClaw requires elevated privileges on your machine, as it interacts deeply with system memory and processes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflv2l178wh2puwfaidct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflv2l178wh2puwfaidct.png" alt=" " width="800" height="214"&gt;&lt;/a&gt; Click on the link to connect tailscale to the virtual machine&lt;br&gt;
Install Tailscale&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fizz5ge0xp14fywqap7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fizz5ge0xp14fywqap7.png" alt=" " width="800" height="158"&gt;&lt;/a&gt; Run the command above to start tailscale. You will be given a link to signup to tailscale&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahck0woe8k8hdrbxtwe9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahck0woe8k8hdrbxtwe9.png" alt=" " width="728" height="796"&gt;&lt;/a&gt; Login or create an account if you do not have one&lt;br&gt;
Choose the identity provider of your choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshzm4picp5u3271h5jvc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshzm4picp5u3271h5jvc.png" alt=" " width="789" height="671"&gt;&lt;/a&gt; Select a device&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcr1548f729mb4plbgqkz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcr1548f729mb4plbgqkz.png" alt=" " width="800" height="687"&gt;&lt;/a&gt; click on the download button&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvv9fv7au34yaa0rfw3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvv9fv7au34yaa0rfw3k.png" alt=" " width="784" height="522"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg1m24k63lsmxeal2kae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg1m24k63lsmxeal2kae.png" alt=" " width="792" height="510"&gt;&lt;/a&gt;  Install the application&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4homm2o6xair5m5oh00x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4homm2o6xair5m5oh00x.png" alt=" " width="714" height="522"&gt;&lt;/a&gt; Look for the tailscale icon on your apps tray and click on it. It will redirect you to a login screen. Login using your credentials and then click the connect button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzqv7h4x8eznvi9kmvmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzqv7h4x8eznvi9kmvmn.png" alt=" " width="725" height="771"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2shewu0rr5yltff4759.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2shewu0rr5yltff4759.png" alt=" " width="800" height="148"&gt;&lt;/a&gt; On success, you are now connected to a virtual private network.&lt;/p&gt;

&lt;p&gt;Open the SSH daemon configuration file with &lt;code&gt;sudo nano /etc/ssh/sshd_config&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5ppufik2nn9klfjfp5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5ppufik2nn9klfjfp5a.png" alt=" " width="800" height="397"&gt;&lt;/a&gt; Copy the Tailscale IP assigned to your server&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgj277sdo5mjj6lo1ksy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgj277sdo5mjj6lo1ksy.png" alt=" " width="800" height="162"&gt;&lt;/a&gt; &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6m88qlv44l1w29kgtfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6m88qlv44l1w29kgtfd.png" alt=" " width="800" height="101"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbvqtraq7t66sn6a9hej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbvqtraq7t66sn6a9hej.png" alt=" " width="800" height="203"&gt;&lt;/a&gt; Make the above changes to the file&lt;/p&gt;
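
&lt;p&gt;Since the screenshots above are not described in text, here is a sketch of the kind of &lt;code&gt;sshd_config&lt;/code&gt; changes commonly made at this step. The exact lines shown in the screenshots may differ; binding SSH to the Tailscale IP and disabling password and root login are typical hardening choices, and the IP below is only a placeholder.&lt;/p&gt;

```
# /etc/ssh/sshd_config: illustrative hardening (verify against the screenshots)
ListenAddress 100.64.0.1      # placeholder: use the Tailscale IP copied above
PasswordAuthentication no     # allow key-based authentication only
PermitRootLogin no            # log in as the new user instead
```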

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz3lyeba2ndd4d3mdtym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz3lyeba2ndd4d3mdtym.png" alt=" " width="800" height="505"&gt;&lt;/a&gt; Create a new user&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8z25zwdbkxubl9i5e4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8z25zwdbkxubl9i5e4.png" alt=" " width="800" height="171"&gt;&lt;/a&gt; Add user to root&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vqrsoz7py21kxstaahl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vqrsoz7py21kxstaahl.png" alt=" " width="800" height="124"&gt;&lt;/a&gt; Restart the SSH service using the above command, then log out of the server using the &lt;code&gt;logout&lt;/code&gt; command.&lt;/p&gt;
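&lt;p&gt;&lt;em&gt;For reference, the screenshot steps above (create a user, give it admin rights, restart SSH) roughly correspond to the following commands on an Ubuntu/Debian server; the username is illustrative, so substitute your own:&lt;/em&gt;&lt;/p&gt;

```shell
# Create a new non-root user (the name here is just an example)
sudo adduser clawadmin

# Give the new user sudo (admin) privileges
sudo usermod -aG sudo clawadmin

# Restart the SSH service so the sshd_config changes take effect
sudo systemctl restart ssh

# End the current session
logout
```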

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kqeih3cra0zuzl4y4q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kqeih3cra0zuzl4y4q7.png" alt=" " width="800" height="209"&gt;&lt;/a&gt; Access the server using the new username and the new Tailscale IP address.&lt;/p&gt;

&lt;p&gt;With this, we have locked down the server so that no traffic can reach it except from devices on this Tailscale VPN network. Our server is secure.&lt;/p&gt;
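&lt;p&gt;&lt;em&gt;You can double-check the Tailscale setup from the server itself with the standard Tailscale CLI commands:&lt;/em&gt;&lt;/p&gt;

```shell
# Join the machine to your tailnet (prints a login URL on first run)
sudo tailscale up

# Show this machine's Tailscale IPv4 address -- the one you now SSH to
tailscale ip -4

# List all devices on your tailnet and their connection state
tailscale status
```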

&lt;p&gt;&lt;strong&gt;Step 4: Install OpenClaw&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw51uy9k68yd57d70hmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw51uy9k68yd57d70hmw.png" alt=" " width="800" height="493"&gt;&lt;/a&gt; Run the above command to install OpenClaw&lt;/p&gt;
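&lt;p&gt;&lt;em&gt;If the command in the screenshot is hard to read: OpenClaw is distributed as a Node.js package, so the install typically looks like the sketch below. The exact package name and wizard command are assumptions here; confirm them against the official OpenClaw documentation.&lt;/em&gt;&lt;/p&gt;

```shell
# Assumes Node.js is already installed on the server.
# Install the OpenClaw CLI globally (package name assumed).
npm install -g openclaw

# Start the interactive onboarding wizard covered in the next steps
openclaw onboard
```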

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki6heh6b326v4gexb39f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki6heh6b326v4gexb39f.png" alt=" " width="800" height="173"&gt;&lt;/a&gt; Choose what you want to set up as the local gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x254vv5mvh0d5eewvzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x254vv5mvh0d5eewvzu.png" alt=" " width="800" height="173"&gt;&lt;/a&gt; Then accept the default workspace directory that is provided.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbme8on0zjmdo3g8h70ou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbme8on0zjmdo3g8h70ou.png" alt=" " width="800" height="770"&gt;&lt;/a&gt;  Select the model provider that you want OpenClaw to use. &lt;em&gt;Remember that to use a model you will need an API key, which in many cases requires a paid subscription.&lt;/em&gt;&lt;br&gt;
In my case, I am using &lt;a href="https://openrouter.ai/" rel="noopener noreferrer"&gt;OpenRouter&lt;/a&gt;, a service that gives you access to many AI models, including free ones, through a single API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2sbqr5ld4ehuv650y05s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2sbqr5ld4ehuv650y05s.png" alt=" " width="735" height="227"&gt;&lt;/a&gt; After choosing the model, you will be prompted to provide an API key. Paste the API key that you generated from OpenRouter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30dzkhymdb6cy4ebs1zx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30dzkhymdb6cy4ebs1zx.png" alt=" " width="800" height="477"&gt;&lt;/a&gt; Then you will need to choose a default model under OpenRouter. The list is long; scroll until you find your chosen model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0495wdsnlnbxc73soeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0495wdsnlnbxc73soeq.png" alt=" " width="800" height="396"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bhq856jrse34h8jzp78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bhq856jrse34h8jzp78.png" alt=" " width="800" height="82"&gt;&lt;/a&gt; From OpenRouter, I am using a model known as StepFun 3.5 Flash. I chose it because I found it to have the best stats compared to the other available open-source models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxly779zbuja86k6w0dwi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxly779zbuja86k6w0dwi.png" alt=" " width="800" height="277"&gt;&lt;/a&gt; For the gateway port and gateway bind, leave the defaults. For gateway auth, choose token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb01ofird6kwjtsqm7tw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb01ofird6kwjtsqm7tw4.png" alt=" " width="800" height="259"&gt;&lt;/a&gt; For Tailscale exposure, leave it off. Then, for how to provide the gateway token, choose the generate/store plaintext token option and leave the gateway token blank.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4zooh3tyncx0ugv3slu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4zooh3tyncx0ugv3slu.png" alt=" " width="800" height="631"&gt;&lt;/a&gt; Channel setup: a channel is the messaging application that you want to use to communicate with your model. Select yes to configure a channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F238kc67nszsnqo3ftzjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F238kc67nszsnqo3ftzjb.png" alt=" " width="800" height="208"&gt;&lt;/a&gt; Select a channel. For me, I will be using Telegram. OpenClaw then gives a small guide on how to obtain a Telegram bot token that you can use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygcea69lixpjlzksbfd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygcea69lixpjlzksbfd6.png" alt=" " width="800" height="383"&gt;&lt;/a&gt; Open your Telegram application and search for BotFather. Select the "BotFather" account with the verification tick mark (it's the legitimate one).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf30mi5wx66oqavpcbdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf30mi5wx66oqavpcbdn.png" alt=" " width="800" height="689"&gt;&lt;/a&gt; In the chat with BotFather, type &lt;code&gt;/newbot&lt;/code&gt; to create a new bot. The username has to end with the word 'bot'. After choosing a unique username, you will get a link to start a chat with the new bot you have created, as well as an API key (which you can enter in OpenClaw).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfdki9dy05am8kutgo1y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfdki9dy05am8kutgo1y.png" alt=" " width="800" height="245"&gt;&lt;/a&gt; Enter the API key and, when prompted to select a channel, scroll down and choose 'finished' (the last option in the list).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckl3u5agpeayjbx6igmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckl3u5agpeayjbx6igmc.png" alt=" " width="800" height="348"&gt;&lt;/a&gt; When prompted for DM access policies, select yes and pair with Telegram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65hur08a32a0umrgk6sv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65hur08a32a0umrgk6sv.png" alt=" " width="800" height="364"&gt;&lt;/a&gt; For web search, choose a browser that you would like the agent to use for browsing (&lt;em&gt;I recommend a fresh browser that you don't use for any of your personal activities/browsing&lt;/em&gt;). Then get an API key from your browser of choice and add it to OpenClaw.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb80cbamjs2rpnqr8v9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb80cbamjs2rpnqr8v9c.png" alt=" " width="800" height="273"&gt;&lt;/a&gt; For configuring skills, select 'No'. You can configure skills later, after the setup is complete.&lt;br&gt;
&lt;strong&gt;What are skills in OpenClaw?&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;OpenClaw skills are plug-ins/add-ons for an OpenClaw AI agent: each skill teaches the agent how to do a specific task, such as searching the web, sending emails, controlling software, or running a workflow. Instead of only chatting, the agent gains extra abilities and step-by-step instructions from skills, so it can actually perform useful actions.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jsdaoyd0ou8xeu7po34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jsdaoyd0ou8xeu7po34.png" alt=" " width="800" height="329"&gt;&lt;/a&gt; For hooks, choose 'skip for now', and for systemd lingering, choose 'enable'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmydclvcnevup2v34cwi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmydclvcnevup2v34cwi.png" alt=" " width="800" height="231"&gt;&lt;/a&gt; For install gateway service, select yes, and for service runtime select the recommended 'Node' option. OpenClaw will now install the gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18gfi85707r4j6fmsmdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18gfi85707r4j6fmsmdo.png" alt=" " width="800" height="249"&gt;&lt;/a&gt; After the gateway is installed, the agent will now be alive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s8g9knu59bjwg6sdf85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s8g9knu59bjwg6sdf85.png" alt=" " width="800" height="252"&gt;&lt;/a&gt; It will start asking questions such as what name to call it and what you want it to be. Provide the answers. This information is stored in a file called &lt;code&gt;BOOTSTRAP.md&lt;/code&gt;; this file defines what the agent is. Every time it starts, it reads this file to know/remember who it is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F956hwa8uf2a34v1nwdbv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F956hwa8uf2a34v1nwdbv.png" alt=" " width="800" height="220"&gt;&lt;/a&gt; It will then ask questions about you. This information is stored in &lt;code&gt;USER.md&lt;/code&gt;, which tells the agent who you, the user, are, e.g. what name to address you by and any other information you want it to know about you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepzaatqp0bki74gxtfkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepzaatqp0bki74gxtfkg.png" alt=" " width="800" height="319"&gt;&lt;/a&gt; It then proceeds to ask what behaviour you want it to have. This information is stored in a file called &lt;code&gt;SOUL.md&lt;/code&gt;, which gives the agent a character. I chose to stop there, as this information can also be added directly to the files themselves, i.e. &lt;code&gt;USER.md&lt;/code&gt;, &lt;code&gt;SOUL.md&lt;/code&gt;, &lt;code&gt;skills.md&lt;/code&gt; ... Type &lt;code&gt;/exit&lt;/code&gt; to leave the interactive CLI.&lt;/p&gt;
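&lt;p&gt;&lt;em&gt;To edit those identity files directly instead of answering the wizard, open them in any editor. The workspace path below is an assumption based on the default chosen earlier; substitute the directory you accepted during setup:&lt;/em&gt;&lt;/p&gt;

```shell
# Assumed default workspace location -- adjust to your own setup
cd ~/openclaw

# The agent's identity files are plain Markdown
nano SOUL.md   # the agent's character/behaviour
nano USER.md   # who you are and how the agent should address you
```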

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiz8aqtomkl5zaahwq1gr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiz8aqtomkl5zaahwq1gr.PNG" alt=" " width="800" height="689"&gt;&lt;/a&gt; To access the bot via Telegram, click the link provided by BotFather as shown here and it will take you to the chat with your bot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjh7rpszn08vdooocde3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjh7rpszn08vdooocde3.png" alt=" " width="800" height="260"&gt;&lt;/a&gt; After opening the chat with your bot, type &lt;code&gt;/start&lt;/code&gt; to start the bot. It will give you a pairing code to use to connect the bot to OpenClaw.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficrdi9lkv91ngm8kl00t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficrdi9lkv91ngm8kl00t.png" alt=" " width="800" height="82"&gt;&lt;/a&gt; Run the command &lt;code&gt;openclaw pairing approve telegram &amp;lt;your pairing code&amp;gt;&lt;/code&gt; to connect your Telegram bot to OpenClaw. You may get a "command not found" error, as I did; the next steps show how to get around it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmqggl4lemum9t9re9rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmqggl4lemum9t9re9rl.png" alt=" " width="800" height="666"&gt;&lt;/a&gt; Locate the &lt;code&gt;openclaw&lt;/code&gt; file on your host and navigate to that location. As shown, this is where my file was located.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4upuxrzcqcffyej86sjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4upuxrzcqcffyej86sjs.png" alt=" " width="800" height="254"&gt;&lt;/a&gt; From the location of your file, execute the command and the connection to Telegram will be approved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjgui666l8bhgcnezlns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjgui666l8bhgcnezlns.png" alt=" " width="800" height="394"&gt;&lt;/a&gt; And now you can chat with the bot from your Telegram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Freftmd2bwae1wnsoyzv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Freftmd2bwae1wnsoyzv1.png" alt=" " width="800" height="135"&gt;&lt;/a&gt; If you want to access OpenClaw via the web, run the command &lt;code&gt;ssh -N -L 18789:127.0.0.1:18789 user@serverIP&lt;/code&gt;. This command forwards the exposed port on our VPS to our local machine.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; The server and your local machine have to be on the same network. This is possible thanks to the Tailscale configuration we did earlier: by activating Tailscale on both the VPS server and your local machine, Tailscale creates a secure network and adds both machines to it. This provides security and lets your machine reach the VPS server seamlessly.&lt;/em&gt;&lt;/p&gt;
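&lt;p&gt;&lt;em&gt;Concretely, the port-forward looks like this; the username and Tailscale IP are placeholders for your own values:&lt;/em&gt;&lt;/p&gt;

```shell
# Forward the gateway port from the VPS to this machine over SSH.
# -N means "no remote shell, just forward"; Tailscale IPs fall in
# the 100.64.0.0/10 range.
ssh -N -L 18789:127.0.0.1:18789 youruser@100.101.102.103

# While that runs, the dashboard is reachable locally, e.g. open
# http://127.0.0.1:18789 in your browser.
```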

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7f9m9gzor5vzspshutf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7f9m9gzor5vzspshutf.png" alt=" " width="800" height="625"&gt;&lt;/a&gt; Now you are able to access openclaw on your machine, but this gives you an error. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje6ehr90hcapri52s2a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje6ehr90hcapri52s2a5.png" alt=" " width="800" height="208"&gt;&lt;/a&gt; To solve it, navigate to the &lt;code&gt;openclaw&lt;/code&gt; file on your host and run the command &lt;code&gt;./openclaw dashboard --no-open&lt;/code&gt;. OpenClaw will give you a URL with a valid token. Paste the URL into your web browser. You will be redirected to a Tailscale website and prompted to log in. Logging in authenticates your connection to OpenClaw using Tailscale (a security measure).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18jczz72y9c6k4fywi10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18jczz72y9c6k4fywi10.png" alt=" " width="800" height="446"&gt;&lt;/a&gt; Now you can access the OpenClaw UI from your web browser and configure your agent through a graphical interface instead of the CLI/terminal interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This guide demonstrated how to install OpenClaw on a Virtual Private Server, secure the instance with Tailscale, obtain an API key for an open-source AI model through the OpenRouter platform, and connect everything to a Telegram bot as the communication channel for the AI agent. With this foundation in place, the next step is to equip your OpenClaw agent with the skills it needs, enabling it to perform specific tasks and automate real workflows.&lt;/p&gt;

&lt;p&gt;This article served as an introduction to help you get started with OpenClaw and prepare you to build a more capable, customized AI agent. Happy building with OpenClaw, and enjoy creating an AI agent tailored to your own needs and ideas. &lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Connecting Power BI to PostgreSQL Database</title>
      <dc:creator>GeraldM</dc:creator>
      <pubDate>Sun, 15 Mar 2026 20:40:58 +0000</pubDate>
      <link>https://forem.com/geraldm/connecting-power-bi-to-postgresql-database-3eif</link>
      <guid>https://forem.com/geraldm/connecting-power-bi-to-postgresql-database-3eif</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Power BI is a business intelligence and data visualization tool developed by Microsoft that allows users to connect to multiple data sources, transform and model data, and create reports and dashboards.&lt;/p&gt;

&lt;p&gt;Power BI is used to perform data analysis and provide business intelligence by allowing analysts to import data from various sources such as databases, clean and transform the data, create relationships between tables, and perform calculations using DAX (Data Analysis Expressions).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why do organizations connect Power BI to databases?&lt;/strong&gt;&lt;br&gt;
Organizations connect Power BI directly to databases for reasons such as: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Databases store large volumes of data efficiently and connecting Power BI to these databases allows analysts to work with enterprise scale data.&lt;/li&gt;
&lt;li&gt;Connecting Power BI to a database provides data consistency ensuring that everyone in the organization works with the same data.&lt;/li&gt;
&lt;li&gt;Databases enable real-time or near real-time reporting: as new data arrives in the database, analyses and reports stay up to date.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Why is it important to have SQL databases for storing and managing analytical data?&lt;/strong&gt;&lt;br&gt;
SQL databases store structured data in tables with rows and columns and allow efficient querying; Power BI itself issues SQL queries to perform actions such as filtering, aggregation and joins.&lt;/p&gt;

&lt;p&gt;In this article, we will cover the process of connecting Power BI to a local PostgreSQL database (a database located on your local PC) and also connecting it to a cloud database (a database on a remote server) such as Aiven PostgreSQL.&lt;/p&gt;
&lt;h3&gt;
  
  
  Connecting Power BI to a Local PostgreSQL database
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Open Power BI Desktop&lt;/strong&gt;&lt;br&gt;
Once you have opened Power BI Desktop, click on Blank report, which will take you to the main page &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz60kto4qtmcs79uq78wj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz60kto4qtmcs79uq78wj.PNG" alt=" " width="800" height="448"&gt;&lt;/a&gt;&lt;br&gt;
On the main page, click on the Get Data icon. A popup window titled Get Data will appear, from here on the search bar type 'postgreSQL' to search for PostgreSQL database. Choose PostgreSQL database and click the connect button.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Enter Database connection Details&lt;/strong&gt;&lt;br&gt;
Another popup titled PostgreSQL database will appear prompting you to enter server, database and Data connectivity mode&lt;br&gt;
To locate my local PostgreSQL details, I will use a tool called DBeaver that visualizes my local database to get my database details.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsokvjbt922894raqcbc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsokvjbt922894raqcbc.PNG" alt=" " width="800" height="507"&gt;&lt;/a&gt;&lt;br&gt;
From this, I am able to get my server and database names&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2odx3zp24qlolu57ud3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2odx3zp24qlolu57ud3.PNG" alt=" " width="714" height="362"&gt;&lt;/a&gt;&lt;br&gt;
Fill in the details, leave the data connectivity mode as is, then click OK.&lt;/p&gt;
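&lt;p&gt;&lt;em&gt;If you prefer a terminal to DBeaver, the same server and database details can be confirmed with psql, assuming a default localhost install on port 5432 and the postgres superuser:&lt;/em&gt;&lt;/p&gt;

```shell
# List all databases on the local server (prompts for the password
# you set during PostgreSQL installation)
psql -h localhost -p 5432 -U postgres -c '\l'

# Show the connection details psql is using
psql -h localhost -p 5432 -U postgres -c '\conninfo'
```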

&lt;p&gt;&lt;strong&gt;Step 3: Authenticate to your Database&lt;/strong&gt;&lt;br&gt;
Here you enter your database username and the password that you created during the PostgreSQL installation.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy6grezyjk6xcroc12o2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy6grezyjk6xcroc12o2.PNG" alt=" " width="727" height="326"&gt;&lt;/a&gt;&lt;br&gt;
Under 'Select which level to apply these settings to', select 'localhost;postgres', then click Connect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Load Tables to Power BI&lt;/strong&gt;&lt;br&gt;
Power BI authenticates to the database using the credentials you provided, and upon a successful connection, a popup appears showing the existing tables in your database.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1qiwvgsbf7h0bnhkwza.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1qiwvgsbf7h0bnhkwza.PNG" alt=" " width="800" height="517"&gt;&lt;/a&gt;&lt;br&gt;
To load these tables into Power BI, mark the checkbox to the left of each table and then click the Load button. Alternatively, you can select Transform Data to clean or modify the data in the Power Query Editor before loading it.&lt;/p&gt;

&lt;p&gt;Once the tables finish loading, navigate to the Table view on the left of your screen to see the tables you have loaded.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ic0lw2r9ownp7vmcnp2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ic0lw2r9ownp7vmcnp2.PNG" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;br&gt;
To toggle through the different tables, use the pane on the right of your screen and click on the table you want to view.&lt;br&gt;
Now your data is successfully loaded into Power BI and you can begin performing analysis on it.&lt;/p&gt;
&lt;h3&gt;
  
  
  Connecting Power BI to a cloud PostgreSQL Database
&lt;/h3&gt;

&lt;p&gt;Not everyone hosts their databases locally; some prefer to host them in the cloud. In such cases, Power BI can also connect to those databases.&lt;br&gt;
I will use a cloud provider called Aiven to illustrate this connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Open Power BI&lt;/strong&gt;&lt;br&gt;
Relaunch Power BI, navigate to Get Data, choose PostgreSQL database, and click Connect.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiuwqwy91wtdlq7meu1v.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiuwqwy91wtdlq7meu1v.PNG" alt=" " width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Access Aiven&lt;/strong&gt;&lt;br&gt;
Login to your Aiven platform and navigate to your database instance.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs1g8thboq5ftyuq3txn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs1g8thboq5ftyuq3txn.PNG" alt=" " width="800" height="166"&gt;&lt;/a&gt; Click on the database to access your connection details which include: The host, port, database and username&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapciyy75bhfjwelt3zl5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapciyy75bhfjwelt3zl5.PNG" alt=" " width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Download the SSL CA certificate&lt;/strong&gt;&lt;br&gt;
From the details list provided, click the download icon next to the CA certificate to save it to a known location locally on your computer. &lt;/p&gt;
&lt;h4&gt;
  
  
  Why do we need the SSL certificate?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;The SSL certificate is used to establish secure communication between your cloud database and your Power BI application. It encrypts the data in transit so that, if a third party intercepts it, they cannot read its contents.&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdyrab9rpnfp6alpqwqa.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdyrab9rpnfp6alpqwqa.PNG" alt=" " width="800" height="60"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Enter Database Details&lt;/strong&gt;&lt;br&gt;
After locating your database details from Aiven, fill them into the PostgreSQL database popup window.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sbtm9s30sj0o091a4tq.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sbtm9s30sj0o091a4tq.PNG" alt=" " width="714" height="365"&gt;&lt;/a&gt; Fill in your server name (Host) and then your database name leave the 'Data connectivity mode' as is.&lt;br&gt;
When entering the server name of a cloud server, we enter the name and the port separated by a colon&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;your-service-project.aivencloud.com:port
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
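&lt;p&gt;&lt;em&gt;As an optional sanity check, the same details can be tested outside Power BI. The sketch below is illustrative: the host, port, credentials, and file name are placeholders, and the commented-out connection assumes the psycopg2 package. Its main point is that &lt;code&gt;sslmode=verify-ca&lt;/code&gt; validates the server against the downloaded ca.pem, the same trust check Power BI performs via the Windows certificate store.&lt;/em&gt;&lt;/p&gt;

```python
# Build a libpq-style connection string for a cloud PostgreSQL service.
# All values below are placeholders -- substitute the ones shown on
# your Aiven service page.
def build_dsn(host, port, dbname, user, password, ca_path):
    # sslmode=verify-ca makes the client validate the server's
    # certificate against the downloaded CA file (ca.pem).
    return (
        f"host={host} port={port} dbname={dbname} "
        f"user={user} password={password} "
        f"sslmode=verify-ca sslrootcert={ca_path}"
    )

dsn = build_dsn("your-service-project.aivencloud.com", 12345,
                "defaultdb", "avnadmin", "your-password", "ca.pem")
# To actually connect (requires psycopg2):
#   import psycopg2; conn = psycopg2.connect(dsn)
```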



&lt;p&gt;After entering your details, click 'OK'; you will then be prompted to enter your username and password for authentication.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf1p8h7tp55z2f6vvnub.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf1p8h7tp55z2f6vvnub.PNG" alt=" " width="739" height="334"&gt;&lt;/a&gt;&lt;br&gt;
Enter your username and password, then click Connect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Error&lt;/strong&gt;&lt;br&gt;
When you click Connect, you will get an error. This error occurs because Power BI cannot locate the CA certificate we downloaded. We have to add it to a location where Power BI can access it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2dpqzica8h5kld263uc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2dpqzica8h5kld263uc.PNG" alt=" " width="535" height="250"&gt;&lt;/a&gt;&lt;br&gt;
To resolve the error, press the Windows + R keys on your keyboard and type the following:&lt;br&gt;
&lt;code&gt;certmgr.msc&lt;/code&gt;&lt;br&gt;
Press Enter; this opens certmgr, which manages certificates on your PC.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt4ssiat4qbtrk0eh8cr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt4ssiat4qbtrk0eh8cr.PNG" alt=" " width="800" height="302"&gt;&lt;/a&gt; Select 'Trusted Root Certification Authorities' and on the left side single click on 'Certificates' to chose select it.&lt;br&gt;
Right click on it and chose 'All task' then 'Import'&lt;br&gt;
A popup window will appear and you will be asked to choose the file you want to import. Click on browse to navigate to where you stored the ca.pem file we downloaded. &lt;em&gt;If you navigate to the location you stored it and it's not appearing, under file type, change to all file and it will appear&lt;/em&gt;. Select the file by double clicking on it&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhwadvmuhl3ombxubwqm.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhwadvmuhl3ombxubwqm.PNG" alt=" " width="800" height="515"&gt;&lt;/a&gt; The file path will appear on the popup and then click on next to continue.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F896cburjr093o8gpoumo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F896cburjr093o8gpoumo.PNG" alt=" " width="800" height="469"&gt;&lt;/a&gt; After the certificate has been verified, click finish and to import the certificate.&lt;br&gt;
Then restart Power BI for the changes to take effect.&lt;br&gt;
After restarting Power BI and repeating the connection steps above, the connection will succeed and we will see the tables in our database ready for loading.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgglhk0paqoofhhd6esmt.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgglhk0paqoofhhd6esmt.PNG" alt=" " width="800" height="481"&gt;&lt;/a&gt; Select the tables you want to load and click the load button.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Relationships between tables
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What are Relationships?&lt;/strong&gt;&lt;br&gt;
Relationships between tables define how data stored in different tables is connected. The relationships are typically created using keys such as primary keys and foreign keys. &lt;em&gt;For example, in our assignment customers table (see image below) we have the customer information, while in the assignment sales table we have records of transactions. The assignment sales table can include a foreign key referencing the customer ID from the assignment customers table, linking each order to a specific customer.&lt;/em&gt; This structure allows databases to avoid duplication of data while maintaining logical connections between datasets, making it easier for analytics tools such as Microsoft Power BI to combine and analyze data from multiple tables.&lt;/p&gt;

&lt;p&gt;Power BI automatically detects relationships between tables based on matching columns and establishes the primary and foreign keys.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihucpq5gsjbeladyon75.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihucpq5gsjbeladyon75.PNG" alt=" " width="800" height="448"&gt;&lt;/a&gt; To toggle the relationships view, click on the shown icon.&lt;br&gt;
From our image, we can see that Power BI has already detected the primary and foreign keys in our tables and established relationships from them.&lt;br&gt;
We can edit the relationships by double clicking on the line joining the tables.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkkicw2kd239c0c8aa9j.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkkicw2kd239c0c8aa9j.PNG" alt=" " width="800" height="433"&gt;&lt;/a&gt; After editing your the relationship between the tables, click on save to save the new relationship.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why are SQL skills important for a Power BI analyst?&lt;/strong&gt;&lt;br&gt;
SQL skills are essential for Power BI analysts because most organizational data is stored in relational databases that use SQL. Although Power BI provides graphical tools for building reports, analysts often need SQL to directly access and manipulate data in the source database. These SQL skills enable analysts to explore database structures, understand relationships between tables, and write efficient queries that retrieve only the data required for analysis. This improves performance and ensures that the datasets imported into Power BI are accurate, relevant and optimized for reporting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How SQL helps analysts perform analysis on Data before building visualizations.&lt;/strong&gt;&lt;br&gt;
Data analysts use SQL queries to retrieve specific records, filter datasets using conditions, perform aggregations such as sums, averages, and counts, and join multiple tables to combine related data. After performing this analysis, they visualize the results using dashboards. &lt;em&gt;For example, an analyst might write an SQL query to calculate the total sales for products grouped by categories to get a view of the best performing product categories.&lt;/em&gt; By performing these transformations at the database level, analysts reduce the amount of processing required inside Power BI and create cleaner, well-structured datasets that are easier to visualize and analyze in dashboards.&lt;/p&gt;
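&lt;p&gt;&lt;em&gt;The "total sales per category" example can be made concrete. The sketch below runs the aggregation with Python's built-in sqlite3 module on invented sample data; the table names and values are illustrative, not from the assignment database:&lt;/em&gt;&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (product_id INTEGER PRIMARY KEY,
                           category TEXT);
    CREATE TABLE sales (product_id INTEGER, amount REAL);
    INSERT INTO products VALUES (1, 'Books'), (2, 'Books'), (3, 'Toys');
    INSERT INTO sales VALUES (1, 20.0), (2, 35.0), (3, 10.0), (1, 5.0);
""")
# Aggregate at the database level so Power BI receives a small,
# analysis-ready result set instead of the raw sales rows.
totals = conn.execute("""
    SELECT p.category, SUM(s.amount) AS total_sales
    FROM sales s
    JOIN products p ON p.product_id = s.product_id
    GROUP BY p.category
    ORDER BY total_sales DESC
""").fetchall()
print(totals)  # [('Books', 60.0), ('Toys', 10.0)]
```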

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article we have covered what Microsoft Power BI is and how it is used in organizations. We've gone through how to connect Power BI to local and cloud PostgreSQL databases, load the tables from the chosen database, and see how relationships between those tables are established in the tool. Finally, we looked at why SQL skills are important to any analyst using Power BI and the benefits of using SQL alongside Power BI. &lt;br&gt;
With this, we can now become better analysts, able to work with both local and cloud PostgreSQL databases in Power BI. Additionally, we can now work more efficiently in Power BI by utilizing SQL queries.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>postgres</category>
      <category>postgressql</category>
    </item>
    <item>
      <title>Understanding Joins and Window Functions in SQL</title>
      <dc:creator>GeraldM</dc:creator>
      <pubDate>Sun, 01 Mar 2026 21:07:56 +0000</pubDate>
      <link>https://forem.com/geraldm/understanding-joins-and-window-functions-in-sql-4mdk</link>
      <guid>https://forem.com/geraldm/understanding-joins-and-window-functions-in-sql-4mdk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article, we will explore two of the most powerful and widely used features in SQL: JOINs and Window Functions. We will begin by understanding what they are and how they work, and then walk through practical examples to see when, where, and why they are used in real-world scenarios.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Let's start with joins:&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Joins?
&lt;/h2&gt;

&lt;p&gt;In Structured Query Language (SQL), a join is a clause used to combine rows from two or more tables based on a related column between them. The purpose of a join is to bring data that is spread across multiple tables into a single result set, providing a unified view. &lt;br&gt;
&lt;em&gt;Example: In a database with an orders and a customers table, a join can be used to answer questions such as which customer placed an order.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Types of Joins
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. INNER JOIN
&lt;/h3&gt;

&lt;p&gt;It combines two or more tables based on a specified common column with matching values and only returns the set of records that have a match in all the involved tables. The rows that don't have a match in the other table(s) are excluded from the result set.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae2ir3jsh904aeoz0832.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae2ir3jsh904aeoz0832.png" alt=" " width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvfen55qnr09hr1puur4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvfen55qnr09hr1puur4.png" alt=" " width="774" height="296"&gt;&lt;/a&gt; Above, are tables customers and orders&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select  first_name, last_name
from article.customers
inner join article.orders on customers.customer_id = orders.customer_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pcgw1xz55cc7l3vg9vs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pcgw1xz55cc7l3vg9vs.png" alt=" " width="800" height="284"&gt;&lt;/a&gt; Using INNER JOIN based on the specified column customer_id on both the customers and orders table, we are able to join the tables, getting a result of only the records which have customer_ids on both tables.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. LEFT JOIN
&lt;/h3&gt;

&lt;p&gt;It retrieves all rows from the left (first) table and matching rows from the right (second) table. If a row in the left table has no corresponding match in the right table based on the join condition, the result will contain NULL values for the columns of the right table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
Using our previous tables&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--LEFT JOIN
select first_name, last_name, order_date
from article.customers
left join article.orders on customers.customer_id = orders.customer_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e4n4bw8lmn3m0x5m0ji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e4n4bw8lmn3m0x5m0ji.png" alt=" " width="800" height="358"&gt;&lt;/a&gt; We perform a left join on the orders table with the customers table based on the customer_id condition. The query returns all the customers but some with the NULL value on the date column as they are not on the orders table(they have never made an order).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The left join is also known as the left outer join.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. RIGHT JOIN
&lt;/h3&gt;

&lt;p&gt;It retrieves all rows from the right (second) table and matching rows from the left (first) table. If a row in the right table has no corresponding match in the left table based on the join condition, the result will contain NULL values for the columns of the left table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--RIGHT JOIN
select first_name, last_name, order_date
from article.customers
right join article.orders on customers.customer_id = orders.customer_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygu8g2mniirzm59nto8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygu8g2mniirzm59nto8w.png" alt=" " width="800" height="278"&gt;&lt;/a&gt; Now, on performing the RIGHT JOIN on our tables, we get a result of all the rows from the right table (orders) that have a match on the left table (customers). Compared to the LEFT JOIN, we lose the NULL values as those rows don't have match in the left table (customers).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The right join is also known as the right outer join.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. FULL OUTER JOIN
&lt;/h3&gt;

&lt;p&gt;It returns all rows from both the left and right tables, combining matching records and using NULL values for columns where there is no match.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- FULL OUTER JOIN
select first_name, last_name, email, phone_number,order_id, order_date, book_id
from article.customers
full outer join article.orders on customers.customer_id = orders.customer_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqyl7t4qsubixh1hzjw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqyl7t4qsubixh1hzjw5.png" alt=" " width="800" height="307"&gt;&lt;/a&gt; When we perform a FULL OUTER JOIN on the customers and orders tables, the SQL query returns all the rows from both tables and filling NULL values for there is no match.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Now, let's look at window functions:&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Window Functions?
&lt;/h2&gt;

&lt;p&gt;In SQL, window functions perform calculations across a set of table rows related to the current row without merging those rows into a single output value. Unlike traditional aggregate functions such as SUM() and COUNT(), which reduce multiple rows to one, window functions return a value for each row in the original result set. &lt;br&gt;
They operate over a &lt;strong&gt;window&lt;/strong&gt; (a specific set of rows) defined by the OVER() clause.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupjbzci1juv5qzun1757.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupjbzci1juv5qzun1757.png" alt=" " width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Components of SQL Window Functions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Select:&lt;/strong&gt; This defines the columns you want to select from the table (&lt;em&gt;the columns you select create your window&lt;/em&gt;).&lt;br&gt;
&lt;strong&gt;Function:&lt;/strong&gt; This is the window function you want to use.&lt;br&gt;
&lt;strong&gt;Over Clause:&lt;/strong&gt; This defines the partitioning and ordering of rows and can be combined with functions to compute aggregated values.&lt;br&gt;
&lt;strong&gt;Partition by:&lt;/strong&gt; This divides rows into partitions based on specified expressions, which makes large datasets simpler to manage.&lt;br&gt;
&lt;strong&gt;Order by:&lt;/strong&gt; This defines the order in which rows are processed within each partition.&lt;br&gt;
&lt;strong&gt;Output column:&lt;/strong&gt; This is the name you give to your output column.&lt;/p&gt;

&lt;h3&gt;
  
  
  Types of window functions in SQL
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Aggregate window functions:&lt;/strong&gt; They calculate aggregates over a window of rows while retaining the individual rows.&lt;br&gt;
&lt;strong&gt;2. Ranking window functions:&lt;/strong&gt; They rank rows within a partition based on specific criteria.&lt;br&gt;
&lt;strong&gt;3. Value window functions:&lt;/strong&gt; They assign values from other rows to the current row. Their results can usually be replicated with two nested queries, so they are less common than aggregate and ranking window functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegpkzyydwletzk1a8trz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegpkzyydwletzk1a8trz.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;We have a table employees containing information of different employees from different departments.&lt;/p&gt;

&lt;p&gt;Let's use the ranking window function RANK() (which assigns the same rank to tied rows and skips the following ranks), an OVER() clause to define the partitioning and ordering of rows using PARTITION BY and ORDER BY, and then define an output column that will contain our results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahlomi3idlowjrvqfehu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahlomi3idlowjrvqfehu.png" alt=" " width="800" height="637"&gt;&lt;/a&gt;In our table, we have used the RANK() function to rank employees by department. We have applied (PARTITION BY department) to partition our results in the different departments in our table,  (ODER BY salary DESC) to order the results by descending salary value then we have output our RANK() using rank_by_salary column.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxinoreta7zy80s9bqae9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxinoreta7zy80s9bqae9.png" alt=" " width="800" height="625"&gt;&lt;/a&gt;After the ranking of the Finance department is over, we can see a new ranking starting for the HR department.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d7kavg2o2x6vv842lej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d7kavg2o2x6vv842lej.png" alt=" " width="800" height="739"&gt;&lt;/a&gt;Here we are now using the aggregate window function AVG() to get the average salary for each department and comparing it to each of the employees salary in different departments&lt;/p&gt;

&lt;p&gt;To learn more about other window functions, see &lt;a href="https://www.geeksforgeeks.org/sql/window-functions-in-sql/" rel="noopener noreferrer"&gt;Window functions&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;From this article, we have gone through and understood what joins and window functions are. Through clear explanations and practical, real-world examples, you've learned what they are, the different types available, how they work, and exactly how to write clean, efficient SQL queries to achieve your desired results. Mastering these powerful tools will dramatically improve your ability to analyze, transform and retrieve data with precision. Start applying them in your own projects today; the more you practice, the more natural they will feel.&lt;br&gt;
&lt;strong&gt;Happy querying!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>sql</category>
      <category>postgres</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How Analysts Translate Messy Data, DAX, and Dashboards into Action Using Power BI</title>
      <dc:creator>GeraldM</dc:creator>
      <pubDate>Sun, 08 Feb 2026 11:45:10 +0000</pubDate>
      <link>https://forem.com/geraldm/how-analysts-translate-messy-data-dax-and-dashboards-into-action-using-power-bi-553o</link>
      <guid>https://forem.com/geraldm/how-analysts-translate-messy-data-dax-and-dashboards-into-action-using-power-bi-553o</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Today's world is driven by data; as the saying goes, "data is the new gold". Businesses generate vast amounts of data from various sources, often in disorganized, "messy" forms that are overwhelming and not meaningful on their own. For this data to yield meaningful insights that enable decision making, it needs to be transformed. Microsoft Power BI is a powerful tool that enables users to do exactly that by cleaning, analyzing and visualizing data efficiently. This article covers how analysts leverage Power BI to handle messy data, utilize Data Analysis Expressions (DAX) and create dashboards that convert complex data sets into actionable information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Messy Data?&lt;/strong&gt;&lt;br&gt;
Messy data refers to raw, unstructured or inconsistent datasets that often contain errors, duplicates, missing values or inconsistent formats, making them challenging to work with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsu8cwtrspo7c9qx87yzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsu8cwtrspo7c9qx87yzx.png" alt=" " width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is DAX?&lt;/strong&gt;&lt;br&gt;
Data Analysis Expressions (DAX) is the formula language used in Power BI to create calculated columns, measures and queries that enhance data models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbq3xecwdoqsft3vdxndr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbq3xecwdoqsft3vdxndr.png" alt=" " width="800" height="39"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are Dashboards?&lt;/strong&gt;&lt;br&gt;
Dashboards are visual interfaces that present key metrics and trends using charts, tables and Key Performance Indicators (KPIs), allowing users to quickly understand the data and make informed decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrntcwtqrwe1x64qv1bo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrntcwtqrwe1x64qv1bo.png" alt=" " width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Transforming messy data into actionable insights
&lt;/h3&gt;

&lt;p&gt;How does Power BI do this? Let's break it down into five steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Cleaning and preparing messy data&lt;/strong&gt;&lt;br&gt;
The process begins with importing and cleaning messy data. An analyst starts by connecting Power BI to various sources such as Excel files, databases or web APIs. Once the data is imported, Power BI's built-in Power Query tool takes over. With Power Query, an analyst can transform data by removing blanks and missing values, standardizing date, currency and text formats, and removing duplicates and irrelevant columns. This step is critical as it ensures that the data is reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Modeling data&lt;/strong&gt;&lt;br&gt;
Once data is cleaned, it needs to be structured correctly. This is achieved by creating relationships between tables and utilizing schemas such as star or snowflake (to learn more, see &lt;a href="https://dev.to/geraldm/understanding-schemas-and-data-modeling-in-power-bi-1okk"&gt;Schemas&lt;/a&gt;). A strong data model makes DAX simpler, dashboards faster and insights more accurate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Utilizing DAX to add intelligence&lt;/strong&gt;&lt;br&gt;
Here, Data Analysis Expressions (DAX) come into play. DAX formulas allow analysts to create the calculated columns, measures and tables needed to perform advanced operations. With DAX, analysis comes alive by answering questions like &lt;em&gt;How does this month compare against the previous one?&lt;/em&gt; or &lt;em&gt;How does a change in revenue affect profit?&lt;/em&gt; These expressions enable dynamic filtering and context-aware calculations, turning static data into responsive insights that adapt to user interactions.&lt;/p&gt;
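&lt;p&gt;As a small taste of what this looks like, here is a minimal DAX sketch for the month-over-month question. The Sales table, its Amount column and the related Date table are hypothetical, and the measure names are just examples; SUM, CALCULATE, PREVIOUSMONTH and DIVIDE are standard DAX functions:&lt;/p&gt;

```dax
Total Sales = SUM ( Sales[Amount] )

Sales Previous Month =
CALCULATE ( [Total Sales], PREVIOUSMONTH ( 'Date'[Date] ) )

MoM Change % =
DIVIDE ( [Total Sales] - [Sales Previous Month], [Sales Previous Month] )
```

&lt;p&gt;Because measures are evaluated in filter context, the same three formulas answer the question for any slicer selection, region or product the user clicks on.&lt;/p&gt;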

&lt;p&gt;&lt;strong&gt;4. Building dashboards that influence action&lt;/strong&gt;&lt;br&gt;
Dashboards visualize the data. With Power BI, an analyst can present data in many formats: bar charts, line graphs, cards, slicers, tables and maps. Using the report view, analysts can build effective, efficient and interactive reports. Interactive dashboards help users identify patterns and opportunities quickly.&lt;/p&gt;

&lt;p&gt;But are all reports effective and efficient? The answer is no. So, what makes a good dashboard?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A good dashboard should answer business questions, not just visualize data.&lt;/li&gt;
&lt;li&gt;Use consistent colors and layouts, and avoid clutter.&lt;/li&gt;
&lt;li&gt;Create room for interactivity through filters, slicers and drill-downs.&lt;/li&gt;
&lt;li&gt;Use clear key performance indicators (KPIs), trends and comparisons.&lt;/li&gt;
&lt;li&gt;Make your dashboards fast. Nobody is going to use a dashboard that takes too long to load.&lt;/li&gt;
&lt;li&gt;Lastly, avoid over-engineering: don't create dashboards that are too complex to understand.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0owiz8qifrfd3gfopkk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0owiz8qifrfd3gfopkk.png" alt=" " width="800" height="476"&gt;&lt;/a&gt; &lt;em&gt;Fig: An example of a good dashboard. Allows for interactivity, a consistent layout, not cluttered and answers business questions&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1vkg95zz5vor78ug3t8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1vkg95zz5vor78ug3t8.png" alt=" " width="800" height="401"&gt;&lt;/a&gt; &lt;em&gt;Fig: An example of an over-engineered, messy Power BI dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Translating insights into decisions&lt;/strong&gt;&lt;br&gt;
Lastly, the success of Power BI lies in action and decision making. To achieve this, an analyst should include annotations and tooltips to explain insights, align dashboards with business goals, and continuously refine dashboards based on feedback and new data. With this, insights will not stay in reports; rather, they will be key in influencing decisions, actions and outcomes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Power BI is more than a reporting tool; it is a key component in business success. By cleaning messy data, applying DAX and designing effective, efficient dashboards, analysts translate raw data into information that drives insightful action. A good analyst is one who masters and leverages Power BI to deliver actionable insights from raw, messy data.&lt;/p&gt;

</description>
      <category>powerbi</category>
      <category>datascience</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Understanding Schemas and Data modeling in Power BI</title>
      <dc:creator>GeraldM</dc:creator>
      <pubDate>Sun, 01 Feb 2026 20:07:18 +0000</pubDate>
      <link>https://forem.com/geraldm/understanding-schemas-and-data-modeling-in-power-bi-1okk</link>
      <guid>https://forem.com/geraldm/understanding-schemas-and-data-modeling-in-power-bi-1okk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Power BI is a business intelligence and data visualization tool developed by Microsoft that allows users to connect to multiple data sources, transform and model data, and create reports and dashboards. In this article, we will go through the various schemas and data models in Power BI.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Let's begin with Schemas&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a schema?
&lt;/h2&gt;

&lt;p&gt;In Power BI, a schema refers to the structure and organization of data within a data model: it defines how the data is connected and related. By understanding schemas, we can design data models that enable comprehensive analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Schemas in Power BI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Star Schema
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farxz0jeo7w97bspmfekq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farxz0jeo7w97bspmfekq.webp" alt=" " width="696" height="564"&gt;&lt;/a&gt;&lt;br&gt;
A star schema consists of one central fact table surrounded by dimension tables, forming a star-like pattern. It is a simple and commonly used schema in data warehousing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A fact table contains measurable data, like Sales, while a dimension table holds descriptive attributes related to the facts, like Customers and Date.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Star schemas are ideal for straightforward reporting and querying making them suitable for dashboards and summary reports.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Snowflake Schema
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwe29syld8j2mzmsq918.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwe29syld8j2mzmsq918.jpg" alt=" " width="740" height="571"&gt;&lt;/a&gt;&lt;br&gt;
A Snowflake schema is an extension of a star schema where dimension tables are further divided into sub-dimension tables (multiple related tables), creating a structure that looks like a snowflake. &lt;br&gt;
&lt;em&gt;For example: a Products dimension table can split into Category and Brand tables.&lt;/em&gt;&lt;br&gt;
This splitting reduces data redundancy, which saves storage, at the cost of more tables and joins.&lt;/p&gt;

&lt;p&gt;Snowflake schemas are ideal in scenarios where storage optimization is critical and a detailed data model is a requirement.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Galaxy Schema (Fact Constellation Schema)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6tw7rwl88lxnp2y13ht.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6tw7rwl88lxnp2y13ht.jpg" alt=" " width="740" height="350"&gt;&lt;/a&gt;&lt;br&gt;
A Galaxy schema, also known as a fact constellation schema, involves multiple fact tables that share dimension tables, forming a complex, interconnected, constellation-like data model.&lt;/p&gt;

&lt;p&gt;Galaxy schemas are ideal for large-scale enterprise environments where analytics across multiple interrelated business processes is required, for example analyzing finance and operations together.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to implement Schemas in Power BI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  a.  Star Schema
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Set up fact and dimension tables:&lt;/strong&gt; Identify and create a central fact table and its surrounding dimension tables. For example: a Sales table as the fact table, surrounded by Product and Customer tables as dimension tables, since these provide descriptive information about each sale.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Link tables:&lt;/strong&gt; Establish links between the fact table and the dimension tables using foreign keys. For example, the Sales table is linked to the Customer table through the customer ID foreign key.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
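&lt;p&gt;The two steps above can be sketched in plain SQL, run here through Python's built-in sqlite3 module (all table and column names are illustrative, not prescribed by Power BI):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables hold descriptive attributes
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT);

    -- The central fact table references each dimension by foreign key
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES dim_customer (customer_id),
        product_id  INTEGER REFERENCES dim_product (product_id),
        amount      REAL
    );
""")

conn.execute("INSERT INTO dim_customer VALUES (1, 'Acme Ltd')")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget')")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, 1, 250.0)")

# A query reaches descriptive attributes by joining the fact table
# to its dimensions through the foreign keys
row = conn.execute("""
    SELECT c.name, p.name, f.amount
    FROM fact_sales f
    JOIN dim_customer c USING (customer_id)
    JOIN dim_product  p USING (product_id)
""").fetchone()
print(row)  # ('Acme Ltd', 'Widget', 250.0)
```

&lt;p&gt;In Power BI itself you would create these relationships visually in Model view, but the underlying idea, one fact table keyed to its dimensions, is the same.&lt;/p&gt;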

&lt;h3&gt;
  
  
  b. Snowflake Schema
&lt;/h3&gt;

&lt;p&gt;1. &lt;strong&gt;Normalize dimension tables:&lt;/strong&gt; Normalize the dimension tables by splitting them into related sub-tables. For example, splitting the Product table into Category and Brand tables.&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;Create relationships:&lt;/strong&gt; Define the relationships between sub-tables and the main tables while maintaining referential integrity. For example, the Customer table can be split into City and Region tables, which are related back to the Customer table through the city ID field.&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;Optimize storage:&lt;/strong&gt; Use appropriate storage and indexing strategies to manage complex joins efficiently.&lt;/p&gt;
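&lt;p&gt;A minimal sketch of step 1, again via Python's sqlite3 (the Product and Category names are illustrative): the category name is stored once in its own sub-dimension and reached through a join, instead of being repeated on every product row.&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Snowflaked dimension: Product no longer repeats category names;
    -- it references a separate Category sub-dimension instead
    CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_product (
        product_id  INTEGER PRIMARY KEY,
        name        TEXT,
        category_id INTEGER REFERENCES dim_category (category_id)
    );
""")
conn.execute("INSERT INTO dim_category VALUES (1, 'Hardware')")
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, 'Widget', 1), (2, 'Gadget', 1)])

# The category name 'Hardware' is stored once, then reached via a join
rows = conn.execute("""
    SELECT p.name, c.name
    FROM dim_product p
    JOIN dim_category c USING (category_id)
    ORDER BY p.product_id
""").fetchall()
print(rows)  # [('Widget', 'Hardware'), ('Gadget', 'Hardware')]
```

&lt;p&gt;This is exactly the redundancy-for-joins trade-off described above: less repeated data, one more join per query.&lt;/p&gt;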

&lt;h3&gt;
  
  
  c. Galaxy schema
&lt;/h3&gt;

&lt;p&gt;1. &lt;strong&gt;Identify fact tables:&lt;/strong&gt; Determine the fact tables needed for the various business processes. For example, a Sales fact table and a Shipping fact table.&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;Identify shared dimension tables:&lt;/strong&gt; Determine the shared dimension tables that link the fact tables. For example, the Sales and Shipping fact tables can be linked by a Product dimension table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Modeling
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is Data Modeling?&lt;/strong&gt; &lt;br&gt;
A data model is the structure of the data that we create for a business, while data modeling is the process of structuring and organizing data from various sources into a coherent semantic data model: defining tables, establishing relationships between them, creating calculated columns and measures using Data Analysis Expressions (DAX), setting hierarchies and optimizing for performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Data Models
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Conceptual data modeling
&lt;/h3&gt;

&lt;p&gt;Conceptual data models offer a big picture of what the system will contain, how it will be organized and which business rules are involved. Conceptual data modeling answers the following questions: What data do we need? What does the business desire? What data do we have access to? Where can we find this data?&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Logical data modeling
&lt;/h3&gt;

&lt;p&gt;Logical data models provide greater detail about the concepts and relationships in the domain under consideration. They include facts (events, e.g. a purchase), dimensions (actors, e.g. customers) and the relationships between them.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Physical Data modeling.
&lt;/h3&gt;

&lt;p&gt;Physical data models provide a schema for how data will be physically stored within a database. They offer a finalized design that can be implemented as a relational database. Physical data models answer the following questions:&lt;br&gt;
What are the columns of your data? What are their data types? How are you storing this data? How can you compress this data to make it smaller?&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Modeling Process
&lt;/h2&gt;

&lt;p&gt;The following is a sequence of tasks to be performed iteratively, building a workflow for creating data models.&lt;br&gt;
&lt;strong&gt;1. Identify entities:&lt;/strong&gt; The data modeling process begins with identifying the things, events or concepts (entities) represented in the data set to be modeled.&lt;br&gt;
&lt;strong&gt;2. Identify key properties of each entity:&lt;/strong&gt; Each entity can be differentiated from others because it has one or more unique properties, called attributes. For example, an entity called “customer” might possess attributes such as a first name, last name and telephone number, while an entity called “address” might include a street name and number, a city, state, country and zip code.&lt;br&gt;
&lt;strong&gt;3. Identify relationships among entities:&lt;/strong&gt; Specify the nature of the relationship each entity has with the others. For example, the relationship between a customer and an address is that the customer lives at the address.&lt;br&gt;
&lt;strong&gt;4. Map attributes to entities completely:&lt;/strong&gt; This ensures that the model reflects how the business will use the data.&lt;br&gt;
&lt;strong&gt;5. Assign keys as needed, and decide on a degree of normalization that balances the need to reduce redundancy with performance requirements.&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Normalization is a technique of organizing data models in which numerical identifiers, called keys, are assigned to groups of data to represent relationships between them without repeating the data.&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;6. Finalize and validate the data model:&lt;/strong&gt; Data modeling is an iterative process that should be repeated and refined as business needs change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Data Modeling
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Hierarchical data modeling&lt;/strong&gt;&lt;br&gt;
In this model, each record has a single root parent which maps to one or more child tables, creating a tree-like format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Relational data modeling&lt;/strong&gt;&lt;br&gt;
In this model, data segments are explicitly joined through the use of tables, reducing database complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Entity-relationship (ER) data modeling&lt;/strong&gt;&lt;br&gt;
Formal diagrams are used to represent the relationships between entities in a database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Object-oriented data modeling&lt;/strong&gt;&lt;br&gt;
Objects are grouped in class hierarchies and have associated features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Dimensional data modeling&lt;/strong&gt;&lt;br&gt;
Designed to optimize data retrieval speeds for analytic purposes in a data warehouse. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why perform Data Modeling?&lt;/strong&gt;&lt;br&gt;
Data modeling makes it easier for developers, data architects, business analysts, and other stakeholders to view and understand relationships among the data in a database or data warehouse.&lt;/p&gt;

&lt;p&gt;In addition to that, data modeling can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduce errors in software and database development&lt;/li&gt;
&lt;li&gt;Improve application and database performance.&lt;/li&gt;
&lt;li&gt;Ease data mapping throughout the organization.&lt;/li&gt;
&lt;li&gt;Improve communication between developers and business intelligence teams.&lt;/li&gt;
&lt;li&gt;Ease and speed the process of database design at the conceptual, logical and physical levels.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we have gone through schemas and data modeling. Understanding different schemas in Power BI is crucial for designing efficient data models. Each schema has unique advantages and by choosing the right schema, you can improve query performance, data storage efficiency and data refresh operations. By mastering these schemas, you can create robust and scalable data models, enabling your organization to make data-driven decisions effectively.&lt;/p&gt;

&lt;p&gt;Data modeling is a critical foundation for effective data management and analytics as it provides a clear and structured way to organize data in alignment with business needs. By progressing through conceptual, logical and physical models, organizations can move from high-level business requirements to detailed, implementable database designs.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>beginners</category>
      <category>data</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>Introduction to Linux for Data Engineers, Including Practical Use of Vi and Nano with Examples</title>
      <dc:creator>GeraldM</dc:creator>
      <pubDate>Sun, 25 Jan 2026 07:31:17 +0000</pubDate>
      <link>https://forem.com/geraldm/introduction-to-linux-for-data-engineers-including-practical-use-of-vi-and-nano-with-examples-1ian</link>
      <guid>https://forem.com/geraldm/introduction-to-linux-for-data-engineers-including-practical-use-of-vi-and-nano-with-examples-1ian</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction to Linux for Data Engineers, Including Practical Use of Vi and Nano with Examples&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Is Linux Important to Data Engineers?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Linux is a key pillar of data engineering because it provides the foundation for nearly all modern data platforms, offering the stability, performance and tooling needed to build and operate data pipelines. Most data engineering tools, such as cloud services, databases and platforms like Kafka, run natively on Linux. Linux's powerful command line, scripting capabilities and process management enable building, automating, monitoring and troubleshooting data workflows. As a data engineer, mastering Linux allows you to work closer to the systems that store and move data, leading to faster debugging, better performance and more resilient data pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intro to Linux&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
As a data engineer, you will be interacting with servers a lot. And where will these servers be located? On the cloud (remote location).&lt;/p&gt;

&lt;p&gt;Let’s start here: &lt;strong&gt;What is a server?&lt;/strong&gt; We can describe it as a computer or computing device dedicated to providing a specific service. Most of the time these devices have high computing power.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Cloud in computing?&lt;/strong&gt; Cloud is a collection of many servers that provide computing resources over the internet, while cloud services are the platforms that give access to these resources. The companies that offer them are known as &lt;strong&gt;cloud service providers (CSPs)&lt;/strong&gt;. Examples of major cloud service providers include &lt;strong&gt;Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;“&lt;em&gt;For example: you are working on a project on your computer and you realize that it does not have enough storage space and computing power to complete the project, which will take three months. To solve this, you could purchase a new computer with better storage and computing power. Would it be wise to do so? No. Instead, you go to a friend and explain your problem. Your friend has a computer with the resources you need, but he is also using it. He proposes to create an account for you on this computer, which you can access over the internet since it is located at his house, and to delete your account once your project is done. Accessing your friend's computer over the internet and using it to do your project, that is cloud computing. Only that it comes at a fee.&lt;/em&gt;”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Basic Linux commands&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
What is a directory? A directory is the Linux equivalent of a folder on Windows.&lt;/p&gt;

&lt;p&gt;The following are some basic Linux commands to get you started:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;pwd&lt;/code&gt; – Shows current directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Displays the full path of the directory you’re currently in.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pwd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Example output:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgku6f0k3o6pfdk0heol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgku6f0k3o6pfdk0heol.png" alt=" " width="530" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;mkdir&lt;/code&gt; – Create a directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Creates a new folder.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mkfuupexst68x1dazko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mkfuupexst68x1dazko.png" alt=" " width="515" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;cd&lt;/code&gt; – Change directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Move between directories.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd DataEng&lt;/code&gt;: we can now navigate to the directory we have created&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8s7nflayfy3g7vbsuae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8s7nflayfy3g7vbsuae.png" alt=" " width="492" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd ..&lt;/code&gt;: takes you back to the parent directory&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgwx6e98rt6pdf1rjakq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgwx6e98rt6pdf1rjakq.png" alt=" " width="710" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;ls&lt;/code&gt; – List files and directories&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Shows files and folders in a directory.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob8iv5kx0mxufmvlcx9k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob8iv5kx0mxufmvlcx9k.png" alt=" " width="526" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ls -la&lt;/code&gt;: detailed listing that includes hidden files&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezrxzbx7ezu405t411qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezrxzbx7ezu405t411qw.png" alt=" " width="738" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;touch&lt;/code&gt; – Create an empty file&lt;/strong&gt;&lt;br&gt;
Creates a new empty file.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat1egle9fx9v756qg4hs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat1egle9fx9v756qg4hs.png" alt=" " width="525" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;cp&lt;/code&gt; – Copy files or directories&lt;/strong&gt;&lt;br&gt;
Copies files or folders from one location to another.&lt;/p&gt;

&lt;p&gt;E.g. here we copy our new empty file into the TestDirectory folder. With copying, the original file remains.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bh15hq1xnfpe8inrlwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bh15hq1xnfpe8inrlwx.png" alt=" " width="709" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;mv&lt;/code&gt; – Move or rename files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Moves files or renames them.&lt;/p&gt;

&lt;p&gt;Example: Here we have created a new file testfile2.py and used the mv command to move it to the TestDirectory. As you can see, with mv the file does not remain in the previous directory.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkuyzd42lcjzo4eyrr2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkuyzd42lcjzo4eyrr2m.png" alt=" " width="682" height="707"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;rm&lt;/code&gt; – Remove files or directories&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deletes files or folders.&lt;br&gt;
&lt;code&gt;rm testfile.py&lt;/code&gt;: deletes the file testfile.py&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7646pf3dz5j39y8rmckw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7646pf3dz5j39y8rmckw.png" alt=" " width="569" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example: Here we have created a directory and then deleted it using the &lt;code&gt;rm -r&lt;/code&gt; command&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18x2aq1fgz9bvpxqwnh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18x2aq1fgz9bvpxqwnh9.png" alt=" " width="582" height="407"&gt;&lt;/a&gt;&lt;/p&gt;
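Both forms of `rm` can be tried safely in a temporary directory. Remember there is no recycle bin, so double-check paths before running these anywhere else.

```shell
# Throwaway directory: everything deleted here is disposable
tmp=$(mktemp -d)
cd "$tmp"

touch testfile.py
rm testfile.py         # deletes a single file

mkdir TestDirectory
rm -r TestDirectory    # -r (recursive) is needed to delete a directory
```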

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Deleted files do not go to a recycle bin. They are completely removed from the computer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;cat&lt;/code&gt; – View file contents&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Displays the contents of a file.&lt;/p&gt;

&lt;p&gt;Example: Here we are using &lt;code&gt;cat&lt;/code&gt; to see the contents of the file testfile.py&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbzh90285axb1hdfhgjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbzh90285axb1hdfhgjh.png" alt=" " width="693" height="110"&gt;&lt;/a&gt;&lt;/p&gt;
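A minimal sketch: write one line into a file with a redirect, then print it back with `cat`. The content is made up for illustration.

```shell
tmp=$(mktemp -d)
cd "$tmp"

printf 'print("hello")\n' > testfile.py   # put one line into the file
cat testfile.py                           # displays: print("hello")
```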

&lt;p&gt;&lt;strong&gt;Creating a user on Linux&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Who is a user on Linux?&lt;/strong&gt; A user on Linux is an account that represents a person, service, or process allowed to log in and interact with the system. Each user has a unique ID (UID), owns files and processes, and is granted specific permissions that control what they can access or modify on the system.&lt;/p&gt;

&lt;p&gt;Command to create a user: &lt;code&gt;sudo adduser 'username'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Example: Here we are creating a user named TestUser&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffb97papmkslygkbcg20x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffb97papmkslygkbcg20x.png" alt=" " width="731" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check whether the user has been created, we use the &lt;code&gt;id&lt;/code&gt; command&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefhpvwxd6m016xkvbzin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefhpvwxd6m016xkvbzin.png" alt=" " width="746" height="74"&gt;&lt;/a&gt;&lt;/p&gt;
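Creating a user requires root, so `adduser` is shown only as a comment below; the `id` part runs unprivileged against your own account. The username testuser is hypothetical.

```shell
# User creation needs root privileges -- the general shape is:
#   sudo adduser testuser     # create the account interactively
#   id testuser               # verify it exists (UID, GID, groups)

# Without root, you can still inspect the account you are logged in as:
id -un   # your username
id -u    # your numeric user ID (UID)
id -Gn   # the groups you belong to
```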

&lt;p&gt;To switch to the user profile you have created, use the command &lt;code&gt;su 'username'&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo374gsvao9xw0v2z8syi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo374gsvao9xw0v2z8syi.png" alt=" " width="502" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvr7kej8zq06zw9nb2g3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvr7kej8zq06zw9nb2g3.png" alt=" " width="704" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you try to perform actions with the new user, you will get this notification&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5zfu7k87v8sg9dg1mwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5zfu7k87v8sg9dg1mwq.png" alt=" " width="704" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What is required now is to give the user privileges to perform these actions by adding them to a user group that has those privileges/rights. We do this using the command &lt;code&gt;sudo usermod -aG&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gd2van1f6in8b3io7bz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gd2van1f6in8b3io7bz.png" alt=" " width="737" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating and Editing files on Linux&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
There are two commonly used text editors on Linux: &lt;code&gt;nano&lt;/code&gt; and &lt;code&gt;vim&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Nano&lt;/strong&gt;&lt;br&gt;
Going back to the file we created named testfile.py, let's edit it now.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nano 'filename'&lt;/code&gt;: using this command, we open the file and can write/edit its contents.&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof9j9cqh4q44xi39sk0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof9j9cqh4q44xi39sk0r.png" alt=" " width="699" height="66"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are taken to this interface where we can write into the file or make changes to the existing contents&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faddckoofkgdii4s9ehac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faddckoofkgdii4s9ehac.png" alt=" " width="726" height="790"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the bottom of the editor, there are guides:&lt;br&gt;&lt;br&gt;
&lt;code&gt;Ctrl + O&lt;/code&gt; then Enter: saves the file&lt;br&gt;&lt;br&gt;
&lt;code&gt;Ctrl + U&lt;/code&gt;: paste a line&lt;br&gt;&lt;br&gt;
&lt;code&gt;Ctrl + W&lt;/code&gt;: search for a word within the contents of the file&lt;br&gt;&lt;br&gt;
&lt;code&gt;Ctrl + X&lt;/code&gt;: exit the editor&lt;/p&gt;
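nano is interactive, so a transcript cannot show it directly, but the same edit (open testfile.py, type a line, Ctrl + O to save, Ctrl + X to exit) can be mimicked non-interactively with a heredoc. This is a stand-in sketch for scripting, not a nano feature.

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Equivalent of: nano testfile.py, type the line, Ctrl+O, Enter, Ctrl+X
cat > testfile.py <<'EOF'
print("edited without opening an editor")
EOF

cat testfile.py   # confirm what was written
```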

&lt;p&gt;&lt;strong&gt;2. Vim&lt;/strong&gt;&lt;br&gt;
To create a file using vim, we use the command &lt;code&gt;vim 'filename'&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7kud5alvzskm0dtzf91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7kud5alvzskm0dtzf91.png" alt=" " width="672" height="73"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are then taken to an interface where we edit/write into the file&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4iap8sjlia5loje2gbws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4iap8sjlia5loje2gbws.png" alt=" " width="544" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To write into the file, we press &lt;code&gt;i&lt;/code&gt; to enter insert mode, where we can write/edit the contents of the file&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgnsjuxaq85ipb80vzoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgnsjuxaq85ipb80vzoi.png" alt=" " width="524" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After writing into the file, we press the Esc key to exit insert mode.&lt;/p&gt;

&lt;p&gt;To save the file, we type &lt;code&gt;:w&lt;/code&gt; and press Enter.&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctao8bvrxqfmcwdceaid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctao8bvrxqfmcwdceaid.png" alt=" " width="469" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To exit vim, we type &lt;code&gt;:q&lt;/code&gt; (or &lt;code&gt;:q!&lt;/code&gt; to discard unsaved changes)&lt;br&gt;
To save and exit vim at the same time, we combine the two and use &lt;code&gt;:wq&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can then use the &lt;code&gt;cat&lt;/code&gt; command to confirm the contents have been successfully written and saved to the file we created.&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf6d5232m5ycxc69l3i9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf6d5232m5ycxc69l3i9.png" alt=" " width="680" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Having learned the commands above, you are now familiar with the core Linux skills essential for data engineering. You can confidently navigate the filesystem using the terminal, create and manage files and directories, and read from and write to files (tasks that are fundamental when working with data pipelines, configuration files, and scripts). These skills provide a strong foundation for operating data engineering tools and platforms that run primarily on Linux environments.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>dataengineering</category>
      <category>vim</category>
      <category>nano</category>
    </item>
    <item>
      <title>Git Bash (Pull and Push code, track changes and version control)</title>
      <dc:creator>GeraldM</dc:creator>
      <pubDate>Sat, 17 Jan 2026 19:42:28 +0000</pubDate>
      <link>https://forem.com/geraldm/git-bash-pull-and-push-code-track-changes-and-version-control-e8p</link>
      <guid>https://forem.com/geraldm/git-bash-pull-and-push-code-track-changes-and-version-control-e8p</guid>
      <description>&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitHub account&lt;/li&gt;
&lt;li&gt;Git bash application installed&lt;/li&gt;
&lt;li&gt;Git bash connected to GitHub&lt;/li&gt;
&lt;li&gt;An IDE, e.g. Visual Studio Code&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What is Git Bash?
&lt;/h3&gt;

&lt;p&gt;Git Bash is a command-line application for Windows that provides a Unix-like terminal environment designed to work seamlessly with Git in development workflows. It allows developers to run commands to push, pull, commit or clone to and from remote repositories such as those hosted on GitHub.&lt;br&gt;
It acts as a local interface where developers sync the code they manage locally (&lt;em&gt;code on your local machine&lt;/em&gt;) with remote repositories hosted on GitHub (&lt;em&gt;code stored on GitHub&lt;/em&gt;). &lt;/p&gt;

&lt;h3&gt;
  
  
  What is Git?
&lt;/h3&gt;

&lt;p&gt;Git is a version control system that is used to track changes in source code. &lt;br&gt;
Git is the underlying technology that runs locally on a developer's computer, while GitHub is an online platform that hosts Git repositories and adds collaboration features on top of them. &lt;br&gt;
Using this version control system (Git), developers can create repositories, make commits to record code changes, create branches to develop features independently, and merge those changes back into the main code-base. These Git-managed changes can then be pushed to or pulled from GitHub, allowing teams to share code, back it up remotely, and collaborate on projects in an organized, controlled manner.&lt;/p&gt;
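The repository/commit/history cycle described above can be tried end-to-end in a scratch directory. Repository name, identity and commit message below are placeholders.

```shell
tmp=$(mktemp -d)
cd "$tmp"

git init -q demo && cd demo
git config user.email "you@example.com"   # an identity is required for commits
git config user.name  "Your Name"

echo 'print("hi")' > main.py
git add main.py                    # stage the new file
git commit -q -m "Add main.py"     # record the change in history
git log --oneline                  # shows one line per commit
```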

&lt;h3&gt;
  
  
  Why is version control important?
&lt;/h3&gt;

&lt;p&gt;I am sure you've heard of or experienced incidents where software stopped working because someone added (pushed) new code to it, and the fix was to revert to the old version. Well, this is where version control comes in. With version control, we can track and manage changes made to files: what was changed, when the change was made, and who made it, and, when needed, revert to an earlier version. This enables safe collaboration among multiple people working on the same project, prevents accidental loss of work, and lets us roll back if mistakes or bugs are introduced.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Push code to GitHub
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Create a local repository (code on our local machine)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddhtin7iov58fjo7pwcd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddhtin7iov58fjo7pwcd.png" alt=" " width="800" height="264"&gt;&lt;/a&gt;&lt;br&gt;
On your file explorer, create a new folder&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcm8o518tfi9o2yd22ol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcm8o518tfi9o2yd22ol.png" alt=" " width="800" height="623"&gt;&lt;/a&gt;&lt;br&gt;
Launch your IDE and navigate to the folder we've just created by clicking on File and selecting the Open Folder option, and you will land in the folder&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsa7qc916zxdnq4zh74g3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsa7qc916zxdnq4zh74g3.png" alt=" " width="508" height="461"&gt;&lt;/a&gt;&lt;br&gt;
By hovering over the folder name, you will see a new-file icon. Click on it to create a file and save it with a .py extension&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e3m59cf0r2ttlsah4pq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e3m59cf0r2ttlsah4pq.png" alt=" " width="800" height="364"&gt;&lt;/a&gt;&lt;br&gt;
Write some content into the file. With that now, we have a python code file on our local machine.&lt;/p&gt;

&lt;p&gt;Now we want to push this code to a remote repository on GitHub.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuso7zu6qn01lya7d7cdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuso7zu6qn01lya7d7cdp.png" alt=" " width="800" height="258"&gt;&lt;/a&gt;&lt;br&gt;
Navigate to your GitHub account, Repositories section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7c3ohmiqg912diyg0i3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7c3ohmiqg912diyg0i3.png" alt=" " width="800" height="355"&gt;&lt;/a&gt;&lt;br&gt;
Click the green button named New. It takes you to the page for creating a new repository, where we give the repository a name and then create it by clicking the Create repository button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F154t34xh12rg27o42men.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F154t34xh12rg27o42men.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
The repository is created, and GitHub provides us with the commands to use to push our code into it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feimxmev4yxq80j370pvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feimxmev4yxq80j370pvv.png" alt=" " width="800" height="277"&gt;&lt;/a&gt;&lt;br&gt;
Open Git Bash and navigate to the folder we created. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ph280024f6y0xvsui3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ph280024f6y0xvsui3k.png" alt=" " width="800" height="348"&gt;&lt;/a&gt;&lt;br&gt;
Following the commands GitHub gave us, we start with &lt;code&gt;git init&lt;/code&gt; to initialize Git in this local repository. Then we run &lt;code&gt;git add .&lt;/code&gt; to add all files, or &lt;code&gt;git add filename.ext&lt;/code&gt; to add a specific file, to the staging area. Run &lt;code&gt;git status&lt;/code&gt; to check the status of your repository at this stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyogwq2cqj4r19m832s2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyogwq2cqj4r19m832s2i.png" alt=" " width="800" height="208"&gt;&lt;/a&gt;&lt;br&gt;
Now run the next command, &lt;code&gt;git commit -m "add a comment here"&lt;/code&gt;, to commit your code. Then run &lt;code&gt;git branch -M main&lt;/code&gt; to rename the current branch (here we rename it to main). &lt;br&gt;
Then we run the command &lt;code&gt;git remote add origin "link from GitHub"&lt;/code&gt; to connect our local repository to GitHub&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiq5ofqhs3vjz7jbyqqxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiq5ofqhs3vjz7jbyqqxz.png" alt=" " width="800" height="178"&gt;&lt;/a&gt;&lt;br&gt;
We can now push our code using the command &lt;code&gt;git push -u origin main&lt;/code&gt;. This pushes our code to the main branch of our GitHub repository&lt;/p&gt;
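You can rehearse this exact push sequence without a GitHub account by letting a local bare repository stand in for GitHub; only the URL passed to `git remote add origin` differs from the real workflow. All paths and names below are made up.

```shell
tmp=$(mktemp -d)
cd "$tmp"

git init -q --bare remote.git          # stand-in for the GitHub repository

git init -q project && cd project
git config user.email "you@example.com"
git config user.name  "Your Name"
echo 'print("hi")' > main.py

git add .
git commit -q -m "first commit"
git branch -M main                     # rename the current branch to main
git remote add origin ../remote.git    # normally the HTTPS link from GitHub
git push -q -u origin main             # push main and set it as upstream
```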

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z27vngqndls6byh5nx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z27vngqndls6byh5nx6.png" alt=" " width="800" height="296"&gt;&lt;/a&gt;&lt;br&gt;
When we now check our repository on GitHub, the file that we just pushed appears on the main branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivss7boj3smrqgdhfbij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivss7boj3smrqgdhfbij.png" alt=" " width="800" height="136"&gt;&lt;/a&gt;&lt;br&gt;
By running the command &lt;code&gt;git log&lt;/code&gt; we can see details about the commit we made: the unique hash assigned to the commit, its author, the date it was made, and the message provided during the commit.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Pull code from GitHub
&lt;/h3&gt;

&lt;p&gt;Now we have pushed code to GitHub. But how can we pull code from GitHub?&lt;/p&gt;

&lt;p&gt;Let's use a scenario where you have a friend, and you tell them about this cool project that you are working on and that you would like them to collaborate with you. How will you share the code that you have already written with them, given that you have already pushed your project to GitHub? &lt;br&gt;
&lt;strong&gt;Method 1: Clone&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp8rv7vjqfns5uchj0mt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp8rv7vjqfns5uchj0mt.png" alt=" " width="800" height="292"&gt;&lt;/a&gt;&lt;br&gt;
You will navigate to your GitHub account and into the project repository you pushed your code to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcrbvfhpdrtl88966x6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcrbvfhpdrtl88966x6b.png" alt=" " width="800" height="417"&gt;&lt;/a&gt;&lt;br&gt;
Click the green Code button and a drop down will appear. Here, copy the HTTPS link provided and share it with your friend.&lt;/p&gt;

&lt;p&gt;Since this will be the first time your friend is working on the project and they do not have any of the code yet, they will clone the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyawebc2kh5q7lpisoox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyawebc2kh5q7lpisoox.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;&lt;br&gt;
Using the link that you shared with them, they run the command &lt;code&gt;git clone "link"&lt;/code&gt;. This command downloads the code, creates a folder named after your repository on your friend's local machine, and automatically sets the remote origin to the repository on GitHub.&lt;br&gt;
Now your friend has the same code as you, and you can collaborate on the project.&lt;/p&gt;
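`git clone` behaves the same whether it is given an HTTPS link or a local path, so the sharing step can be sketched offline. The repository contents here are invented.

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Build a small repository to stand in for your project on GitHub
git init -q original && cd original
git config user.email "you@example.com"
git config user.name  "Your Name"
echo 'print("hi")' > main.py
git add . && git commit -q -m "first commit"
cd ..

# Your friend would paste the HTTPS link here instead of a local path
git clone -q original project-copy
ls project-copy/main.py                      # the code arrived
git -C project-copy remote get-url origin    # origin was set automatically
```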

&lt;p&gt;When your friend makes changes to the code and you would like to see them on your machine, you do the following:&lt;br&gt;
&lt;strong&gt;Method 2: Pull&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figupwjmntv1u05dc1k8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figupwjmntv1u05dc1k8q.png" alt=" " width="800" height="237"&gt;&lt;/a&gt;&lt;br&gt;
Using Git Bash, navigate into the repository on your local machine. Then run the &lt;code&gt;git branch&lt;/code&gt; command to confirm that you are on the correct branch (the one the changes were made to).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6t2zdcoa7k43lsfgzih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6t2zdcoa7k43lsfgzih.png" alt=" " width="800" height="233"&gt;&lt;/a&gt;&lt;br&gt;
After verifying that you are on the correct branch, run the command &lt;code&gt;git pull&lt;/code&gt; to fetch all updates made to the repository and merge them into your local repository. Or run the command &lt;code&gt;git pull origin "branch name"&lt;/code&gt; to only pull updates made to a specific branch, for example a branch named development.&lt;/p&gt;
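The whole push-then-pull round trip can be simulated with two clones of one local bare repository standing in for GitHub; Alice and Bob are placeholder collaborators.

```shell
tmp=$(mktemp -d)
cd "$tmp"

git init -q --bare remote.git          # stand-in for GitHub

# Alice creates the project and pushes the first commit
git clone -q remote.git alice
( cd alice
  git config user.email "alice@example.com"
  git config user.name  "Alice"
  echo 'print("v1")' > main.py
  git add . && git commit -q -m "v1"
  git push -q -u origin HEAD )

# Bob clones it, so both start from the same code
git clone -q remote.git bob
git -C bob config user.email "bob@example.com"
git -C bob config user.name  "Bob"

# Alice pushes an update...
( cd alice
  echo 'print("v2")' > main.py
  git commit -q -am "v2"
  git push -q )

# ...and Bob fetches and merges it with a plain git pull
git -C bob pull -q
grep v2 bob/main.py                    # Bob now has the update
```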

&lt;h3&gt;
  
  
  How to track changes using Git
&lt;/h3&gt;

&lt;p&gt;Now we have you and your friend collaborating on a project, meaning you will be working on the same code together. How can you track what changes were made?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjt14ea419uj1nlyj6t8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjt14ea419uj1nlyj6t8.png" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
Using the command &lt;code&gt;git log&lt;/code&gt;, we can track when commits were made, who made them, and why they were made, based on the commit message.&lt;/p&gt;

&lt;p&gt;Now let's say that after making changes to your code, you realize that the new piece of code you added is causing your software to fail. What do you do?&lt;br&gt;
Checking the log, you can see the new commits. Next to every commit there is a number called a hash, and it's what we use to return our repository to the state it was in before a commit was made.&lt;/p&gt;
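Here is the revert flow in miniature: two commits, then `git revert` on the hash of the bad one, which creates a new commit that undoes it. File names and messages are invented.

```shell
tmp=$(mktemp -d)
cd "$tmp"

git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name  "Your Name"

echo 'print("stable")' > main.py
git add . && git commit -q -m "good commit"

echo 'broken code' > bug.py
git add . && git commit -q -m "bad commit"

bad=$(git rev-parse HEAD)          # the hash shown by git log
git revert --no-edit "$bad"        # new commit that undoes the bad one

ls                                 # bug.py is gone; main.py survives
git log --oneline                  # the revert is recorded as a commit
```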

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm5g8rz51d1p915ryaoq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm5g8rz51d1p915ryaoq.png" alt=" " width="800" height="448"&gt;&lt;/a&gt;&lt;br&gt;
At the moment, here is the state of our repository. After the last commit was made, we realized that our software has a bug. Now we want to revert to the previous commit, which we are sure has no bugs or other issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz69y30rc7gtjl2rdwt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz69y30rc7gtjl2rdwt9.png" alt=" " width="800" height="109"&gt;&lt;/a&gt;&lt;br&gt;
Using the command &lt;code&gt;git revert "hash"&lt;/code&gt;, where "hash" is the hash of the commit we want to undo, Git creates a new commit that undoes that commit's changes. In this case, it restores the repository to the state before our file was created, and Git prints a message informing us that the file has been deleted.&lt;/p&gt;
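&lt;p&gt;Here is a minimal, self-contained sketch of that revert workflow (the file names are hypothetical). &lt;code&gt;HEAD&lt;/code&gt; is used as a shorthand for the hash of the most recent commit:&lt;/p&gt;

```shell
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "stable code" > app.txt
git add app.txt && git commit -qm "Working version"

# This commit introduces the file we will later decide was a mistake
echo "buggy code" > feature.txt
git add feature.txt && git commit -qm "Add buggy feature"

# `git revert` creates a new commit that undoes the given commit;
# --no-edit keeps the auto-generated "Revert ..." commit message
git revert --no-edit HEAD

ls   # feature.txt is gone again; app.txt is untouched
```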

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwzmr6c3sd8v4jrvdnie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwzmr6c3sd8v4jrvdnie.png" alt=" " width="800" height="572"&gt;&lt;/a&gt;&lt;br&gt;
Using our code editor, we accept the change that deletes the file which existed at that point in time, and then commit the changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kcjbhdfqq6nytimxfvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kcjbhdfqq6nytimxfvp.png" alt=" " width="800" height="154"&gt;&lt;/a&gt;&lt;br&gt;
To update a remote repository on GitHub, run the command &lt;code&gt;git push -u origin main&lt;/code&gt;, which pushes your new commits to the remote repository so that it stays in sync with your local one.&lt;/p&gt;
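&lt;p&gt;The push step can be sketched entirely locally by using a bare repository in place of GitHub (the paths below are hypothetical; with GitHub, the &lt;code&gt;origin&lt;/code&gt; URL would be your repository's HTTPS or SSH address):&lt;/p&gt;

```shell
set -e
# A bare repository stands in for the remote on GitHub
remote=$(mktemp -d)
git init -q --bare "$remote"

work=$(mktemp -d)
cd "$work"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "code" > main.txt
git add main.txt && git commit -qm "Initial commit"
git branch -M main

# Register the remote and push; -u makes origin/main the upstream,
# so future pushes can be a plain `git push`
git remote add origin "$remote"
git push -q -u origin main
```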

&lt;h4&gt;
  
  
  Having done all this, you now understand what Git is, how it is used to push and pull code from remote repositories, how it enables collaboration, and how it can be used for version control.
&lt;/h4&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>versioncontrol</category>
      <category>pushandpull</category>
    </item>
  </channel>
</rss>
