<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Iniko</title>
    <description>The latest articles on Forem by Iniko (@iniko).</description>
    <link>https://forem.com/iniko</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3452262%2F8f74f0e0-4cc5-47b5-a179-eb667fa7389d.png</url>
      <title>Forem: Iniko</title>
      <link>https://forem.com/iniko</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/iniko"/>
    <language>en</language>
    <item>
      <title>AWS IoT Core Explained for DevOps &amp; Cloud Engineers</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Mon, 22 Dec 2025 05:45:53 +0000</pubDate>
      <link>https://forem.com/iniko/aws-iot-core-explained-for-devops-cloud-engineers-3285</link>
      <guid>https://forem.com/iniko/aws-iot-core-explained-for-devops-cloud-engineers-3285</guid>
      <description>&lt;p&gt;As IoT adoption grows rapidly, managing millions (or even billions) of connected devices securely and at scale becomes a major challenge. AWS IoT Core solves this problem by acting as a reliable bridge between devices and the cloud.&lt;br&gt;
In this post, let’s look at AWS IoT Core from a DevOps and cloud-lifecycle perspective.&lt;br&gt;
🔹 What is AWS IoT Core?&lt;br&gt;
AWS IoT Core is a fully managed cloud service that allows IoT devices—such as sensors, smart appliances, vehicles, and industrial machines—to securely connect and exchange data with cloud applications and other devices.&lt;br&gt;
AWS handles the heavy lifting of authentication, message routing, scalability, and availability, so teams can focus on building solutions instead of managing infrastructure.&lt;br&gt;
✅ Best for: Connecting billions of IoT devices and processing trillions of messages reliably.&lt;br&gt;
🔹 Key Features&lt;br&gt;
🔐 Secure Device Communication&lt;br&gt;
Mutual authentication using X.509 certificates&lt;br&gt;
Encrypted communication with TLS&lt;br&gt;
📡 Message Broker&lt;br&gt;
Supports MQTT, MQTT over WebSockets, and HTTPS&lt;br&gt;
Designed for low latency and high throughput&lt;br&gt;
⚙️ Rules Engine&lt;br&gt;
Filter, transform, and route data to AWS services like:&lt;br&gt;
AWS Lambda&lt;br&gt;
Amazon S3&lt;br&gt;
DynamoDB&lt;br&gt;
Amazon Kinesis&lt;br&gt;
🪞 Device Shadow&lt;br&gt;
Maintains a virtual copy of device state&lt;br&gt;
Enables applications to interact with devices even when they are offline&lt;br&gt;
🔗 AWS Service Integration&lt;br&gt;
Works seamlessly with analytics, monitoring, and AI/ML services&lt;br&gt;
🔹 AWS Category / Cloud Domain&lt;br&gt;
Cloud Domain: Internet of Things (IoT), Cloud Computing&lt;br&gt;
AWS Category: IoT Services, Application Integration&lt;br&gt;
🔹 Pricing Model&lt;br&gt;
AWS IoT Core follows a pay-as-you-go pricing model:&lt;br&gt;
Charged per million messages&lt;br&gt;
Charges for device connectivity&lt;br&gt;
Pricing for Rules Engine executions&lt;br&gt;
Pricing for Device Shadow operations&lt;br&gt;
✔ No upfront cost&lt;br&gt;
✔ No minimum commitment&lt;br&gt;
✔ Scales automatically with usage&lt;br&gt;
🔹 Common Use Cases&lt;br&gt;
Smart Home Systems&lt;br&gt;
Industrial IoT (IIoT)&lt;br&gt;
Fleet &amp;amp; Asset Tracking&lt;br&gt;
Smart Cities&lt;br&gt;
Healthcare Monitoring&lt;br&gt;
Energy &amp;amp; Utilities Management&lt;br&gt;
🔹 Final Thoughts&lt;br&gt;
AWS IoT Core is a powerful and scalable service for building secure, cloud-connected IoT solutions. From smart homes to industrial automation, it fits naturally into modern DevOps and cloud architectures, enabling teams to process real-time data without worrying about infrastructure management.&lt;br&gt;
If you're building IoT systems on AWS, IoT Core is the foundation you should start with.&lt;/p&gt;
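To make the Device Shadow idea concrete, here is a minimal Python sketch of building a classic-shadow update document and its MQTT topic. The {"state": {"reported": ...}} document shape and the $aws/things/THING_NAME/shadow/update topic follow AWS's documented shadow format; the thing name and sensor fields are invented examples, and the actual publish call (shown commented) would additionally need AWS credentials and an IoT data endpoint.

```python
import json

def shadow_update(reported_state):
    # Classic-shadow update document: report the device's current state.
    return {"state": {"reported": reported_state}}

def shadow_topic(thing_name):
    # Reserved MQTT topic for updating a named thing's classic shadow.
    return "$aws/things/{}/shadow/update".format(thing_name)

payload = shadow_update({"temperature": 22.5, "firmware": "1.4.2"})
topic = shadow_topic("factory-sensor-01")

# Actually publishing requires AWS credentials and an IoT endpoint, e.g.:
#   import boto3
#   boto3.client("iot-data").publish(topic=topic, qos=1,
#                                    payload=json.dumps(payload))
print(topic)
print(json.dumps(payload))
```

Applications can then read the shadow to interact with the device's last reported state even while the device itself is offline.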

</description>
      <category>webdev</category>
      <category>cloud</category>
      <category>python</category>
      <category>aws</category>
    </item>
    <item>
      <title>DevSecOps Periodic Table — Tool “Fe”</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Mon, 22 Dec 2025 04:54:16 +0000</pubDate>
      <link>https://forem.com/iniko/devsecops-periodic-table-tool-fe-p0k</link>
      <guid>https://forem.com/iniko/devsecops-periodic-table-tool-fe-p0k</guid>
      <description>&lt;p&gt;🧠 Overview of SonarQube&lt;br&gt;
In the DevSecOps Periodic Table, SonarQube stands as a key security &amp;amp; quality tool that helps teams continuously inspect code quality and identify vulnerabilities early in the software delivery pipeline. It analyzes source code to detect bugs, code smells, and security flaws — giving developers actionable insights before code moves further downstream.&lt;br&gt;
⭐ Key Features&lt;br&gt;
Static Code Analysis: Automatically scans code for bugs and vulnerabilities without executing it.&lt;br&gt;
Quality Gates: Lets teams enforce standards before merging or releasing code.&lt;br&gt;
Multi-Language Support: Works with languages such as Java, C#, JavaScript, Python, Go, and more.&lt;br&gt;
Security Reports &amp;amp; Trends: Offers dashboards showing issue trends over time.&lt;br&gt;
Integration with CI/CD: Embeds easily in pipelines (Jenkins, GitHub Actions, GitLab CI, etc.).&lt;br&gt;
These features make SonarQube indispensable in DevSecOps, where shifting left on security and code quality is essential.&lt;br&gt;
🔧 How It Fits into DevOps/DevSecOps&lt;br&gt;
SonarQube supports the DevSecOps philosophy of integrating security into development, not treating it as an afterthought. By performing early static analysis as part of build and CI/CD workflows, it reduces the risk of security defects and technical debt reaching production.&lt;br&gt;
In classic DevOps you focus on fast builds and deployments — but without built-in checks, quality and security can slip. SonarQube ensures that speed doesn’t come at the cost of safety. Its reports can block deployments if critical issues are found, enforcing security policies and quality standards automatically.&lt;br&gt;
🧾 Programming Language&lt;br&gt;
SonarQube is language-agnostic — it analyzes many programming languages rather than being tied to one. The platform itself is written in Java, and it provides analyzers (via plugins) for virtually all major languages used in modern DevSecOps stacks.&lt;br&gt;
🏢 Parent Company&lt;br&gt;
SonarQube is developed by SonarSource, a Swiss company founded in 2008 focused on continuous code quality and security tools.&lt;br&gt;
💸 Open Source or Paid?&lt;br&gt;
Open Source: SonarQube has a free Community Edition that covers basic static analysis and quality checks.&lt;br&gt;
Paid (Commercial Editions): Developer, Enterprise, and Data Center Editions offer advanced rules, broader language support, deeper security testing (SAST), and better enterprise governance features.&lt;br&gt;
💡 Most teams start with the free version and upgrade as their DevSecOps maturity grows.&lt;/p&gt;
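As a concrete starting point for the CI/CD integration described above, a minimal sonar-project.properties might look like the following. The project key, paths, and server URL are placeholders to adapt; the analysis token is normally injected as a CI secret rather than stored in the file.

```properties
# Project identity shown in the SonarQube UI (values are placeholders)
sonar.projectKey=my-service
sonar.projectName=My Service
# Where the scanner looks for source code and tests
sonar.sources=src
sonar.tests=tests
# SonarQube server to report results to (token comes from the CI environment)
sonar.host.url=http://localhost:9000
```

With this file in the repository root, running the sonar-scanner CLI in a pipeline step is enough to trigger analysis and evaluate the project's quality gate.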

</description>
      <category>beginners</category>
      <category>career</category>
      <category>cloud</category>
      <category>developer</category>
    </item>
    <item>
      <title>Indexing, Hashing &amp; Query Optimization in DBMS</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Wed, 08 Oct 2025 09:37:15 +0000</pubDate>
      <link>https://forem.com/iniko/indexing-hashing-query-optimization-in-dbms-1id0</link>
      <guid>https://forem.com/iniko/indexing-hashing-query-optimization-in-dbms-1id0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxt0aqdul3c856yqcp4k.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxt0aqdul3c856yqcp4k.webp" alt=" " width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Databases don’t just store data — they must retrieve it efficiently. As table sizes grow into the thousands, millions, or more, naive queries (scanning every row) become too slow. That’s where indexing and hashing come in — they act like shortcuts into your data. Combined with smart query design, they dramatically speed up lookups.&lt;/p&gt;

&lt;p&gt;Let’s break down how these concepts work, when to use them, and what trade-offs to watch out for.&lt;/p&gt;

&lt;p&gt;What Is Indexing?&lt;/p&gt;

&lt;p&gt;An index is a data structure built over a table (or one or more columns) to help the database find rows faster. Rather than scanning every row, the DBMS can use the index to jump directly to relevant records.&lt;/p&gt;

&lt;p&gt;It’s analogous to the index at the back of a book — it doesn’t contain the full content, but pointers to where things live.&lt;/p&gt;

&lt;p&gt;Types of Indexes&lt;/p&gt;

&lt;p&gt;Primary Index: Automatically associated with the table’s primary key.&lt;/p&gt;

&lt;p&gt;Secondary Index: A manually created index on a non-key column to speed up queries.&lt;/p&gt;

&lt;p&gt;Clustering Index: Controls the physical order of data in the table (rows are stored in index order).&lt;/p&gt;

&lt;p&gt;Non-Clustering Index: Separate structure; the table’s rows aren’t physically stored in index order, but index entries point to them.&lt;/p&gt;

&lt;p&gt;Most relational databases use B-Tree or B+-Tree variants for indexes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19bh1ivosaz7yovadrbt.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19bh1ivosaz7yovadrbt.webp" alt=" " width="614" height="715"&gt;&lt;/a&gt;&lt;br&gt;
B-Tree: Both internal and leaf nodes can store keys and data pointers.&lt;/p&gt;

&lt;p&gt;B+-Tree: Internal nodes only store keys (no direct data pointers); actual data pointers (or record references) live in the leaf nodes. Leaf nodes are often linked to each other, improving range scans (you can traverse leaf nodes sequentially).&lt;/p&gt;

&lt;p&gt;Because of this design, B+-Tree indexes are especially good for range queries (e.g. BETWEEN, &amp;lt;, &amp;gt;) and sorted access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cdcx9v2sv37hk1vzqou.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cdcx9v2sv37hk1vzqou.webp" alt=" " width="800" height="656"&gt;&lt;/a&gt;&lt;br&gt;
Hash Indexing (or Hash-Based Indexing)&lt;/p&gt;

&lt;p&gt;A hash index uses a hash function: a key is passed through the hash function, and it maps to a bucket (or slot). Records with that key go into that bucket (or chain). Lookups compute the hash and go directly to the appropriate bucket.&lt;/p&gt;

&lt;p&gt;Strength: Excellent for equality searches (column = value).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfjo5ylt0x8u572180zw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfjo5ylt0x8u572180zw.webp" alt=" " width="800" height="644"&gt;&lt;/a&gt;&lt;br&gt;
Weakness: Poor for range queries or sorted operations (you can’t easily traverse “next bucket” in order).&lt;/p&gt;

&lt;p&gt;So when your queries often ask “find rows matching this exact key?”, a hash index shines. But for queries like “find rows between A and B”, a B+-Tree style index is far better.&lt;/p&gt;

&lt;p&gt;Examples: Students Table&lt;/p&gt;

&lt;p&gt;Suppose you have:&lt;/p&gt;

&lt;p&gt;CREATE TABLE Students (&lt;br&gt;
  roll_no INT PRIMARY KEY,&lt;br&gt;
  name VARCHAR(100),&lt;br&gt;
  age INT,&lt;br&gt;
  grade CHAR(1)&lt;br&gt;
);&lt;/p&gt;

&lt;p&gt;You might do:&lt;/p&gt;

&lt;p&gt;-- Default B-Tree index (often automatic for primary key)&lt;br&gt;
CREATE INDEX idx_roll_btree ON Students (roll_no);&lt;/p&gt;

&lt;p&gt;-- If your database supports hash indexing:&lt;br&gt;
CREATE INDEX idx_roll_hash ON Students USING HASH (roll_no);&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqvqsgdxm9l757k9rhvi.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqvqsgdxm9l757k9rhvi.webp" alt=" " width="800" height="667"&gt;&lt;/a&gt;&lt;br&gt;
Then:&lt;/p&gt;

&lt;p&gt;SELECT * FROM Students WHERE roll_no = 50;&lt;br&gt;
→ both a B-Tree and a hash index perform well, though the hash index may be faster for an exact match.&lt;/p&gt;

&lt;p&gt;SELECT * FROM Students WHERE roll_no BETWEEN 10 AND 100;&lt;br&gt;
→ B-Tree / B+-Tree index is much better suited, because entries are stored in sorted order.&lt;/p&gt;

&lt;p&gt;Choosing Between B-Tree and Hash Index&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Query Type&lt;/th&gt;&lt;th&gt;Best Choice&lt;/th&gt;&lt;th&gt;Reason&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Exact match (=)&lt;/td&gt;&lt;td&gt;Hash or B-Tree&lt;/td&gt;&lt;td&gt;Hash is optimal for equality lookups&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Range queries (BETWEEN, &amp;lt;, &amp;gt;)&lt;/td&gt;&lt;td&gt;B-Tree / B+-Tree&lt;/td&gt;&lt;td&gt;Maintains sorted order for efficient traversal&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Sequential scans / ordered output&lt;/td&gt;&lt;td&gt;B+-Tree&lt;/td&gt;&lt;td&gt;Leaf nodes are linked, enabling fast in-order scanning&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2yfo7dhxez7qm2qs3za.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2yfo7dhxez7qm2qs3za.webp" alt=" " width="800" height="670"&gt;&lt;/a&gt;&lt;br&gt;
Other Considerations / Trade-Offs&lt;/p&gt;

&lt;p&gt;Storage Overhead: Every index consumes additional disk/memory space.&lt;/p&gt;

&lt;p&gt;Insert / Update / Delete Cost: More indexes = slower writes, because you must update all relevant indexes.&lt;/p&gt;

&lt;p&gt;Low Cardinality Columns: Don’t index columns with very few distinct values (e.g. gender, status) — the benefit is minimal.&lt;/p&gt;

&lt;p&gt;Index Fragmentation / Maintenance: Over time, indexes may become less efficient and need rebuilding, reorganization, or defragmentation.&lt;/p&gt;

&lt;p&gt;Query Optimization &amp;amp; Index Strategy&lt;/p&gt;

&lt;p&gt;Indexing is one of the most powerful levers for speeding up queries, but it must go hand-in-hand with how you write your SQL and how the database executes it.&lt;/p&gt;

&lt;p&gt;Use the EXPLAIN (or EXPLAIN ANALYZE / EXPLAIN PLAN) command to see how your database is executing queries. It shows whether an index is being used, how many rows are being scanned, and more.&lt;/p&gt;
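As a quick, runnable illustration (using SQLite's EXPLAIN QUERY PLAN as a stand-in, since the exact command and plan wording vary by engine), this Python sketch shows the planner using an index for an indexed column and falling back to a full scan otherwise. The table follows the earlier Students example; the index name is invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Students (roll_no INTEGER PRIMARY KEY, "
            "name TEXT, age INTEGER, grade TEXT)")
con.execute("CREATE INDEX idx_name ON Students (name)")
con.executemany("INSERT INTO Students VALUES (?, ?, ?, ?)",
                [(i, "student{}".format(i), 18 + i % 5, "A") for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN is SQLite's counterpart to EXPLAIN / EXPLAIN PLAN.
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

# Indexed column: the planner searches the index instead of scanning.
indexed = plan("SELECT roll_no FROM Students WHERE name = 'student42'")
# Unindexed column: the planner falls back to a full table scan.
scanned = plan("SELECT roll_no FROM Students WHERE grade = 'B'")
print(indexed)
print(scanned)
```

Reading plans like these before and after adding an index is the fastest way to confirm the index is actually being used.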

&lt;p&gt;Avoid SELECT * — fetch only the columns you need (less I/O).&lt;/p&gt;

&lt;p&gt;Use composite indexes (multi-column) when your queries often filter on multiple columns together (e.g. WHERE col1 = X AND col2 = Y).&lt;/p&gt;
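A composite index can be verified the same way; in this sketch (again using SQLite's EXPLAIN QUERY PLAN, with an invented orders table) a single two-column index serves a query filtering on both columns together.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer_id INTEGER, status TEXT, total REAL)")
# One composite index covers queries that filter on both columns together.
con.execute("CREATE INDEX idx_cust_status ON orders (customer_id, status)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i % 50, "open" if i % 3 else "shipped", i * 1.5)
                 for i in range(300)])

plan = " ".join(str(r) for r in con.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders "
    "WHERE customer_id = 7 AND status = 'open'"))
print(plan)
```

Note that column order matters: an index on (customer_id, status) also helps queries filtering on customer_id alone, but not queries filtering only on status.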

&lt;p&gt;Keep your indexes lean and tuned. Don’t over-index — too many indexes slow down writes and consume space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dscmxbqeduoyw8581z9.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dscmxbqeduoyw8581z9.webp" alt=" " width="800" height="702"&gt;&lt;/a&gt;&lt;br&gt;
Final Thoughts&lt;/p&gt;

&lt;p&gt;Indexing and hashing are essential tools for building performant database systems. They are not silver bullets but, when used smartly, can transform a slow system into a responsive one.&lt;/p&gt;

&lt;p&gt;Start by analyzing your most frequent, slowest queries. Add or adjust indexes based on patterns. Measure impact with EXPLAIN and query profiling. Over time you'll develop instincts for which columns — and which kinds of indexing (B-Tree, hash, composite) — will yield the best performance.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>beginners</category>
      <category>architecture</category>
      <category>database</category>
    </item>
    <item>
      <title>Transactions, Deadlocks &amp; Log-Based Recovery in DBMS</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Wed, 08 Oct 2025 09:34:16 +0000</pubDate>
      <link>https://forem.com/iniko/transactions-deadlocks-log-based-recovery-in-dbms-3jok</link>
      <guid>https://forem.com/iniko/transactions-deadlocks-log-based-recovery-in-dbms-3jok</guid>
      <description>&lt;p&gt;Transactions, Deadlocks &amp;amp; Log-Based Recovery in DBMS&lt;/p&gt;

&lt;p&gt;When working with databases, you must ensure that operations remain safe, consistent, and recoverable. In this article, I’ll explain:&lt;/p&gt;

&lt;p&gt;What a transaction is, and how atomicity and rollback work&lt;/p&gt;

&lt;p&gt;How deadlocks happen and how to handle them&lt;/p&gt;

&lt;p&gt;What log-based recovery means and how it ensures durability&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsz9o3fat7mdmqctp1vc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsz9o3fat7mdmqctp1vc.webp" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Transactions: Atomicity &amp;amp; Rollback&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A transaction is a unit of work comprising one or more operations (such as UPDATE, INSERT, DELETE). The key property is atomicity — the transaction must complete entirely or not at all.&lt;/p&gt;

&lt;p&gt;If something inside the transaction fails (due to constraint violation, runtime error, etc.), none of the changes should persist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcth1p3f73l19x7hapbz.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcth1p3f73l19x7hapbz.webp" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;br&gt;
The database can roll back the transaction, undoing all partial changes.&lt;/p&gt;

&lt;p&gt;Example Workflow&lt;/p&gt;

&lt;p&gt;Start transaction&lt;/p&gt;

&lt;p&gt;Run some updates (e.g. deduct money, add to another account)&lt;/p&gt;

&lt;p&gt;If all succeed → commit&lt;/p&gt;

&lt;p&gt;If any step fails → rollback&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4sjd5wmaycuju058ydx.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4sjd5wmaycuju058ydx.webp" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;br&gt;
This ensures that you never end up in some halfway state with only a few operations applied.&lt;/p&gt;
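The workflow above can be sketched with Python's built-in sqlite3 module. The accounts table and amounts are invented for illustration; the connection's context manager supplies the commit-on-success / rollback-on-error behavior, and a CHECK constraint stands in for "a step fails".

```python
import sqlite3

con = sqlite3.connect(":memory:")
# CHECK constraint: balances may never go negative.
con.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, "
            "balance INTEGER CHECK (balance >= 0))")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 50)])
con.commit()

def transfer(src, dst, amount):
    try:
        with con:  # commits on success, rolls back if anything raises
            con.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                        (amount, src))
            con.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                        (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False  # the debit violated the CHECK; both updates were undone

ok = transfer("A", "B", 60)     # succeeds and commits
bad = transfer("A", "B", 500)   # debit would go negative: rolled back
balances = dict(con.execute("SELECT id, balance FROM accounts"))
print(ok, bad, balances)
```

After the failed transfer the balances are exactly what they were after the successful one: no halfway state survives.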

&lt;ol&gt;
&lt;li&gt;Deadlocks&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A deadlock occurs when two (or more) transactions hold locks on resources that the other needs, and neither can proceed or release its lock. Each waits for the other — a cycle of waiting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pu9995z12jb5ijtu1lj.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pu9995z12jb5ijtu1lj.webp" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;br&gt;
Deadlock example scenario&lt;/p&gt;

&lt;p&gt;Transaction T1 locks row A, then requests a lock on row B&lt;/p&gt;

&lt;p&gt;Transaction T2 locks row B, then requests a lock on row A&lt;/p&gt;

&lt;p&gt;They both wait indefinitely for each other → deadlock&lt;/p&gt;

&lt;p&gt;In many DBMS (like MySQL, PostgreSQL), a deadlock is detected automatically. The system will abort one of the transactions with an error like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68pkq6bywqaigudd5m1c.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68pkq6bywqaigudd5m1c.webp" alt=" " width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To handle deadlocks:&lt;/p&gt;

&lt;p&gt;Use shorter transactions (keep locks held for minimal time)&lt;/p&gt;

&lt;p&gt;Access objects in a consistent order in different transactions&lt;/p&gt;

&lt;p&gt;Retry aborted transactions&lt;/p&gt;
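A common application-side pattern for the last point is a small retry wrapper. This sketch simulates the aborted transaction with a custom exception, since the real exception type (e.g. a driver error carrying MySQL's 1213 / SQLSTATE 40001) depends on the database driver in use.

```python
class DeadlockError(Exception):
    """Stands in for a driver error such as MySQL 1213 / SQLSTATE 40001."""

def run_with_retry(txn, attempts=3):
    # Re-run an aborted transaction a bounded number of times.
    for attempt in range(attempts):
        try:
            return txn()
        except DeadlockError:
            if attempt == attempts - 1:
                raise
            # A real implementation would sleep with backoff before retrying.

calls = {"n": 0}

def flaky_transfer():
    # Fails once (as the chosen deadlock victim), then succeeds on retry.
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError("chosen as deadlock victim")
    return "committed"

result = run_with_retry(flaky_transfer)
print(result)
```

Bounding the retries matters: if the same deadlock keeps recurring, surfacing the error is better than looping forever.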

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lrf2ua43bpxfm7l6pt6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lrf2ua43bpxfm7l6pt6.webp" alt=" " width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log-Based Recovery&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To ensure durability and recoverability after crashes or failures, modern DBMS maintain transaction logs (also called write-ahead logs). These logs record each change (before it’s applied) so the system can:&lt;/p&gt;

&lt;p&gt;Undo incomplete or uncommitted changes&lt;/p&gt;

&lt;p&gt;Redo committed changes during recovery&lt;/p&gt;

&lt;p&gt;How it works (simplified)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t1gj8g625o4jmdwkzx5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t1gj8g625o4jmdwkzx5.webp" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
Before modifying data pages, write a record of the intended change to the log.&lt;/p&gt;

&lt;p&gt;Apply the change to the data pages.&lt;/p&gt;

&lt;p&gt;On commit, mark in the log that the transaction is committed.&lt;/p&gt;

&lt;p&gt;On recovery (after crash), scan the log:&lt;/p&gt;

&lt;p&gt;Undo all uncommitted transactions&lt;/p&gt;

&lt;p&gt;Redo the committed ones&lt;/p&gt;

&lt;p&gt;This ensures that even if the system crashes after a commit, the change is persisted. Likewise, half-done transactions are rolled back.&lt;/p&gt;
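The undo/redo idea can be modeled in a few lines of Python. This is a toy in-memory log for intuition only: a real DBMS writes log records to stable storage before touching data pages, and adds checkpoints, log sequence numbers, and much more.

```python
# Toy write-ahead log: each record notes the transaction, key,
# old value (for undo), and new value (for redo).
log = []
data = {}

def write(txn, key, new):
    log.append({"txn": txn, "key": key, "old": data.get(key), "new": new})
    data[key] = new

def commit(txn):
    log.append({"txn": txn, "commit": True})

def recover():
    committed = {r["txn"] for r in log if r.get("commit")}
    # Undo pass: strip changes of transactions that never committed.
    for r in reversed(log):
        if "key" in r and r["txn"] not in committed:
            if r["old"] is None:
                data.pop(r["key"], None)
            else:
                data[r["key"]] = r["old"]
    # Redo pass: reapply committed changes in log order.
    for r in log:
        if "key" in r and r["txn"] in committed:
            data[r["key"]] = r["new"]

write("T1", "x", 10); commit("T1")
write("T2", "y", 99)          # T2 never commits before the "crash"
recover()
print(data)
```

After recovery, the committed write from T1 survives and the uncommitted write from T2 has been undone.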

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwgtxuqvbvrqsmolnr39.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwgtxuqvbvrqsmolnr39.webp" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;br&gt;
Summary &amp;amp; Takeaways&lt;/p&gt;

&lt;p&gt;A transaction groups multiple operations into a single unit: either it all succeeds or it all fails (atomicity).&lt;/p&gt;

&lt;p&gt;Deadlocks arise when transactions block each other; DBMS detect them and abort one transaction, but application logic should minimize them.&lt;/p&gt;

&lt;p&gt;Log-based recovery (using a transaction log) is vital for ensuring durability and allowing rollback or redo during system recovery.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>architecture</category>
      <category>database</category>
      <category>design</category>
    </item>
    <item>
      <title>ACID Properties in DBMS</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Wed, 08 Oct 2025 09:27:57 +0000</pubDate>
      <link>https://forem.com/iniko/acid-properties-in-dbms-flp</link>
      <guid>https://forem.com/iniko/acid-properties-in-dbms-flp</guid>
      <description>&lt;p&gt;ACID Properties in DBMS: Ensuring Reliable Database Transactions&lt;/p&gt;

&lt;p&gt;Databases are the backbone of nearly every application—from banking software to e-commerce platforms. When multiple operations happen concurrently or failures occur, how do we maintain correct, reliable data? That’s where the ACID properties come in: Atomicity, Consistency, Isolation, and Durability.&lt;/p&gt;

&lt;p&gt;What is ACID?&lt;/p&gt;

&lt;p&gt;ACID is a set of properties that guarantee that database transactions are processed reliably, even in the presence of concurrency, crashes, or errors. Let’s go through each:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Atomicity&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A transaction must either complete in full or not at all.&lt;br&gt;
If any part of the transaction fails, the database rolls back to its prior state, ensuring partial changes aren’t committed.&lt;/p&gt;

&lt;p&gt;Example: Transferring money between two accounts&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9svopnelu3ygpd3xcao5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9svopnelu3ygpd3xcao5.webp" alt=" " width="800" height="509"&gt;&lt;/a&gt;&lt;br&gt;
Debit from Account A&lt;/p&gt;

&lt;p&gt;Credit to Account B&lt;br&gt;
If the debit succeeds but the credit fails, the entire operation is rolled back so that no money is lost or duplicated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v7436sy8rap7flb0qnn.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v7436sy8rap7flb0qnn.webp" alt=" " width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Consistency&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The database must move from one valid state to another, preserving all defined rules, constraints, triggers, etc.&lt;br&gt;
After a transaction, all integrity rules must still be valid.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70pqrabf2wtv17s9b0di.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70pqrabf2wtv17s9b0di.webp" alt=" " width="800" height="471"&gt;&lt;/a&gt;&lt;br&gt;
Example: If there’s a rule that a student's age must be ≥ 18, a transaction that attempts to insert a student with age 16 should not succeed. The transaction must either be fully valid or rejected.&lt;/p&gt;
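The age rule can be enforced exactly this way with a CHECK constraint; here is a minimal sqlite3 sketch (the table and names are invented for illustration).

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The age rule from the example, enforced as a CHECK constraint.
con.execute("CREATE TABLE students (name TEXT, age INTEGER CHECK (age >= 18))")

def enroll(name, age):
    try:
        with con:  # commits on success, rolls back on constraint violation
            con.execute("INSERT INTO students VALUES (?, ?)", (name, age))
        return True
    except sqlite3.IntegrityError:
        return False

ok = enroll("Asha", 19)    # valid state: accepted
bad = enroll("Ravi", 16)   # violates the rule: rejected, nothing stored
count = con.execute("SELECT COUNT(*) FROM students").fetchone()[0]
print(ok, bad, count)
```

The database moves only between valid states: the invalid insert is rejected as a whole rather than leaving a rule-breaking row behind.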

&lt;ol&gt;
&lt;li&gt;Isolation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Concurrent transactions should not interfere with each other’s intermediate steps.&lt;br&gt;
One transaction should not see the partial, uncommitted results of another.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrlnqy85t2j2vmhqpng1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrlnqy85t2j2vmhqpng1.webp" alt=" " width="800" height="495"&gt;&lt;/a&gt;&lt;br&gt;
Problems prevented:&lt;/p&gt;

&lt;p&gt;Dirty reads: reading data from another transaction before it's committed&lt;/p&gt;

&lt;p&gt;Non-repeatable reads: data changes between reads in the same transaction&lt;/p&gt;

&lt;p&gt;Phantom reads: new rows appear in subsequent reads&lt;/p&gt;

&lt;p&gt;Different isolation levels (e.g. Read Uncommitted, Read Committed, Repeatable Read, Serializable) help manage tradeoffs between performance and strictness.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdotajr168xldewxg8vi.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdotajr168xldewxg8vi.webp" alt=" " width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Durability&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once a transaction commits, its changes must persist—even if the system crashes right after.&lt;br&gt;
This means changes must reach non-volatile storage (e.g. disk), with write-ahead logging, backups, and recovery mechanisms ensuring durability.&lt;/p&gt;

&lt;p&gt;How It All Fits Together (Simplified Example)&lt;/p&gt;

&lt;p&gt;Imagine you have a Loans table in your database. You want to update loan records within a transaction:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmlclqb4ez306h3yfa90.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmlclqb4ez306h3yfa90.webp" alt=" " width="800" height="543"&gt;&lt;/a&gt;&lt;br&gt;
Atomicity: If updating one loan fails, revert all updates done in the transaction.&lt;/p&gt;

&lt;p&gt;Consistency: Ensure that the updated interest rate falls within allowed business bounds or constraints.&lt;/p&gt;

&lt;p&gt;Isolation: Prevent other concurrent transactions from interfering (e.g. they shouldn’t see half-updated data).&lt;/p&gt;

&lt;p&gt;Durability: After the commit, the new values survive system crashes, reboots, or hardware failures.&lt;/p&gt;
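
The atomicity part of the loan example above can be sketched with Python's built-in sqlite3 module (the Loans table and Rate column here are illustrative, not taken from the article's screenshots): a failure mid-transaction rolls every update back.

```python
# Hedged sketch: atomicity with sqlite3; table/column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Loans (LoanID INTEGER PRIMARY KEY, Rate REAL)")
conn.executemany("INSERT INTO Loans VALUES (?, ?)", [(1, 5.0), (2, 6.0)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE Loans SET Rate = 5.5 WHERE LoanID = 1")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

# Atomicity: the partial update was rolled back, so the old rate survives.
rate = conn.execute("SELECT Rate FROM Loans WHERE LoanID = 1").fetchone()[0]
print(rate)  # 5.0
```

The `with conn:` block is what gives all-or-nothing behavior: either every statement inside it commits, or none do.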

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqg35458w3mqhkihokdcd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqg35458w3mqhkihokdcd.webp" alt=" " width="800" height="558"&gt;&lt;/a&gt;&lt;br&gt;
Why ACID Matters&lt;/p&gt;

&lt;p&gt;Data integrity under concurrency: Multiple users or processes can act on data without corrupting state.&lt;/p&gt;

&lt;p&gt;Fault tolerance: Partial failures don’t leave the system in an inconsistent state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hva4p1qydlrn36t86tk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hva4p1qydlrn36t86tk.webp" alt=" " width="800" height="549"&gt;&lt;/a&gt;&lt;br&gt;
Trust: For systems handling critical data (finance, medical records, etc.), ACID properties are foundational to reliability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4u4z30jfm3v7wihikap.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4u4z30jfm3v7wihikap.webp" alt=" " width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y59n9qpbw5s2l7cbbvd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y59n9qpbw5s2l7cbbvd.webp" alt=" " width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v9621qxb1q12t1ov799.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v9621qxb1q12t1ov799.webp" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>architecture</category>
      <category>database</category>
      <category>learning</category>
    </item>
    <item>
      <title>SQL Cursors &amp; Triggers</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Wed, 08 Oct 2025 07:16:17 +0000</pubDate>
      <link>https://forem.com/iniko/sql-cursors-triggers-1d7f</link>
      <guid>https://forem.com/iniko/sql-cursors-triggers-1d7f</guid>
      <description>&lt;p&gt;SQL Cursors &amp;amp; Triggers: What They Are and When to Use Them&lt;/p&gt;

&lt;p&gt;In SQL, you often run set-based operations (e.g. SELECT, UPDATE, DELETE) over many rows at once. But what if you need to treat each row individually? Or automatically react to changes in your data? That’s where Cursors and Triggers enter the picture.&lt;/p&gt;

&lt;p&gt;Let’s dive into each, see examples, and explore when to use one over the other.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Cursors: Row-by-Row Processing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A cursor is like a pointer that lets you step through the result set of a query, one row at a time. In scenarios where you need procedural logic or conditional steps per row, a cursor can be very useful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhp894sbbifor5tbcpg3i.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhp894sbbifor5tbcpg3i.webp" alt=" " width="800" height="617"&gt;&lt;/a&gt;&lt;br&gt;
Example&lt;/p&gt;

&lt;p&gt;Suppose you have a table of employees and you want to process those whose salary is greater than 50,000:&lt;/p&gt;

&lt;p&gt;DECLARE @empId INT, @salary DECIMAL(10,2)&lt;/p&gt;

&lt;p&gt;DECLARE emp_cursor CURSOR FOR&lt;br&gt;
  SELECT EmployeeID, Salary&lt;br&gt;
  FROM Employees&lt;br&gt;
  WHERE Salary &amp;gt; 50000&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0di1fe53tubyakafr1rj.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0di1fe53tubyakafr1rj.webp" alt=" " width="800" height="638"&gt;&lt;/a&gt;&lt;br&gt;
OPEN emp_cursor;&lt;/p&gt;

&lt;p&gt;FETCH NEXT FROM emp_cursor INTO @empId, @salary;&lt;/p&gt;

&lt;p&gt;WHILE @@FETCH_STATUS = 0&lt;br&gt;
BEGIN&lt;br&gt;
  -- Example logic: give a bonus or update something&lt;br&gt;
  PRINT 'Employee ' + CAST(@empId AS VARCHAR) + ' has salary ' + CAST(@salary AS VARCHAR)&lt;/p&gt;

&lt;p&gt;FETCH NEXT FROM emp_cursor INTO @empId, @salary;&lt;br&gt;
END&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3lsm4vt0jciep3ilcno.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3lsm4vt0jciep3ilcno.webp" alt=" " width="800" height="626"&gt;&lt;/a&gt;&lt;br&gt;
CLOSE emp_cursor;&lt;br&gt;
DEALLOCATE emp_cursor;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4e3lez7nh3rcvrrr75a.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4e3lez7nh3rcvrrr75a.webp" alt=" " width="800" height="631"&gt;&lt;/a&gt;&lt;br&gt;
Here, the cursor:&lt;/p&gt;

&lt;p&gt;Declares which result set to iterate over.&lt;/p&gt;

&lt;p&gt;Opens it.&lt;/p&gt;

&lt;p&gt;Fetches rows one by one.&lt;/p&gt;

&lt;p&gt;Lets you run custom logic per row.&lt;/p&gt;

&lt;p&gt;Finally closes and deallocates the resource.&lt;/p&gt;

&lt;p&gt;When to Use Cursors&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9p3wktq8frnza1hlufj.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9p3wktq8frnza1hlufj.webp" alt=" " width="800" height="646"&gt;&lt;/a&gt;&lt;br&gt;
Use a cursor when:&lt;/p&gt;

&lt;p&gt;You need to perform row-level operations that can’t be done easily in a set operation.&lt;/p&gt;

&lt;p&gt;Your logic depends on the value from the previous row, or you need sequential access.&lt;/p&gt;

&lt;p&gt;You must mix complex procedural logic with SQL data.&lt;/p&gt;

&lt;p&gt;However, be cautious: cursors are usually slower and more resource-intensive than equivalent set-based queries, especially on large datasets, because each row is fetched and processed individually.&lt;/p&gt;
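
The same row-by-row pattern as the T-SQL cursor above can be sketched with Python's sqlite3 module, where iterating a cursor plays the role of OPEN / FETCH NEXT (the Employees data here is made up for illustration):

```python
# Hedged sketch: row-by-row processing with a sqlite3 cursor; sample data invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (EmployeeID INTEGER, Salary REAL)")
conn.executemany("INSERT INTO Employees VALUES (?, ?)",
                 [(1, 45000), (2, 60000), (3, 75000)])

# The cursor steps through the result set one row at a time,
# letting you apply per-row procedural logic.
cur = conn.execute(
    "SELECT EmployeeID, Salary FROM Employees WHERE Salary > 50000")
processed = []
for emp_id, salary in cur:
    processed.append(f"Employee {emp_id} has salary {salary}")
print(processed)
```

Exhausting the iterator is the analogue of looping until @@FETCH_STATUS is nonzero; Python closes the cursor for you when it is garbage-collected.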

&lt;ol&gt;
&lt;li&gt;Triggers: Automate Actions on Data Changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A trigger is a database object that defines automatic behavior when certain events occur — e.g. INSERT, UPDATE, or DELETE. Once you set it up, it “listens” for those events and runs code accordingly.&lt;/p&gt;

&lt;p&gt;Example: AFTER INSERT Trigger for Auditing&lt;/p&gt;

&lt;p&gt;Let’s say you want to audit insertions into a Students table by copying new rows into a separate Student_Audit table:&lt;/p&gt;

&lt;p&gt;-- Audit table&lt;br&gt;
CREATE TABLE Student_Audit (&lt;br&gt;
  StudentID INT,&lt;br&gt;
  Name VARCHAR(100),&lt;br&gt;
  InsertedOn DATETIME DEFAULT GETDATE()&lt;br&gt;
);&lt;/p&gt;

&lt;p&gt;-- Trigger definition&lt;br&gt;
CREATE TRIGGER trg_AfterInsert_Students&lt;br&gt;
ON Students&lt;br&gt;
AFTER INSERT&lt;br&gt;
AS&lt;br&gt;
BEGIN&lt;br&gt;
  INSERT INTO Student_Audit (StudentID, Name)&lt;br&gt;
  SELECT i.StudentID, i.Name&lt;br&gt;
  FROM inserted AS i;&lt;br&gt;
END;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbhkx00co48slv55wkv2.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbhkx00co48slv55wkv2.webp" alt=" " width="800" height="624"&gt;&lt;/a&gt;&lt;br&gt;
Now, whenever a new student record is inserted into Students, the trigger will fire automatically and log the inserted data into Student_Audit.&lt;/p&gt;
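
SQLite supports the same idea, so you can try the audit trigger without a SQL Server instance. Note one difference: SQLite triggers fire once per row and reference the new row as NEW, instead of T-SQL's set-based inserted pseudo-table.

```python
# Hedged sketch: the AFTER INSERT audit trigger, translated to SQLite syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Students (StudentID INTEGER, Name TEXT);
CREATE TABLE Student_Audit (
  StudentID INTEGER,
  Name TEXT,
  InsertedOn TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TRIGGER trg_AfterInsert_Students
AFTER INSERT ON Students
BEGIN
  INSERT INTO Student_Audit (StudentID, Name) VALUES (NEW.StudentID, NEW.Name);
END;
""")

conn.execute("INSERT INTO Students VALUES (1, 'Alice')")
# The trigger fired automatically and copied the row into the audit table.
audit = conn.execute("SELECT StudentID, Name FROM Student_Audit").fetchall()
print(audit)  # [(1, 'Alice')]
```

No application code ever touched Student_Audit; the database enforced the audit on its own.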

&lt;p&gt;When to Use Triggers&lt;/p&gt;

&lt;p&gt;Triggers are suited when:&lt;/p&gt;

&lt;p&gt;You want automatic logging, auditing, or enforcing rules at the database layer.&lt;/p&gt;

&lt;p&gt;You want to centralize certain logic so that it can’t be bypassed (e.g., always enforce referential integrity or soft deletes).&lt;/p&gt;

&lt;p&gt;You need to react to changes in data without requiring the application to do so.&lt;/p&gt;

&lt;p&gt;Be careful, though: triggers can introduce hidden complexity and unexpected side effects if not managed carefully (e.g. cascading or nested triggers, or performance overhead added to every write).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpg0qtyphcsg0du3fho2r.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpg0qtyphcsg0du3fho2r.webp" alt=" " width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Summary &amp;amp; Best Practices&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Cursor: process rows one at a time; use for complex per-row logic and sequential operations.&lt;br&gt;
Trigger: automate reactions to data changes; use for auditing and enforcing business rules.&lt;/p&gt;

&lt;p&gt;Key takeaways:&lt;/p&gt;

&lt;p&gt;Use cursors when you absolutely need fine-grained control on each record.&lt;/p&gt;

&lt;p&gt;Use triggers when you want automatic reactions at the database level.&lt;/p&gt;

&lt;p&gt;Prefer set-based SQL operations when possible, for performance and readability.&lt;/p&gt;

&lt;p&gt;Monitor and test performance, especially for triggers or cursor loops on large tables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu75papl7v13mfj5htuj.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu75papl7v13mfj5htuj.webp" alt=" " width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>database</category>
      <category>learning</category>
      <category>backend</category>
    </item>
    <item>
      <title>Database Normalization Made Simple: From 1NF to 3NF</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Wed, 08 Oct 2025 03:33:25 +0000</pubDate>
      <link>https://forem.com/iniko/database-normalization-made-simple-from-1nf-to-3nf-4hab</link>
      <guid>https://forem.com/iniko/database-normalization-made-simple-from-1nf-to-3nf-4hab</guid>
      <description>&lt;p&gt;🧠 Database Normalization Made Simple: From 1NF to 3NF&lt;/p&gt;

&lt;p&gt;If you’ve ever managed a database where updating one piece of information breaks everything — congratulations, you’ve experienced data anomalies.&lt;br&gt;
The solution? Normalization — a systematic way to structure your database so it’s efficient, consistent, and free from redundancy.&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk through the journey from 1NF → 2NF → 3NF, breaking down what each normal form means and why it matters.&lt;/p&gt;

&lt;p&gt;💡 What Is Database Normalization?&lt;/p&gt;

&lt;p&gt;Normalization is the process of organizing data in a relational database to minimize redundancy and dependency issues.&lt;br&gt;
In simpler terms, it ensures your data is clean and consistent.&lt;/p&gt;

&lt;p&gt;Why normalize?&lt;/p&gt;

&lt;p&gt;✅ Avoid duplicate data&lt;/p&gt;

&lt;p&gt;✅ Prevent update, insert, and delete anomalies&lt;/p&gt;

&lt;p&gt;✅ Maintain data integrity&lt;/p&gt;

&lt;p&gt;✅ Make querying and maintenance easier&lt;/p&gt;

&lt;p&gt;⚠️ The Problem With Unnormalized Data&lt;/p&gt;

&lt;p&gt;Imagine you start with one giant table storing students, their courses, and instructors — all in the same place:&lt;/p&gt;

&lt;p&gt;StudentID   StudentName Course  Instructor  Phone&lt;br&gt;
1   Alice   DBMS    Prof. Raj   555-1001&lt;br&gt;
2   Bob DBMS    Prof. Raj   555-1001&lt;br&gt;
1   Alice   Networks    Prof. Kim   555-2002&lt;/p&gt;

&lt;p&gt;At first, it looks fine. But soon, issues appear:&lt;/p&gt;

&lt;p&gt;Insertion anomaly: You can’t add a new student until they enroll in a course.&lt;/p&gt;

&lt;p&gt;Update anomaly: Changing Prof. Raj’s phone number means updating it in multiple rows.&lt;/p&gt;

&lt;p&gt;Deletion anomaly: Removing Alice’s last course also deletes all her info.&lt;/p&gt;

&lt;p&gt;Let’s fix this, step by step.&lt;/p&gt;

&lt;p&gt;🧩 First Normal Form (1NF): Atomic Data&lt;/p&gt;

&lt;p&gt;Rule: Each cell should hold only one value — no lists, no repeating groups.&lt;/p&gt;

&lt;p&gt;Before 1NF:&lt;br&gt;
Student Courses&lt;br&gt;
Alice   DBMS, Networks&lt;br&gt;
After 1NF:&lt;br&gt;
Student Course&lt;br&gt;
Alice   DBMS&lt;br&gt;
Alice   Networks&lt;/p&gt;

&lt;p&gt;Now every value is atomic, and rows are unique. Congratulations, you’re in 1NF!&lt;/p&gt;
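
If you inherit pre-1NF data, the split into atomic rows is mechanical. A small Python sketch (sample data invented) of expanding a comma-separated Courses cell into one row per course:

```python
# Hypothetical pre-1NF rows: one cell holds a comma-separated list of courses.
rows = [("Alice", "DBMS, Networks"), ("Bob", "DBMS")]

# 1NF: expand each list into atomic (student, course) rows.
atomic = [(student, course.strip())
          for student, courses in rows
          for course in courses.split(",")]
print(atomic)  # [('Alice', 'DBMS'), ('Alice', 'Networks'), ('Bob', 'DBMS')]
```

Each resulting row holds exactly one value per column, which is all 1NF asks for.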

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6kpzgukacf7ge4mghbp.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6kpzgukacf7ge4mghbp.webp" alt=" " width="800" height="621"&gt;&lt;/a&gt;&lt;br&gt;
⚙️ Second Normal Form (2NF): Full Dependency on the Key&lt;/p&gt;

&lt;p&gt;A table is in 2NF if:&lt;/p&gt;

&lt;p&gt;It’s already in 1NF, and&lt;/p&gt;

&lt;p&gt;Every non-key attribute depends on the whole primary key, not just part of it.&lt;/p&gt;

&lt;p&gt;This matters when your table has a composite key (like StudentID + CourseID).&lt;/p&gt;

&lt;p&gt;If StudentName depends only on StudentID, but not on CourseID, that’s a partial dependency — and it breaks 2NF.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrpxuaptfar60qdq9820.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrpxuaptfar60qdq9820.webp" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
Solution:&lt;/p&gt;

&lt;p&gt;Split the data into separate tables:&lt;/p&gt;

&lt;p&gt;Students → (StudentID, StudentName)&lt;/p&gt;

&lt;p&gt;Courses → (CourseID, CourseName)&lt;/p&gt;

&lt;p&gt;Enrollments → (StudentID, CourseID)&lt;/p&gt;

&lt;p&gt;Now, every non-key attribute depends on its full key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhi9mwu6q2n526xwfaty0.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhi9mwu6q2n526xwfaty0.webp" alt=" " width="800" height="588"&gt;&lt;/a&gt;&lt;br&gt;
🔁 Third Normal Form (3NF): No Transitive Dependencies&lt;/p&gt;

&lt;p&gt;A table is in 3NF if:&lt;/p&gt;

&lt;p&gt;It’s already in 2NF, and&lt;/p&gt;

&lt;p&gt;Non-key attributes depend only on the key, not on other non-key attributes.&lt;/p&gt;

&lt;p&gt;For example, suppose your Courses table looks like this:&lt;/p&gt;

&lt;p&gt;CourseID    CourseName  Instructor  Phone&lt;/p&gt;

&lt;p&gt;Here, Phone depends on Instructor, not directly on CourseID. That’s a transitive dependency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltvrblkmef0chz58f3i4.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltvrblkmef0chz58f3i4.webp" alt=" " width="800" height="622"&gt;&lt;/a&gt;&lt;br&gt;
Fix:&lt;/p&gt;

&lt;p&gt;Break it into two tables:&lt;/p&gt;

&lt;p&gt;Courses → (CourseID, CourseName, InstructorID)&lt;/p&gt;

&lt;p&gt;Instructors → (InstructorID, InstructorName, Phone)&lt;/p&gt;

&lt;p&gt;Now your database is in 3NF: clean, with no partial or transitive dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0tminibcnej40couplk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0tminibcnej40couplk.webp" alt=" " width="800" height="587"&gt;&lt;/a&gt;&lt;br&gt;
🏗️ Final Normalized Design (Up to 3NF)&lt;/p&gt;

&lt;p&gt;Here’s what the structure looks like now:&lt;/p&gt;

&lt;p&gt;Table   Columns&lt;br&gt;
Students    StudentID, StudentName&lt;br&gt;
Courses CourseID, CourseName, InstructorID&lt;br&gt;
Instructors InstructorID, InstructorName, Phone&lt;br&gt;
StudentCourses  StudentID, CourseID&lt;/p&gt;
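
The four tables above can be expressed as DDL. Here is a hedged sketch using SQLite types via Python's sqlite3 (column types are adapted; the article does not specify exact types):

```python
# Hedged sketch of the final 3NF schema; types adapted for SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Students    (StudentID INTEGER PRIMARY KEY, StudentName TEXT);
CREATE TABLE Instructors (InstructorID INTEGER PRIMARY KEY,
                          InstructorName TEXT, Phone TEXT);
CREATE TABLE Courses     (CourseID INTEGER PRIMARY KEY, CourseName TEXT,
                          InstructorID INTEGER REFERENCES Instructors(InstructorID));
CREATE TABLE StudentCourses (
  StudentID INTEGER REFERENCES Students(StudentID),
  CourseID  INTEGER REFERENCES Courses(CourseID),
  PRIMARY KEY (StudentID, CourseID)
);
""")

# Prof. Raj's phone number now lives in exactly one row:
conn.execute("INSERT INTO Instructors VALUES (1, 'Prof. Raj', '555-1001')")
tables = sorted(r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"))
print(tables)
```

Updating that phone number is now a single-row UPDATE, which is exactly the update anomaly 3NF removes.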

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3k3dct3jyeudr8ocroo.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3k3dct3jyeudr8ocroo.webp" alt=" " width="800" height="623"&gt;&lt;/a&gt;&lt;br&gt;
✅ No redundant data&lt;br&gt;
✅ No update anomalies&lt;br&gt;
✅ Easy to query and maintain&lt;/p&gt;

&lt;p&gt;🧭 Key Takeaways&lt;/p&gt;

&lt;p&gt;1NF: Make data atomic — no repeating groups.&lt;/p&gt;

&lt;p&gt;2NF: Remove partial dependencies on composite keys.&lt;/p&gt;

&lt;p&gt;3NF: Remove transitive dependencies between non-key attributes.&lt;/p&gt;

&lt;p&gt;Bonus: Real-world systems sometimes denormalize for performance, but always start with a normalized design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuw0jfza9155irc0oynv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuw0jfza9155irc0oynv.webp" alt=" " width="800" height="628"&gt;&lt;/a&gt;&lt;br&gt;
🧱 Wrapping Up&lt;/p&gt;

&lt;p&gt;Normalization might sound theoretical, but it’s one of the most practical ways to make your databases scalable and reliable.&lt;br&gt;
Whether you’re designing your first app or refactoring legacy data, 1NF → 3NF is your foundation for a healthy schema.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftba4qiyg3dybzl6zqsq1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftba4qiyg3dybzl6zqsq1.webp" alt=" " width="800" height="648"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oaka7aqr14sbjwzt3w4.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oaka7aqr14sbjwzt3w4.webp" alt=" " width="800" height="623"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoxz102ea1dlth1k1tgt.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoxz102ea1dlth1k1tgt.webp" alt=" " width="800" height="591"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk8r9wy9ghv1tqfv96ew.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk8r9wy9ghv1tqfv96ew.webp" alt=" " width="800" height="595"&gt;&lt;/a&gt;&lt;br&gt;
If you found this helpful, share it with someone who’s still fighting messy tables!&lt;br&gt;
Or drop a comment if you’d like a follow-up on BCNF and higher normal forms.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>database</category>
    </item>
    <item>
      <title>CRUD Operations in MongoDB for Beginners</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Tue, 07 Oct 2025 08:41:45 +0000</pubDate>
      <link>https://forem.com/iniko/crud-operations-in-mongodb-for-beginners-58h9</link>
      <guid>https://forem.com/iniko/crud-operations-in-mongodb-for-beginners-58h9</guid>
      <description>&lt;p&gt;CRUD Operations in MongoDB for Beginners&lt;/p&gt;


&lt;p&gt;MongoDB is a widely used NoSQL database that offers flexibility and scalability for modern applications. Unlike traditional SQL databases, MongoDB stores data as JSON-like documents, making it perfect for handling dynamic and complex data structures.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore the CRUD operations — Create, Read, Update, and Delete — in MongoDB through a simple student database example. You’ll learn how to insert, retrieve, modify, and remove records while getting hands-on with MongoDB Atlas, the cloud-based version of MongoDB.&lt;/p&gt;

&lt;p&gt;🎯 Learning Outcomes&lt;/p&gt;

&lt;p&gt;By the end of this tutorial, you’ll understand how to:&lt;/p&gt;

&lt;p&gt;Insert one or multiple documents into a collection&lt;/p&gt;

&lt;p&gt;Retrieve and filter data using queries&lt;/p&gt;

&lt;p&gt;Update single or multiple documents&lt;/p&gt;

&lt;p&gt;Delete documents based on specific conditions&lt;/p&gt;

&lt;p&gt;Apply CRUD logic in real-world development&lt;/p&gt;

&lt;p&gt;⚙ Setup: Creating Your MongoDB Cluster&lt;/p&gt;

&lt;p&gt;Create a free account on MongoDB Atlas&lt;/p&gt;

&lt;p&gt;Set up a cluster and a database named collegeDB&lt;/p&gt;

&lt;p&gt;Inside it, create a collection called students (collection names are case-sensitive, so match the name used in the queries below)&lt;/p&gt;

&lt;p&gt;Once your setup is ready, let’s dive into the CRUD operations.&lt;/p&gt;

&lt;p&gt;🟢 CREATE (INSERT)&lt;/p&gt;

&lt;p&gt;We’ll begin by inserting student data into our STUDENTS collection. Each record represents a student as a separate document.&lt;/p&gt;

&lt;p&gt;{&lt;br&gt;
  "student_id": "S001",&lt;br&gt;
  "name": "Harsh",&lt;br&gt;
  "age": 20,&lt;br&gt;
  "department": "CSBS",&lt;br&gt;
  "year": 2,&lt;br&gt;
  "cgpa": 9&lt;br&gt;
}&lt;br&gt;
{&lt;br&gt;
  "student_id": "S002",&lt;br&gt;
  "name": "ini",&lt;br&gt;
  "age": 21,&lt;br&gt;
  "department": "CSBS",&lt;br&gt;
  "year": 3,&lt;br&gt;
  "cgpa": 8.5&lt;br&gt;
}&lt;br&gt;
{&lt;br&gt;
  "student_id": "S003",&lt;br&gt;
  "name": "madh",&lt;br&gt;
  "age": 20,&lt;br&gt;
  "department": "MI",&lt;br&gt;
  "year": 3,&lt;br&gt;
  "cgpa": 8&lt;br&gt;
}&lt;br&gt;
{&lt;br&gt;
  "student_id": "S005",&lt;br&gt;
  "name": "nidh",&lt;br&gt;
  "age": 20,&lt;br&gt;
  "department": "CIVIL",&lt;br&gt;
  "year": 2,&lt;br&gt;
  "cgpa": 8.8&lt;br&gt;
}&lt;br&gt;
{&lt;br&gt;
  "student_id": "S006",&lt;br&gt;
  "name": "rohi",&lt;br&gt;
  "age": 22,&lt;br&gt;
  "department": "CSE",&lt;br&gt;
  "year": 3,&lt;br&gt;
  "cgpa": 9.2&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sjwkfpkp1f5nxprxgvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sjwkfpkp1f5nxprxgvf.png" alt=" " width="800" height="452"&gt;&lt;/a&gt;&lt;br&gt;
🔍 READ (QUERY)&lt;/p&gt;

&lt;p&gt;Let’s fetch data from our collection based on certain conditions.&lt;/p&gt;

&lt;p&gt;Find students with CGPA greater than 8:&lt;/p&gt;

&lt;p&gt;db.students.find({ cgpa: { $gt: 8 } })&lt;/p&gt;

&lt;p&gt;Find students belonging to the Computer Science department:&lt;/p&gt;

&lt;p&gt;db.students.find({ department: "CSE" })&lt;/p&gt;
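
To make the matching semantics concrete without a live Atlas cluster, here is a tiny pure-Python sketch of how these two query shapes behave. This is an illustration only, not MongoDB's real query engine, and the find helper is invented for this post:

```python
# Illustrative stand-in for db.students.find(); not MongoDB's implementation.
students = [
    {"student_id": "S001", "name": "Harsh", "cgpa": 9,   "department": "CSBS"},
    {"student_id": "S002", "name": "ini",   "cgpa": 8.5, "department": "CSBS"},
    {"student_id": "S003", "name": "madh",  "cgpa": 8,   "department": "MI"},
]

def find(docs, query):
    """Match documents against a flat query like {"cgpa": {"$gt": 8}}."""
    def matches(doc, field, cond):
        if isinstance(cond, dict) and "$gt" in cond:   # operator form
            return field in doc and doc[field] > cond["$gt"]
        return doc.get(field) == cond                  # equality form
    return [d for d in docs if all(matches(d, f, c) for f, c in query.items())]

print([d["name"] for d in find(students, {"cgpa": {"$gt": 8}})])  # ['Harsh', 'ini']
print([d["name"] for d in find(students, {"department": "MI"})])  # ['madh']
```

The first query uses the operator form ($gt), the second the plain equality form; MongoDB applies the same two shapes, field by field.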

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt7mny2kzzvl1kwsbr4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt7mny2kzzvl1kwsbr4w.png" alt=" " width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;
🛠 UPDATE&lt;/p&gt;

&lt;p&gt;We can update student details such as CGPA or department.&lt;br&gt;
For example, to update a specific student’s CGPA:&lt;/p&gt;

&lt;p&gt;db.students.updateOne(&lt;br&gt;
  { student_id: "S005" },&lt;br&gt;
  { $set: { cgpa: 9.0 } }&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesziah61gbft57fvjcc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesziah61gbft57fvjcc4.png" alt=" " width="800" height="468"&gt;&lt;/a&gt;&lt;br&gt;
❌ DELETE&lt;/p&gt;

&lt;p&gt;To remove a student record from the database, use:&lt;/p&gt;

&lt;p&gt;db.students.deleteOne({ student_id: "S004" })&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka582uhgth4cd28vs69l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka582uhgth4cd28vs69l.png" alt=" " width="800" height="453"&gt;&lt;/a&gt;&lt;br&gt;
🧾 RESULT&lt;/p&gt;

&lt;p&gt;You’ll now see every insert, query, update, and delete reflected directly in your MongoDB Atlas dashboard, making it easy to visualize each operation.&lt;/p&gt;

&lt;p&gt;🧠 CONCLUSION&lt;/p&gt;

&lt;p&gt;We’ve successfully explored CRUD operations in MongoDB using a practical student database. From inserting and querying to updating and deleting, you now have a clear understanding of how to manage data in MongoDB effectively.&lt;/p&gt;

&lt;p&gt;These CRUD fundamentals are the backbone of most applications — whether managing users, products, or student records.&lt;/p&gt;

&lt;p&gt;A special thanks to Mr. &lt;a class="mentioned-user" href="https://dev.to/santhoshnc"&gt;@santhoshnc&lt;/a&gt; for guiding and inspiring us to explore new technologies and keep learning! 🚀&lt;/p&gt;

</description>
      <category>programming</category>
      <category>beginners</category>
      <category>database</category>
      <category>sql</category>
    </item>
    <item>
      <title>STUDENT MANAGEMENT SYSTEM</title>
      <dc:creator>Iniko</dc:creator>
      <pubDate>Fri, 22 Aug 2025 09:42:29 +0000</pubDate>
      <link>https://forem.com/iniko/student-management-system-2ljb</link>
      <guid>https://forem.com/iniko/student-management-system-2ljb</guid>
      <description>&lt;p&gt;🚀 Creating SQL Queries in Oracle LiveSQL for a DBMS Assignment&lt;/p&gt;

&lt;p&gt;In this blog, I’ll walk you through how I created and executed SQL queries for my DBMS Assignment using Oracle LiveSQL. This will cover everything from table creation to running queries and verifying results with screenshots.&lt;/p&gt;

&lt;p&gt;🔑 Step 1: Setting up the Tables&lt;/p&gt;

&lt;p&gt;The assignment required creating three core tables: Students, Courses, and Enrollments. I logged into Oracle LiveSQL and used the SQL Worksheet.&lt;/p&gt;

&lt;p&gt;Here’s an example for the Enrollments table:&lt;/p&gt;

&lt;p&gt;CREATE TABLE Enrollments (&lt;br&gt;
    EnrollID NUMBER PRIMARY KEY,&lt;br&gt;
    StudentID NUMBER REFERENCES Students(StudentID),&lt;br&gt;
    CourseID NUMBER REFERENCES Courses(CourseID),&lt;br&gt;
    Grade CHAR(2)&lt;br&gt;
);&lt;/p&gt;

&lt;p&gt;✅ After running the command, Oracle confirmed that the table was created successfully.&lt;/p&gt;
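
&lt;p&gt;Note that the REFERENCES clauses above only work if Students and Courses already exist. Here is a minimal sketch of those two tables — the column names are inferred from the INSERT and SELECT statements later in this post, so your exact assignment schema may differ:&lt;/p&gt;

&lt;p&gt;CREATE TABLE Students (&lt;br&gt;
    StudentID NUMBER PRIMARY KEY,&lt;br&gt;
    Name VARCHAR2(50),&lt;br&gt;
    Dept VARCHAR2(20),&lt;br&gt;
    DOB DATE,&lt;br&gt;
    Email VARCHAR2(100)&lt;br&gt;
);&lt;/p&gt;

&lt;p&gt;CREATE TABLE Courses (&lt;br&gt;
    CourseID NUMBER PRIMARY KEY,&lt;br&gt;
    CourseName VARCHAR2(100)&lt;br&gt;
);&lt;/p&gt;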

&lt;p&gt;✍️ Step 2: Inserting Data into Students&lt;/p&gt;

&lt;p&gt;Next, I inserted sample data into the Students table. This helps later when running queries like SELECT, JOIN, and GROUP BY.&lt;/p&gt;

&lt;p&gt;INSERT INTO Students (StudentID, Name, Dept, DOB, Email)&lt;br&gt;
VALUES (1, 'Alice', 'CSE', DATE '2003-05-12', 'alice@example.com');&lt;/p&gt;

&lt;p&gt;INSERT INTO Students (StudentID, Name, Dept, DOB, Email)&lt;br&gt;
VALUES (2, 'Bob', 'ECE', DATE '2002-11-23', 'bob@example.com');&lt;/p&gt;

&lt;p&gt;INSERT INTO Students (StudentID, Name, Dept, DOB, Email)&lt;br&gt;
VALUES (3, 'Charlie', 'MECH', DATE '2004-02-01', 'charlie@example.com');&lt;/p&gt;

&lt;p&gt;Each INSERT executed successfully, and I verified it by checking the Script Output.&lt;/p&gt;

&lt;p&gt;🛠 Step 3: Altering the Students Table&lt;/p&gt;

&lt;p&gt;The assignment asked to add a PhoneNo column. I used:&lt;/p&gt;

&lt;p&gt;ALTER TABLE Students&lt;br&gt;
ADD PhoneNo NUMBER(10);&lt;/p&gt;

&lt;p&gt;✅ The column was added successfully.&lt;/p&gt;
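
&lt;p&gt;With the column in place, existing rows can be filled in with a simple UPDATE (the phone number here is a sample value, not part of the assignment data):&lt;/p&gt;

&lt;p&gt;UPDATE Students&lt;br&gt;
SET PhoneNo = 9876543210&lt;br&gt;
WHERE StudentID = 1;&lt;/p&gt;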

&lt;p&gt;🔍 Step 4: Running Queries&lt;/p&gt;

&lt;p&gt;Now for the fun part! I ran multiple queries to meet assignment requirements.&lt;/p&gt;

&lt;p&gt;a) Select with Functions&lt;/p&gt;

&lt;p&gt;SELECT UPPER(Name) AS StudentName, LENGTH(Email) AS EmailLength&lt;br&gt;
FROM Students;&lt;/p&gt;

&lt;p&gt;This returned student names in uppercase and the length of their email IDs.&lt;/p&gt;

&lt;p&gt;b) Join Operation&lt;/p&gt;

&lt;p&gt;To display students with their enrolled courses and grades:&lt;/p&gt;

&lt;p&gt;SELECT s.Name AS StudentName, c.CourseName, e.Grade&lt;br&gt;
FROM Students s&lt;br&gt;
JOIN Enrollments e ON s.StudentID = e.StudentID&lt;br&gt;
JOIN Courses c ON e.CourseID = c.CourseID;&lt;/p&gt;

&lt;p&gt;At this stage, since no enrollment data was added, the result was empty. (In practice, I would insert course and enrollment data first.)&lt;/p&gt;
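
&lt;p&gt;For completeness, this is the kind of sample data that would make the join return rows — the course name and grade below are illustrative values, not part of the assignment:&lt;/p&gt;

&lt;p&gt;INSERT INTO Courses (CourseID, CourseName)&lt;br&gt;
VALUES (101, 'Database Systems');&lt;/p&gt;

&lt;p&gt;INSERT INTO Enrollments (EnrollID, StudentID, CourseID, Grade)&lt;br&gt;
VALUES (1, 1, 101, 'A');&lt;/p&gt;

&lt;p&gt;Re-running the join after these inserts would list Alice enrolled in Database Systems with grade A.&lt;/p&gt;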

&lt;p&gt;📸 Step 5: Screenshots for Submission&lt;/p&gt;

&lt;p&gt;Throughout the process, I took screenshots of:&lt;/p&gt;

&lt;p&gt;Table creation&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizkh7mi8n6sv1b5vns2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizkh7mi8n6sv1b5vns2a.png" alt=" " width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data insertion&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffghaalqj2xt9nvynrhme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffghaalqj2xt9nvynrhme.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;br&gt;
Alter table&lt;/p&gt;

&lt;p&gt;Function queries&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftdq2jujyt77sn8bpw9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftdq2jujyt77sn8bpw9f.png" alt=" " width="800" height="377"&gt;&lt;/a&gt;&lt;br&gt;
Join queries&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwbadrv773wkraay0j5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwbadrv773wkraay0j5h.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;br&gt;
These were required as proof of execution for the assignment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4ulngligi0v5pumn3vd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4ulngligi0v5pumn3vd.png" alt=" " width="800" height="378"&gt;&lt;/a&gt;&lt;br&gt;
🎯 Conclusion&lt;/p&gt;

&lt;p&gt;Using Oracle LiveSQL made it simple to:&lt;/p&gt;

&lt;p&gt;Create and manage tables&lt;/p&gt;

&lt;p&gt;Insert and modify data&lt;/p&gt;

&lt;p&gt;Run queries and view results instantly&lt;/p&gt;

&lt;p&gt;Collect screenshots as evidence for submission&lt;/p&gt;

&lt;p&gt;This workflow not only completed my DBMS assignment but also strengthened my SQL skills through hands-on practice. 🚀&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
